Shuttered

[Image: Cabin by Zecc Architects and Roel van Norel; photo by Laura Mallonee, courtesy Dwell].

Here’s another cabin, this time by Zecc Architects and Roel van Norel for a client in the Netherlands.

[Image: Cabin by Zecc Architects and Roel van Norel; photo by Laura Mallonee, courtesy Dwell].

“Building atop the foundation of a previous greenhouse was a cost-cutting measure,” according to Dwell; this “allowed the project to be considered a renovation and thereby qualify for a temporary tax reduction. Its traditional, gabled form also pays homage to the original structure.”

[Images: Cabin by Zecc Architects and Roel van Norel; photos by Laura Mallonee, courtesy Dwell].

The shutters are awesome, I think, and the effect at night is otherworldly, like an inhabited lantern.

[Image: Cabin by Zecc Architects and Roel van Norel; photo by Laura Mallonee, courtesy Dwell].

For more photos of the project, check out Dwell or ArchDaily.

(I am under the impression that these photos were taken by Laura Mallonee, but the attribution at Dwell leaves this somewhat ambiguous; apologies if I have misattributed someone else’s work).

Lodge

[Image: The “Bjellandsbu” cabin, named after its client, by Snøhetta; photo by James Silverman, courtesy Snøhetta].

I have cabins, retreats, and small houses on the brain, and this remote Norwegian hunting lodge designed by Snøhetta, complete with green roof and local timber, is one of many recent projects that caught my eye.

[Image: “Bjellandsbu” by Snøhetta; photo by James Silverman, courtesy Snøhetta].

According to the architects, the structure is “accessible only by foot or horseback,” and apparently features enough bunk beds to sleep up to 21 people.

[Image: “Bjellandsbu” by Snøhetta; photo by James Silverman, courtesy Snøhetta].

While it might look, at first glance, like a relic of a J.R.R. Tolkien-infused 1970s counterculture, the cabin was actually completed in 2013.

[Image: “Bjellandsbu” by Snøhetta; photo by James Silverman, courtesy Snøhetta].

For more shots of the cabin in the wild, meanwhile, check out the #bjellandsbu hashtag on Instagram.

(All photos in this post by James Silverman, courtesy of Snøhetta).

Worth the Weight

In the midst of a long New York Times article about the serial theft of offensive cyberweapons from the National Security Agency, there’s a brief but interesting image. “Much of [a core N.S.A. group’s] work is labeled E.C.I., for ‘exceptionally controlled information,’ material so sensitive it was initially stored only in safes,” the article explains. “When the cumulative weight of the safes threatened the integrity of N.S.A.’s engineering building a few years ago, one agency veteran said, the rules were changed to allow locked file cabinets.”

It’s like some undiscovered Italo Calvino short story: an agency physically deformed by the gravitational implications of its secrets, its buildings now bulbous and misshapen as the literal weight of its mission continues to grow.

Nature Machine

[Image: Illustration by Benjamin Marra for the New York Times Magazine].

As part of a package of shorter articles in the New York Times Magazine exploring the future implications of self-driving vehicles—how they will affect urban design, popular culture, and even illegal drug activity—writer Malia Wollan focuses on “the end of roadkill.”

Her premise is fascinating. Wollan suggests that the precision driving enabled by self-driving vehicle technology could put an end to vehicular wildlife fatalities. Bears, deer, raccoons, panthers, squirrels—even stray pets—might all remain safe from our weapons-on-wheels. In the process, self-driving cars would become an unexpected ally for wildlife preservation efforts, with animal life potentially experiencing dramatic rebounds along rural and suburban roads. This will be both good and bad. One possible outcome sounds like a tragicomic Coen Brothers film about apocalyptic animal warfare in the American suburbs:

Every year in the United States, there are an estimated 1.5 million deer-vehicle crashes. If self-driving cars manage to give deer safe passage, the fast-reproducing species would quickly grow beyond the ability of the vegetation to sustain them. “You’d get a lot of starvation and mass die-offs,” says Daniel J. Smith, a conservation biologist at the University of Central Florida who has been studying road ecology for nearly three decades… “There will be deer in people’s yards, and there will be snipers in towns killing them,” [wildlife researcher Patricia Cramer] says.

While these are already interesting points, Wollan explains that, for this to come to pass, we will need to do something very strange. We will need to teach self-driving cars how to recognize nature.

“Just how deferential [autonomous vehicles] are toward wildlife will depend on human choices and ingenuity. For now,” she adds, “the heterogeneity and unpredictability of nature tends to confound the algorithms. In Australia, hopping kangaroos jumbled a self-driving Volvo’s ability to measure distance. In Boston, autonomous-vehicle sensors identified a flock of sea gulls as a single form rather than a collection of individual birds. Still, even the tiniest creatures could benefit. ‘The car could know: “O.K., this is a hot spot for frogs. It’s spring. It’s been raining. All the frogs will be moving across the road to find a mate,”’ Smith says. The vehicles could reroute to avoid flattening amphibians on that critical day.”
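To make the kind of "deference" Smith describes a little more concrete, here is a minimal, purely hypothetical sketch of the rule his frog example implies. The function, place names, and thresholds are invented for illustration; they are not drawn from Wollan's article or from any actual autonomous-vehicle software.

```python
from datetime import date

# Purely illustrative: a toy version of the seasonal "frog hot spot" rule
# described in the quote above. All names and thresholds are hypothetical.

def should_reroute(location: str, today: date, rained_recently: bool,
                   amphibian_hotspots: set) -> bool:
    """Defer to wildlife: avoid a road segment when frogs are likely migrating."""
    is_spring = 3 <= today.month <= 5  # rough Northern Hemisphere spring
    return location in amphibian_hotspots and is_spring and rained_recently

# Example: a known crossing point on a rainy April evening.
hotspots = {"county_road_12_wetland"}
print(should_reroute("county_road_12_wetland", date(2018, 4, 14), True, hotspots))
# -> True: the planner would route around the crossing that night.
```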

One might imagine that, seen through the metaphoric eyes of a car’s LiDAR array, all those hopping kangaroos became a single super-body, a unified, moving wave of flesh that would have seemed monstrous, lumpy, even grotesque. Machine horror.

What interests me here is that, in Wollan’s formulation, “nature” is that which remains heterogeneous and unpredictable—that which remains resistant to traditional representation and modeling—yet this is exactly what self-driving car algorithms will have to contend with, and what they will need to recognize and correct for, if we want them to avoid colliding with a nonhuman species.

In particular, I love Wollan’s use of the word “deferential.” The idea of cars acting with deference to the natural world, or to nonhuman species in general, opens up a whole other philosophical conversation. For example, what is the difference between deference and reverence, and how might we teach our fellow human beings, let alone our machines, to defer to, even to revere, the natural world? Put another way, what does it mean for a machine to “encounter” the wild?

Briefly, Wollan’s piece reminded me of Robert Macfarlane’s excellent book The Wild Places for a number of reasons. Recall that book’s central premise: the idea that wilderness is always closer than it appears. Roadside weeds, overgrown lots, urban hikes, peripheral species, the ground beneath your feet, even the walls of the house around you: these all constitute “wilderness” at a variety of scales, if only we could learn to recognize them as such. Will self-driving cars spot “nature” or “wilderness” in sites where humans aren’t conceptually prepared to see it?

The challenge of teaching a car how to recognize nature thus takes on massive and thrilling complexity here, all wrapped up in the apparently simple goal of ending roadkill. It’s about where machines end and animals begin—or perhaps how technology might begin before the end of wilderness.

In any case, Wollan’s short piece is worth reading in full—and don’t miss a much earlier feature she wrote on the subject of roadkill for the New York Times back in 2010.

The Ghost of Cognition Past, or Thinking Like An Algorithm

[Image: Wiring the ENIAC; via Wired].

One of many things I love about writing—that is, engaging in writing as an activity—is how it facilitates a discovery of connections between otherwise unrelated things. Writing reveals and even relies upon analogies, metaphors, and unexpected similarities: there is resonance between a story in the news and a medieval European folktale, say, or between a photo taken in a war-wrecked city and an 18th-century landscape painting. These sorts of relations might remain dormant or unnoticed until writing brings them to the foreground: previously unconnected topics and themes begin to interact, developing meanings not present in those original subjects on their own.

Wildfires burning in the Arctic might bring to mind infernal images from Paradise Lost or even intimations of an unwritten J.G. Ballard novel, pushing a simple tale of natural disaster to new symbolic heights, something mythic and larger than the story at hand. Learning that U.S. Naval researchers on the Gulf Coast have used the marine slime of a “300-million-year old creature” to develop 21st-century body armor might conjure images from classical mythology or even from H.P. Lovecraft: Neptunian biotech wed with Cthulhoid military terror.

In other words, writing means that one thing can be crosswired or brought into contrast with another for the specific purpose of fueling further imaginative connections, new themes to be pulled apart and lengthened, teased out to form plots, characters, and scenes.

In addition, a writer of fiction might stage an otherwise straightforward storyline in an unexpected setting, in order to reveal something new about both. It’s a hard-boiled detective thriller—set on an international space station. It’s a heist film—set at the bottom of the sea. It’s a procedural missing-person mystery—set on a remote military base in Afghanistan.

Thinking like a writer would mean asking why things have happened in this way and not another—in this place and not another—and seeing what happens when you begin to switch things around. It’s about strategic recombination.

I mention all this after reading a new essay by artist and critic James Bridle about algorithmic content generation as seen in children’s videos on YouTube. The piece is worth reading for yourself, but I wanted to highlight a few things here.

[Image: Wiring the ENIAC; via Wired].

In brief, the essay suggests that an increasingly odd, even nonsensical subcategory of children’s video is emerging on YouTube. The content of these videos, Bridle writes, comes from what he calls “keyword/hashtag association.” That is, popular keyword searches have become a stimulus for producing new videos whose content is reverse-engineered from those searches.

To use an entirely fictional example of what this means, let’s imagine that, following a popular Saturday Night Live sketch, millions of people begin Googling “Pokémon Go Ewan McGregor.” In the emerging YouTube media ecology that Bridle documents, someone with an entrepreneurial spirit would immediately make a Pokémon Go video featuring Ewan McGregor both to satisfy this peculiar cultural urge and to profit from the anticipated traffic.

Content-generation through keyword mixing is “a whole dark art unto itself,” Bridle suggests. As a particular keyword or hashtag begins to trend, “content producers pile onto it, creating thousands and thousands more of these videos in every possible iteration.” Imagine Ewan McGregor playing Pokémon Go, forever.

What’s unusual here, however, and what Bridle specifically highlights in his essay, is that this creative process is becoming automated: machine-learning algorithms are taking note of trending keyword searches or popular hashtag combinations, then recommending the production of content to match those otherwise arbitrary sets. For Bridle, the results verge on the incomprehensible—less Big Data, say, than Big Dada.

This is by no means new. Recall the origin of House of Cards on Netflix. Netflix learned from its massive trove of consumer data that its customers liked, among other things, David Fincher films, political thrillers, and the actor Kevin Spacey. As David Carr explained for the New York Times back in 2013, this suggested the outline of a possible series: “With those three circles of interest, Netflix was able to find a Venn diagram intersection that suggested that buying the series would be a very good bet on original programming.”

In other words, House of Cards was produced because it matched a data set, an example of “keyword/hashtag association” becoming video.

The question here would be: what if, instead of a human producer, a machine-learning algorithm had been tasked with analyzing Netflix consumer data and generating an idea for a new TV show? What if that recommendation algorithm didn’t quite understand which combinations would be good or worth watching? It’s not hard to imagine an unwatchably surreal, even uncanny television show resulting from this, something that seems to make more sense as a data-collection exercise than as a coherent plot—yet Bridle suggests that this is exactly what’s happening in the world of children’s videos online.

[Image: From Metropolis].

In some of these videos, Bridle explains, keyword-based programming might mean something as basic as altering a few words in a script, then having actors playfully act out those new scenarios. Actors might incorporate new toys, new types of candy, or even a particular child’s name: “Matt” on a “donkey” at “the zoo” becomes “Matt” on a “horse” at “the zoo” becomes “Carla” on a “horse” at “home.” Each variant keyword combination then results in its own short video, and each of these videos can be monetized. Future such recombinations are infinite.
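The recombination Bridle describes is, at bottom, a Cartesian product over a few keyword slots. Here is a minimal sketch of that logic; the keyword lists are invented placeholders, not actual trending searches.

```python
from itertools import product

# Hypothetical keyword slots of the kind Bridle describes; every combination
# becomes the title (and, in effect, the script) of another monetizable video.
names = ["Matt", "Carla"]
animals = ["donkey", "horse"]
places = ["the zoo", "home"]

video_titles = [f"{name} rides a {animal} at {place}"
                for name, animal, place in product(names, animals, places)]

for title in video_titles:
    print(title)
# Eight variants from three tiny lists; add a few more slots and the space
# of possible videos becomes effectively inexhaustible.
```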

In an age of easily produced digital animations, Bridle adds, these sorts of keyword micro-variants can be produced both extremely quickly and very nearly automatically. Some YouTube producers have even eliminated “human actors” altogether, he writes, “to create infinite reconfigurable versions of the same videos over and over again. What is occurring here is clearly automated. Stock animations, audio tracks, and lists of keywords being assembled in their thousands to produce an endless stream of videos.”

Bridle notes with worry that it is nearly impossible here “to parse out the gap between human and machine.”

Going further, he suggests that the automated production of new videos based on popular search terms has resulted in scenes so troubling that children should not be exposed to them—but, interestingly, Bridle’s reaction here seems to be based on those videos’ content. That is, the videos feature animated characters appearing without heads, or kids being buried alive in sandboxes, or even the painful sounds of babies crying.

What I find unsettling here is slightly different. The content, in my opinion, is simply strange: a kind of low-rent surrealism for kids, David Lynch-lite for toddlers. For thousands of years, western folktales have featured cannibals, incest, haunted houses, even John Carpenter-like biological transformations, from woman to tree, or from man to pig and back again. Children burn to death on chariots in the sky or sons fall from atmospheric heights into the sea. These myths seem more nightmarish—on the level of content—than some of Bridle’s chosen YouTube videos.

Instead, I would argue, what’s disturbing here is what the content suggests about how things should be connected. The real risk would seem to be that children exposed to recommendation algorithms at an early age might begin to emulate them cognitively, learning how to think, reason, and associate based on inhuman leaps of machine logic.

Bridle’s inability “to parse out the gap between human and machine” might soon apply not just to these sorts of YouTube videos but to the children who grew up watching them.

[Image: Replicants in Blade Runner].

One of my favorite scenes in Umberto Eco’s novel Foucault’s Pendulum is when a character named Jacopo Belbo describes different types of people. Everyone in the world, Belbo suggests, falls into one of only four types: there are “cretins, fools, morons, and lunatics.”

In the context of the present discussion, it is interesting to note that these categories are defined by modes of reasoning. For example, “Fools don’t claim that cats bark,” Belbo explains, “but they talk about cats when everyone else is talking about dogs.” They get their references wrong.

It is Eco’s “lunatic,” however, who offers a particularly interesting character type for us to consider: the lunatic, we read, is “a moron who doesn’t know the ropes. The moron proves his [own] thesis; he has a logic, however twisted it may be. The lunatic, on the other hand, doesn’t concern himself at all with logic; he works by short circuits. For him, everything proves everything else. The lunatic is all idée fixe, and whatever he comes across confirms his lunacy. You can tell him by the liberties he takes with common sense, by his flashes of inspiration…”

It might soon be time to suggest a fifth category, something beyond the lunatic, where thinking like an algorithm becomes its own strange form of reasoning, an alien logic gradually accepted as human over two or three generations to come.

Assuming I have read Bridle’s essay correctly—and it is entirely possible I have not—he seems disturbed by the content of these videos. I think the more troubling aspect, however, is in how they suggest kids should think. They replace narrative reason with algorithmic recommendation, connecting events and objects in weird, illogical bursts lacking any semblance of internal coherence, where the sudden appearance of something completely irrelevant can nonetheless be explained because of its keyword-search frequency. Having a conversation with someone who thinks like this—who “thinks” like this—would be utterly alien, if not logically impossible.

So, to return to this post’s beginning, one of the thrills of thinking like a writer, so to speak, is precisely in how it encourages one to bring together things that might not otherwise belong on the same page, and to work toward understanding why these apparently unrelated subjects might secretly be connected.

But what is thinking like an algorithm?

It will be interesting to see if algorithmically assembled material can still offer the sort of interpretive challenge posed by narrative writing, or if the only appropriate response to the kinds of content Bridle describes will be passive resignation, indifference, knowing that a data set somewhere produced a series of keywords and that the story before you goes no deeper than that. So you simply watch the next video. And the next. And the next.

Crash Ballet

I had a surprisingly interesting conversation with the guy cutting my hair the other day. It turned out he had studied dance in college, but, roughly fifteen years ago, had been forced to find other work as both age and a nagging injury took their toll.

He mentioned various forms of movement therapy that exist for coping with, and even reversing, these sorts of injuries, which led to a conversation about styles of dance that might have been specifically invented not as art but as medicine, as a means of physical convalescence for aging performers, even choreographic styles devised for performance by injured dancers.

My barber then referred to a particular type of movement—whose name I can’t remember—that was all about using the body’s skeleton, rather than its musculature, for standing up and down, as well as something about spreading energy into the floor rather than resisting gravity. The way he described it reminded me of studies I had read suggesting that drunk people are often less injured in car crashes than their sober counterparts because their bodies don’t resist the movement; they are simply flung along with the motion of the vehicle. Sober people should thus learn not to clench up and go rigid if they’re about to be in a car accident; they should instead loosen up and, in effect, go with the flow.

Note, of course, that this is not scientific advice; I was speculating with someone in a barber shop.

Nevertheless, we went on to discuss the fact that car accidents are so common in American culture today that it would not be out of the question to devise some sort of movement-preparation course for kids to study in gym class—like tai chi for car wrecks—to help them safely interact with crashing vehicles. A kind of preparatory crash ballet.

Would this be more interesting or fun than dodgeball, or floor hockey, or whatever else it is that kids do in gym class these days? Teach kids how to be flung through windshields, how to roll out of collapsing houses in an earthquake, how to jump from burning buildings, or other survival techniques for the everyday catastrophes that might exist for all of us, hiding just around the corner.

Fab

[Image: “The Sphere” by Oliver Tessmann, Mark Fahlbusch, Klaus Bollinger, and Manfred Grohmann].

The Bartlett School of Architecture has made all three volumes of Fabricate, their excellent series of books and conference proceedings dating back to 2011, free to download.

[Image: Matter Design’s La Voûte de LeFevre, Banvard Gallery (2012)].

More than 700 pages’ worth of technical experiments, speculative construction processes, new industrial tools, and one-off prototypes, the books are a gold mine for research and development.

[Image: Greg Lynn’s “Embryological House,” Venice Biennale (2002)].

3D printers, buoyant robots, multi-axis milling machines, directed insect-secretion, cellular automata, semi-autonomous bricklaying, self-assembling endoskeletons, drone weaving—it’s hard to go wrong with even the most cursory skimming of each volume, and that doesn’t even mention the essays and interviews.

[Image: “Custom forming tool mounted on the six-axis robotic arm,” via Fabricate 2014].

Download each book—from 2017, 2014, and 2011—and be prepared to lose a few days reading through them.

Extraction Town

[Image: Empty homes in Picher, Oklahoma; photo by BLDGBLOG].

On the way west, I managed to stop by the town of Picher, Oklahoma, the subject of a new exhibition featuring photographs by Todd Stewart.

Picher is something like the Centralia of Oklahoma, where Centralia is the town in Pennsylvania that has been slowly abandoned over a generation due to coal mine fires burning away beneath its streets. In Picher, however, it’s not coal smoke but collapsing lead mines that have led to a forced buy-out and evacuation, a haunting process tragically assisted in 2008 when a massive tornado hit town, ripping apart many of its remaining houses and buildings.

Today, Picher is not entirely empty, but it has become more of a macabre curiosity on the state’s border with Kansas, its quiet streets overgrown and surrounded by looming piles of “chat,” or mine tailings, alpine forms that give the landscape its toxic profile.

[Image: Picher, surrounded by its toxic artificial landforms; via Google Maps].

The Washington Post visited the town back in 2007. “Signs of Picher’s impending death are everywhere,” they wrote at the time. “Many stores along Highway 69, the town’s main street, are empty, their windows coated with a layer of grime, virtually concealing the abandoned merchandise still on display. Trucks traveling along the highway are diverted around Picher for fear that the hollowed-out mines under the town would cause the streets to collapse under the weight of big rigs.” Note that this was written a year before the tornado.

Oklahoma native Allison Meier has written up Todd Stewart’s exhibition, including a longer, horrific backstory to the town, with red rivers of acidic water “belching” up from abandoned mines, kids playing in sandboxes of powdered lead, and horses poisoned by the runoff.

“The poisoning of Picher may seem like a local story,” Meier writes, “and, indeed, remains little known on a national level. Yet the state of Oklahoma continues to practice environmentally hazardous extraction, including fracking for gas. And in the United States, the promotion of toxic industry—even if it results in the destruction of the very place it is supporting—endures.”

Here’s a link to the actual exhibition, and you can buy a copy of Todd Stewart’s book here. Wired also visited Picher a few years back, if you’re looking for more.

Angeleno Redux

[Image: Underground tennis courts in a limestone mine and refrigeration complex in Missouri].

It’s been a long month, but my wife and I have packed up and left New York, endlessly bubble-wrapping things while watching Midnight Run, Collateral, Chinatown, and other L.A.-themed movies on a laptop in an empty room, to head west again to Los Angeles, where we finally arrived today.

We visited the Cahokia Mounds, a heavily eroded indigenous North American city that, at its height, was larger than London, part of a Wisconsin-to-Louisiana band of settlements sculpted from mud and clay. The remains of history are not necessarily built with stone and timber—let alone steel and glass—but might exist in the form of oddly sloped hillsides or gardens long ago left untended.

[Image: Hiking around Cahokia Mounds].

Along the way, we managed to see the total eclipse in Missouri, sitting on a picnic blanket in a park south of St. Louis, people around us crying, yelling “Look at that!,” laughing, cheering like it was a football game, a day before driving further southwest to explore food-refrigeration caverns in active limestone mines for Nicky’s book.

That’s where we stumbled on the tennis courts pictured at the top of this post, at least seventy feet below ground, complete with a wall of framed photos showing previous champions of the underworld leagues, as we drove around for an hour or two through genuinely huge subterranean naves and corridors, with not-yet-renovated sections of the mine—millions of square feet—hidden behind titanic yellow curtains.

[Image: Behind these curtains are millions of square feet of void].

We listened to S-Town. We had breakfast in Oklahoma City. We made it to New Mexico to hike up a 10,000-year-old volcano with an ice cave frozen at a permanent 31ºF in one of its half-collapsed lava tubes, where we met another couple who had driven up from Arizona “to get out of the heat.”

[Image: Bandera Volcano, New Mexico].

We then spent three days in Flagstaff to sleep, watch GLOW, and inadvertently off-road on our quest to do some hiking, up fire roads, up canyons behind Sedona, up hills in the rain, looking north toward the cinder cones of dead volcanoes that we visited a few years ago for Venue, where, in the 1960s, NASA recreated the surface of the moon using timed explosions.

[Image: Hiking outside Flagstaff].

In any case, we’re now back in Los Angeles, the greatest city in the United States, the one that most perversely fulfills whatever strange promises this country offers, and we’ll be here for the long haul. In fact, there’s no real reason to post this, other than: why not? But, if you live in L.A., or anywhere in California, perhaps we’ll cross paths soon.

Paleoalgorithmica

[Image: Sunrise, via PublicDomainPictures.net].

A short item in The Economist last month suggested that town planners could simply bypass their own aesthetic responses to a landscape and turn instead to an algorithm to design “scenic” locales.

Researchers at the Warwick Business School, we read, “have adapted a computer program called Places to recognize beautiful landscapes, whether natural or artificial, using the criteria that a human beholder would employ.” Acting as a kind of sentient Hallmark card, Places has been “optimized to recognize geographical features. [Head researcher Chanuki Seresinhe] and her team taught the program to identify such things as mountains, beaches and fields, and various sorts of buildings, in pictures presented to it.”

Most of the results are not surprising. Lakes and horizons scored well. So did valleys and snowy mountains. In artificial landscapes castles, churches and cottages were seen as scenic. Hospitals, garages and motels not so much. Ms. Seresinhe’s analysis did, however, confirm one important but non-obvious finding from her previous study. Green spaces are not, in and of themselves, scenic. To be so they need to involve contours and trees.

While this sounds ridiculous on its face, suggesting a saccharine world of endless Viagra ad backdrops, the article includes an unexpected detail at the end that makes the whole thing seem much stranger.

There, The Economist briefly draws our attention to “an idea promulgated 30 years ago by Edward Wilson, an evolutionary biologist at Harvard University. He suggested that the sorts of landscapes people prefer—and which they sculpt their parks and gardens to resemble—are those that echo the African savannahs in which Homo sapiens evolved. Gently undulating ground with a mixture of trees, shrubs and open spaces, in other words (though, ideally, without the accompanying dangerous wild animals).”

This newfangled computer program, then, could be accused of simply repeating the observational landscape prejudices of our own pre-human ancestors. It’s as if we have been carefully stewarding into existence a world of thinking machines and semi-autonomous neural networks—only to find that they don’t think like envoys of the future, like inscrutable alien subjectivities set loose inside silicon.

Rather, they are earlier versions of ourselves, like a patient hospitalized for dementia becoming more childlike as they age. Not after, but before. Paleoalgorithmica.