Nature Machine

[Image: Illustration by Benjamin Marra for the New York Times Magazine].

As part of a package of shorter articles in the New York Times Magazine exploring the future implications of self-driving vehicles—how they will affect urban design, popular culture, and even illegal drug activity—writer Malia Wollan focuses on “the end of roadkill.”

Her premise is fascinating. Wollan suggests that the precision driving enabled by self-driving vehicle technology could put an end to vehicular wildlife fatalities. Bears, deer, raccoons, panthers, squirrels—even stray pets—might all remain safe from our weapons-on-wheels. In the process, self-driving cars would become an unexpected ally for wildlife preservation efforts, with animal life potentially experiencing dramatic rebounds along rural and suburban roads. This would be both good and bad. One possible outcome sounds like a tragicomic Coen Brothers film about apocalyptic animal warfare in the American suburbs:

Every year in the United States, there are an estimated 1.5 million deer-vehicle crashes. If self-driving cars manage to give deer safe passage, the fast-reproducing species would quickly grow beyond the ability of the vegetation to sustain them. “You’d get a lot of starvation and mass die-offs,” says Daniel J. Smith, a conservation biologist at the University of Central Florida who has been studying road ecology for nearly three decades… “There will be deer in people’s yards, and there will be snipers in towns killing them,” [wildlife researcher Patricia Cramer] says.

While these are already interesting points, Wollan explains that, for this to come to pass, we will need to do something very strange. We will need to teach self-driving cars how to recognize nature.

“Just how deferential [autonomous vehicles] are toward wildlife will depend on human choices and ingenuity. For now,” she adds, “the heterogeneity and unpredictability of nature tends to confound the algorithms. In Australia, hopping kangaroos jumbled a self-driving Volvo’s ability to measure distance. In Boston, autonomous-vehicle sensors identified a flock of sea gulls as a single form rather than a collection of individual birds. Still, even the tiniest creatures could benefit. ‘The car could know: “O.K., this is a hot spot for frogs. It’s spring. It’s been raining. All the frogs will be moving across the road to find a mate,”’ Smith says. The vehicles could reroute to avoid flattening amphibians on that critical day.”
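
It is easy to imagine how crude this kind of deference might look in practice. Here is a purely hypothetical sketch, in Python, of the frog-hot-spot rule Smith describes—not anything Wollan or the researchers actually specify. Every road name, data field, and threshold below is invented for illustration, and nothing here reflects how a real autonomous-vehicle routing stack works:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch only: a toy "deference" rule for route planning.
# The hotspot flags, rainfall threshold, and road segments are all invented.

@dataclass
class RoadSegment:
    name: str
    amphibian_hotspot: bool   # flagged by ecologists as a known crossing
    recent_rain_mm: float     # rainfall over the past 24 hours

def is_spring(today: date) -> bool:
    return today.month in (3, 4, 5)   # crude northern-hemisphere spring

def should_avoid(segment: RoadSegment, today: date, rain_threshold: float = 5.0) -> bool:
    """Defer to migrating frogs: avoid a flagged segment on wet spring days."""
    return (
        segment.amphibian_hotspot
        and is_spring(today)
        and segment.recent_rain_mm >= rain_threshold
    )

route = [
    RoadSegment("Main St", amphibian_hotspot=False, recent_rain_mm=8.0),
    RoadSegment("Pond Rd", amphibian_hotspot=True, recent_rain_mm=8.0),
]

detours = [s.name for s in route if should_avoid(s, date(2024, 4, 12))]
print(detours)  # ['Pond Rd'] -- reroute around the frog crossing that day
```

The interest of even so trivial a rule is that it already requires the car to carry a model of the seasons, the weather, and the habits of another species.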

One might imagine that, seen through the metaphoric eyes of a car’s LiDAR array, all those hopping kangaroos appeared to be a single super-body, a unified, moving wave of flesh: monstrous, lumpy, even grotesque. Machine horror.

What interests me here is that, in Wollan’s formulation, “nature” is that which remains heterogeneous and unpredictable—that which remains resistant to traditional representation and modeling—yet this is exactly what self-driving car algorithms will have to contend with, and what they will need to recognize and correct for, if we want them to avoid colliding with a nonhuman species.

In particular, I love Wollan’s use of the word “deferential.” The idea of cars acting with deference to the natural world, or to nonhuman species in general, opens up a whole other philosophical conversation. For example, what is the difference between deference and reverence, and how might we teach our fellow human beings, let alone our machines, to defer to, even to revere, the natural world? Put another way, what does it mean for a machine to “encounter” the wild?

Briefly, Wollan’s piece reminded me of Robert Macfarlane’s excellent book The Wild Places for a number of reasons. Recall that book’s central premise: the idea that wilderness is always closer than it appears. Roadside weeds, overgrown lots, urban hikes, peripheral species, the ground beneath your feet, even the walls of the house around you: these all constitute “wilderness” at a variety of scales, if only we could learn to recognize them as such. Will self-driving cars spot “nature” or “wilderness” in sites where humans aren’t conceptually prepared to see it?

The challenge of teaching a car how to recognize nature thus takes on massive and thrilling complexity here, all wrapped up in the apparently simple goal of ending roadkill. It’s about where machines end and animals begin—or perhaps how technology might begin before the end of wilderness.

In any case, Wollan’s short piece is worth reading in full—and don’t miss a much earlier feature she wrote on the subject of roadkill for the New York Times back in 2010.

Piscine Virtual Reality

[Image: From “Putting the Fish in the Fish Tank: Immersive VR for Animal Behavior Experiments” by Sachit Butail, Amanda Chicoli, and Derek A. Paley].

I’ve had this story bookmarked for the past four years, and a tweet this morning finally gave me an excuse to write about it.

Back in 2012, we read, researchers at Harvard University found a way to fool a paralyzed fish into thinking it was navigating a virtual spatial environment. They then studied its brain during this trip that went nowhere—this virtual, unmoving navigation—in order to understand the “neuronal dynamics” of spatial perception.

As Noah Gray wrote at the time, deliberately highlighting the study’s unnerving surreality, “Paralyzed fish navigates virtual environment while we watch its brain.” Gray then compared it to The Matrix.

The paper itself explains that, when “paralyzed animals interact fictively with a virtual environment,” the result is what are called “fictive swims.”

To study motor adaptation, we used a closed-loop paradigm and simulated a one-dimensional environment in which the fish is swept backwards by a virtual water flow, a motion that the fish was able to compensate for by swimming forwards, as in the optomotor response. In the fictive virtual-reality setup, this corresponds to a whole-field visual stimulus that is moving forwards but that can be momentarily accelerated backwards by a fictive swim of the fish, so that the fish can stabilize its virtual location over time. Remarkably, paralyzed larval zebrafish behaved readily in this closed-loop paradigm, showing similar behavior to freely swimming fish that are exposed to whole-field motion, and were not noticeably compromised by the absence of vestibular, proprioceptive and somatosensory feedback that accompanies unrestrained swimming.
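
To make that closed loop slightly more concrete, here is a minimal toy model of the feedback the authors describe, written in Python. The constants, the update rule, and the single accumulating “swim drive” are my own simplifications for illustration, not the paper’s method; in the real experiment, swim strength is decoded from motor-nerve recordings and fed back into the projected scene in real time.

```python
# Toy model of the closed-loop "fictive swim" paradigm quoted above.
# All dynamics, gains, and units are invented for illustration.

FLOW = 1.0        # constant forward drift of the visual scene (the virtual "water flow")
SWIM_GAIN = 0.5   # how strongly fictive swimming counteracts that drift

def simulate(steps: int = 40) -> float:
    position = 0.0      # fish's virtual location; leftover drift sweeps it backwards
    swim_drive = 0.0    # accumulated fictive swim effort
    for _ in range(steps):
        perceived_drift = FLOW - swim_drive
        # Optomotor-style feedback: the more the scene still drifts forward
        # (i.e., the more the fish feels swept backwards), the harder it swims.
        swim_drive += SWIM_GAIN * perceived_drift
        position -= (FLOW - swim_drive)   # uncompensated drift moves the fish backwards
    return position

print(simulate())   # settles near -1.0: the fish slips back briefly, then holds its place
```

The point of the sketch is only the loop itself: the scene drifts forward, the fish swims against the drift, and the uncompensated drift shrinks until the fish’s virtual position stops moving.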

Imagine being that fish; imagine realizing that the spatial environment you think you’re moving through is actually some sort of induced landscape put there purely for the sake of studying your neural reaction to it.

Ten years from now, experimental architecture-induction labs pop up at universities around the world, where people sit, strapped into odd-looking chairs, appearing to be asleep. They are navigating labyrinths, a scientist whispers to you, trying not to disturb them. You look around the room and see books full of mazes spread across a table, six-foot-tall full-color holograms of the human brain, and dozens of HD computer screens flashing with graphs of neural stimulation. They are walking through invisible buildings, she says.

[Image: From “Putting the Fish in the Fish Tank: Immersive VR for Animal Behavior Experiments” by Sachit Butail, Amanda Chicoli, and Derek A. Paley].

In any case, the fish-in-virtual-reality setup was apparently something of a trend in 2012, because there was also a paper published that year called “Putting the Fish in the Fish Tank: Immersive VR for Animal Behavior Experiments,” this time by researchers at the University of Maryland. Their goal was to “startle” fish using virtual reality:

We describe a virtual-reality framework for investigating startle-response behavior in fish. Using real-time three dimensional tracking, we generate looming stimuli at a specific location on a computer screen, such that the shape and size of the looming stimuli change according to the fish’s perspective and location in the tank.
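
The perspective correction in that last sentence is, at heart, projective geometry. Here is a hedged sketch, assuming a flat screen on one wall of the tank and a virtual object approaching from behind it; the coordinates, sizes, and helper function are all invented, and the actual paper works from calibrated real-time 3D tracking rather than these made-up numbers.

```python
import math

# Hypothetical sketch of perspective-correct looming, loosely inspired by the
# setup quoted above. Geometry and values are invented for illustration.

def looming_disc(fish_pos, predator_pos, predator_radius, screen_z=0.0):
    """Project a virtual approaching object onto a screen plane at z=screen_z,
    from the fish's point of view, returning (x, y, radius) in screen units."""
    fx, fy, fz = fish_pos
    px, py, pz = predator_pos
    d_screen = abs(screen_z - fz)          # distance from fish to the screen
    d_pred = math.dist(fish_pos, predator_pos)
    # Ray from fish through predator, intersected with the screen plane.
    t = d_screen / abs(pz - fz)
    x = fx + t * (px - fx)
    y = fy + t * (py - fy)
    # Apparent size of the predator from the fish's point of view, rendered
    # as a disc at the screen distance (similar-triangles approximation).
    radius = d_screen * predator_radius / d_pred
    return x, y, radius

# The disc grows on screen as the virtual predator closes in along the z-axis:
for z in (30.0, 20.0, 10.0, 5.0):
    print(looming_disc(fish_pos=(0.0, 0.0, 12.0), predator_pos=(2.0, 1.0, -z),
                       predator_radius=1.0, screen_z=0.0))
```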

As they point out, virtual reality can be a fantastic tool for studying spatial perception. VR, they write, “provides a novel opportunity for high-output biological data collection and allows for the manipulation of sensory feedback. Virtual reality paradigms have been harnessed as an experimental tool to study spatial navigation and memory in rats, flight control in flies and balance studies in humans.”

But why stop at fish? Why stop at fish tanks? Why not whole virtual landscapes and ecosystems?

Imagine a lost bear running around a forest somewhere, slipping between birch trees and wildflowers, the sun a blinding light that stabs down through branches in disorienting flares. There are jagged rocks and dew-covered moss everywhere. But it’s not a forest. The bear looks around. There are no other animals, and there haven’t been for days. Perhaps not for years. It can’t remember. It can’t remember how it got there. It can’t remember where to go.

It’s actually stuck in a kind of ursine Truman Show: an induced forest of virtual spatial stimuli. And the bear isn’t running at all; it’s strapped down inside an MRI machine in Baltimore. Its brain is being watched—as its brain watches the well-rendered polygons of these artificial rocks and trees.

(Fish tank story spotted via Clive Thompson. Vaguely related: The Subterranean Machine Dreams of a Paralyzed Youth in Los Angeles).