Ghosts Only Cars Can Perceive

[Image: An otherwise unrelated image of car-based LiDAR navigation, via Singularity Hub].

There was a lot of design interest a few years back in a product that allowed cyclists to project their own bike lanes, an idea that is still being honed today.

Transportation infrastructure that only exists in the form of a projection is a great analogy for the state of cycling in the U.S. today, but what we might call projected infrastructure—road signs, bike lanes, and crosswalks that aren’t really there—can apparently also be weaponized, turned against the machine-sensing systems that navigate and steer driverless vehicles.

Researchers at Ben-Gurion University, for example, have shown that fake, drone-projected street signs can spoof driverless cars. Amazingly, these fake street signs can apparently exist for only 100 milliseconds and still be read as “real” by a car’s sensing package. They are like flickering ghosts only cars can perceive, navigational dazzle imperceptible to humans.
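Part of what makes this possible is that a frame-by-frame sign detector has no notion of duration: if a sign is present in the frames it happens to see, it is "real." As a purely hypothetical sketch—not the researchers' system, and with an assumed frame rate and threshold—a temporal-persistence filter of the following kind would reject such split-second phantoms:

```python
from collections import deque

class SignPersistenceFilter:
    """Illustrative countermeasure: only trust a detected sign if it
    persists across consecutive camera frames. A 100 ms phantom seen
    by a 30 fps camera appears in roughly 3 frames, so requiring
    ~0.5 s of persistence would reject it. (Assumed parameters.)"""

    def __init__(self, fps=30, min_duration_s=0.5):
        self.required_frames = int(fps * min_duration_s)
        self.history = deque(maxlen=self.required_frames)

    def update(self, sign_detected_this_frame: bool) -> bool:
        """Return True only once the sign has been visible in every
        one of the last `required_frames` frames."""
        self.history.append(sign_detected_this_frame)
        return (len(self.history) == self.required_frames
                and all(self.history))

# A 100 ms flicker (3 frames at 30 fps) never clears the threshold:
f = SignPersistenceFilter()
detections = [False] * 10 + [True] * 3 + [False] * 20
print(any(f.update(d) for d in detections))  # False
```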

As if pitching a scene for the next Mission: Impossible film, Ars Technica explains that “a drone might acquire and shadow a target car, then wait for an optimal time to spoof a sign in a place and at an angle most likely to affect the target with minimal ‘collateral damage’ in the form of other nearby cars also reading the fake sign.” One car out of twenty suddenly takes an unexpected turn.

Although this spoof is, for now, entirely visual, “a more advanced attacker might combine GNSS [Global Navigation Satellite System] spoofing and perhaps even active radar countermeasures in a very serious bid at confusing its target,” Ars Technica adds. Cars, lost in their own technical hallucinations, being steered to unknown destinations, unaware that they’ve even strayed.

Spells Against Autonomy

[Image: From “Autonomous Trap 001” by James Bridle].

By now, you’ve probably seen James Bridle’s “Autonomous Trap 001,” a magic salt circle for ensnaring the sensory systems of autonomous vehicles.

By surrounding a self-driving vehicle with a mandala of inescapable roadway markings—after all, even a person wearing a t-shirt with a STOP sign on it can affect the navigational capabilities of autonomous cars—the project explores the possibility that these machines could be trapped, frozen in a space of infinite indecision, as if locked in place by magic.

[Image: From “Autonomous Trap 001” by James Bridle].

Five years from now, rogue highway painting crews well-versed in ritual magic and LiDAR sigils shut down all machine-vision systems on the west coast…

As Bridle explained to Creators, there should be at least a handful more examples of this automotive counter-wizardry to come.

(Earlier on BLDGBLOG: Robot War and the Future of Perceptual Self-Deception. See also The Dream Life of Driverless Cars.)

Computational Romanticism and the Dream Life of Driverless Cars

[Image by ScanLAB Projects for The New York Times Magazine].

Understanding how driverless cars see the world also means understanding how they mis-see things: the duplications, glitches, and scanning errors that, precisely because of their deviation from human perception, suggest new ways of interacting with and experiencing the built environment.

Stepping into or through the scanners of autonomous vehicles in order to look back at the world from their perspective is the premise of a short feature I’ve written for this weekend’s edition of The New York Times Magazine.

For a new series of urban images, Matt Shaw and Will Trossell of ScanLAB Projects tuned, tweaked, and augmented a LiDAR unit—one of the many tools used by self-driving vehicles to navigate—and turned it instead into something of an artistic device for experimentally representing urban space.

The resulting shots show the streets, bridges, and landmarks of London transformed through glitches into “a landscape of aging monuments and ornate buildings, but also one haunted by duplications and digital ghosts”:

The city’s double-decker buses, scanned over and over again, become time-stretched into featureless mega-structures blocking whole streets at a time. Other buildings seem to repeat and stutter, a riot of Houses of Parliament jostling shoulder to shoulder with themselves in the distance. Workers setting out for a lunchtime stroll become spectral silhouettes popping up as aberrations on the edge of the image. Glass towers unravel into the sky like smoke. Trossell calls these “mad machine hallucinations,” as if he and Shaw had woken up some sort of Frankenstein’s monster asleep inside the automotive industry’s most advanced imaging technology.

Along the way I had the pleasure of speaking to Illah Nourbakhsh, a professor of robotics at Carnegie Mellon and the author of Robot Futures, a book I previously featured here on the blog back in 2013. Nourbakhsh is impressively adept at generating potential narrative scenarios—speculative accidents, we might call them—in which technology might fail or be compromised, and his take on the various perceptual risks and interpretive shortcomings posed by autonomous vehicle technology was fascinating.

[Image by ScanLAB Projects for The New York Times Magazine].

Alas, only one example from our long conversation made it into the final article, but it is worth repeating. Nourbakhsh used “the metaphor of the perfect storm to describe an event so strange that no amount of programming or image-recognition technology can be expected to understand it”:

Imagine someone wearing a T-shirt with a STOP sign printed on it, he told me. “If they’re outside walking, and the sun is at just the right glare level, and there’s a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can’t see his face anymore—well, now your car just sees a stop sign. The chances of all that happening are diminishingly small—it’s very, very unlikely—but the problem is we will have millions of these cars. The very unlikely will happen all the time.”
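The arithmetic behind that last line is worth making explicit. With invented but plausible-scale numbers (none of them from the article), even a one-in-a-billion perceptual fluke per mile driven becomes a recurring event across a national fleet:

```python
# Back-of-the-envelope version of Nourbakhsh's point; every number
# here is an assumption chosen purely for illustration.
p_fluke_per_mile = 1e-9          # chance of the "perfect storm" on any given mile
fleet_size = 10_000_000          # autonomous cars on the road
miles_per_car_per_day = 30       # average daily mileage

expected_per_day = p_fluke_per_mile * fleet_size * miles_per_car_per_day
print(f"{expected_per_day:.1f} phantom events per day")  # 0.3 per day
print(f"{expected_per_day * 365:.0f} per year")          # ~110 per year
```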

The most interesting takeaway from this sort of scenario, however, is not that the technology is inherently flawed or limited, but that these momentary mirages and optical illusions are not, in fact, ephemeral: in a very straightforward, functional sense, they become a physical feature of the urban landscape because they exert spatial influences on the machines that (mis-)perceive them.

Nourbakhsh’s STOP sign might not “actually” be there—but it is actually there if it causes a self-driving car to stop.

Immaterial effects of machine vision become digitally material landmarks in the city, affecting traffic and influencing how machines safely operate. But, crucially, these are landmarks that remain invisible to human beings—and it is ScanLAB’s ultimate representational goal here to explore what it means to visualize them.

In the piece, I compare ScanLAB’s work to the heyday of European Romanticism, suggesting that ScanLAB are, in effect, documenting an encounter with sublime and inhuman landscapes—not remote mountain peaks, here, but the engineered products of computation. Writer Asher Kohn suggested on Twitter that the work should instead be considered “Italian futurism made real,” with sweeping scenes of streets and buildings unraveling into space like digital smoke. It’s a great comparison, and worth developing at greater length.

For now, check out the full piece over at The New York Times Magazine: “The Dream Life of Driverless Cars.”

Drive-By Archaeology

[Image: From a patent filed by MIT, courtesy U.S. Patent and Trademark Office].

The technical systems by which autonomous, self-driving vehicles will safely navigate city streets are usually presented as some combination of real-time scanning and a detailed mnemonic map, or virtual reference model, created for that vehicle.

As Alexis Madrigal has written for The Atlantic, autonomous vehicles are, in essence, always driving within a virtual world—like Freudian machines, they are forever unable to venture outside a sphere of their own projections:

The key to Google’s success has been that these cars aren’t forced to process an entire scene from scratch. Instead, their teams travel and map each road that the car will travel. And these are not any old maps. They are not even the rich, road-logic-filled maps of consumer-grade Google Maps.
They’re probably best thought of as ultra-precise digitizations of the physical world, all the way down to tiny details like the position and height of every single curb. A normal digital map would show a road intersection; these maps would have a precision measured in inches.

The vehicle can thus respond to the city insofar as its own spatial expectations are never sufficiently contradicted by the evidence at hand: if the city, as scanned by the vehicle’s array of sensors and instruments, corresponds to the vehicle’s own internal expectations, then it can make the next rational decision (to turn a corner, stop at an intersection, wait for a passing train, etc.).
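A minimal sketch of that decision gate, under toy assumptions: the prior map and the live scan are reduced here to matched arrays of expected and measured ranges, and the inch-scale tolerance is an invented figure, not anything from Google's actual system.

```python
import numpy as np

def scan_matches_prior(live_scan: np.ndarray,
                       prior_map: np.ndarray,
                       max_mean_error_m: float = 0.05) -> bool:
    """Toy 'expectations vs. evidence' check: compare measured ranges
    against the ranges the prior map predicts for the same viewpoints.
    If the mean disagreement stays within tolerance (an assumed 5 cm
    here), the world still matches the vehicle's internal model."""
    mean_error = np.mean(np.abs(live_scan - prior_map))
    return bool(mean_error <= max_mean_error_m)

prior = np.array([4.02, 3.98, 4.00, 7.51])  # expected curb/wall ranges (m)
live = np.array([4.03, 3.97, 4.01, 7.49])   # ranges the sensors report (m)
print("proceed" if scan_matches_prior(live, prior)
      else "halt: world no longer matches the map")
```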

However, I was very interested to see that an MIT research team led by Byron Stanley had applied for a patent last autumn that would allow autonomous vehicles to guide themselves using ground-penetrating radar. It is the subterranean realm that they would thus be peering into, in addition to the plein air universe of curb heights and Yield signs, reading the underworld for its own peculiar landmarks.

[Image: From a patent filed by MIT, courtesy U.S. Patent and Trademark Office].

How would it work? Imagine, the MIT team suggests, that your autonomous vehicle is in a landscape blanketed in snow: the terrain is volumetrically deformed by all that extra mass, robbing the vehicle not only of accurate points of measurement but also of many, if not all, computer-recognizable landmarks. Or imagine that you have passed into a “GPS-denied area.”

In either case, you and your self-driving vehicle run the very real risk of falling off the map altogether, stuck in a machine that cannot find its way forward and, for all intents and purposes, can no longer even tell road from landscape.

[Image: From a patent filed by MIT, courtesy U.S. Patent and Trademark Office].

Stanley’s group has thus come up with the interesting suggestion that you could simply give autonomous vehicles the ability to see through the earth’s surface and scan for recognizable systems of pipework or other urban infrastructure down below. Your vehicle could then just follow those systems through the obscuring layers of rain, snow, or even tumbleweed to its eventual destination.

These would be cars attuned to the “subsurface region,” as the patent describes it, falling somewhere between urban archaeology and speleo-cartography.
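The patent’s actual signal processing is of course far more involved, but the core idea can be sketched in a few lines: slide the live radar trace along a previously recorded subsurface profile of the route and take the best-matching offset as your position. Everything below, from the matching method to the numbers, is an illustrative assumption:

```python
import numpy as np

def localize_by_subsurface(live_trace: np.ndarray,
                           prior_profile: np.ndarray) -> int:
    """Slide the live GPR trace along the prior subsurface profile and
    return the offset with the smallest squared error; that offset is
    the vehicle's estimated position along the mapped route, however
    much snow or rain obscures the surface above."""
    errors = [np.sum((prior_profile[i:i + len(live_trace)] - live_trace) ** 2)
              for i in range(len(prior_profile) - len(live_trace) + 1)]
    return int(np.argmin(errors))

# Fabricated example: bumps in the prior profile stand in for buried
# pipes and foundations; the live trace matches the feature at index 9.
prior = np.array([0, 0, 1, 5, 9, 5, 1, 0, 0, 2, 7, 2, 0], dtype=float)
live = np.array([2, 7, 2], dtype=float)
print(localize_by_subsurface(live, prior))  # -> 9
```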

In fact, with only the slightest tweaking of this technology, you could easily imagine a scenario in which your vehicle would more or less seek out and follow archaeological features in the ground. Picture something like an enormous basement in Rome or central London—or perhaps a strange variation on the city built entirely for autonomous vehicles at the University of Michigan: a vast expanse of concrete built, with great controversy, over an ancient site of incredible archaeological richness.

Climbing into a small autonomous vehicle, however, and avidly referring to the interactive menu presented on a touchscreen dashboard, you feel the vehicle begin to move, inching forward into the empty room. The trick is that it is navigating according to the remnant outlines of lost foundations and buried structures hidden in the ground around you, like a boat passing over shipwrecks hidden in the still but murky water.

The vehicle shifts and turns, hovers and circles back again, outlining where buildings once stood. It is acting out a kind of invisible architecture of the city, where its routes are not roads at all but the floor plans of old buildings and, rather than streets or parking lots, you circulate through and pause within forgotten rooms buried in the ground somewhere below.

In this “subsurface region” that only your vehicle’s radar eyes can see, your car finds navigational clarity, calmly poking along the secret forms of the city.

In any case, for more on the MIT patent, check out the U.S. Patent and Trademark Office.

(Via New Scientist).