Robot War and the Future of Perceptual Deception

[Image: A diagram of the accident site, via the Florida Highway Patrol].

One of the most remarkable details of last week’s fatal collision, involving a tractor trailer and a Tesla electric car operating in self-driving mode, was the fact that the car apparently mistook the side of the truck for the sky.

As Tesla explained in a public statement following the crash, the car’s autopilot was unable to see “the white side of the tractor trailer against a brightly lit sky”—which is to say, it was unable to differentiate the two.

The truck was not seen as a discrete object, in other words, but as something indistinguishable from the larger spatial environment. It was more like an elision.
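To make that failure concrete in code: the sketch below is my own toy illustration, not anything from Tesla’s actual perception stack, and every number in it is invented. It simply shows how a naive contrast-based check returns nothing at all when an object’s brightness sits too close to that of the sky behind it.

```python
# Toy sketch (not Tesla's perception code): a contrast-based obstacle check
# fails when an object's brightness nearly matches its background.
import numpy as np

def contrast_detect(patch, background, threshold=0.1):
    """Flag an obstacle only if the patch differs enough from the background."""
    return abs(patch.mean() - background.mean()) > threshold

sky = np.full((64, 64), 0.92)            # brightly lit, washed-out sky
white_trailer = np.full((64, 64), 0.90)  # white trailer side, nearly the same value
dark_car = np.full((64, 64), 0.20)       # an ordinary dark vehicle, for comparison

print(contrast_detect(white_trailer, sky))  # False: the trailer "disappears" into the sky
print(contrast_detect(dark_car, sky))       # True: enough contrast to register
```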

Examples like this are tragic, to be sure, but they are also technologically interesting, in that they give momentary glimpses of where robotic perception has failed. Hidden within this, then, are lessons not just for how vehicle designers and computer scientists alike could make sure this never happens again, but also precisely the opposite: how we could design spatial environments deliberately to deceive, misdirect, or otherwise baffle these sorts of semi-autonomous machines.

For all the talk of a “robot-readable world,” in other words, it is interesting to consider a world made deliberately illegible to robots, with materials used for throwing off 3D cameras or LiDAR, either through excess reflectivity or unexpected light-absorption.

Last summer, in a piece for New Scientist, I interviewed John Rogers, a robotics researcher at Georgia Tech. Rogers pointed out that the perceptual needs of robots will have more and more of an effect on how architectural interiors are designed and built in the first place. Quoting that article at length:

In a detail that has implications beyond domestic healthcare, Rogers also discovered that some interiors confound robots altogether. Corridors that are lined with rubber sheeting to protect against damage from wayward robots—such as those in his lab—proved almost impossible to navigate. Why? Rubber absorbs light and prevents laser-based navigational systems from relaying spatial information back to the robot.
Mirrors and other reflective materials also threw off his robots’ ability to navigate. “It actually appeared that there was a virtual world beyond the mirror,” says Rogers. The illusion made his robots act as if there were a labyrinth of new rooms waiting to be entered and explored. When reflections from your kitchen tiles risk disrupting a robot’s navigational system, it might be time to rethink the very purpose of interior design.
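A toy model helps show why both materials cause trouble. The sketch below is my own illustration, not Rogers’s code, and its reflectivity values and noise floor are invented: a surface that absorbs too much light never returns a detectable echo, while a mirror returns a range that implies geometry beyond the glass.

```python
# Toy model of the two failure modes described above: light-absorbing surfaces
# return too little signal to register, while mirrors report ranges that imply
# rooms "beyond" the glass. All values are invented for illustration.
def lidar_return(surface, distance_m, noise_floor=0.05):
    """Return the range the sensor reports, or None if no echo is detected."""
    reflectivity = {"drywall": 0.6, "rubber_sheeting": 0.02, "mirror": 0.95}[surface]
    # Received signal falls off with distance and with surface reflectivity.
    signal = reflectivity / (distance_m ** 2)
    if signal < noise_floor:
        return None                            # absorbed: the wall is effectively invisible
    if surface == "mirror":
        reflected_extra_m = 3.0                # distance from mirror to whatever it reflects
        return distance_m + reflected_extra_m  # phantom point "behind" the mirror
    return distance_m

print(lidar_return("drywall", 2.0))          # 2.0  -> wall seen where it actually is
print(lidar_return("rubber_sheeting", 2.0))  # None -> corridor reads as empty space
print(lidar_return("mirror", 2.0))           # 5.0  -> a room that isn't there
```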

I mention all this for at least two reasons.

1) It is obvious by now that the American highway system, as well as all of the vehicles that will be permitted to travel on it, will be remade as one of the first pieces of truly robot-legible public infrastructure. It will transition from being a “dumb” system of non-interactive 2D surfaces to become an immersive spatial environment filled with volumetric sign-systems meant for non-human readers. It will be rebuilt for perceptual systems other than our own.

2) Finding ways to throw off self-driving robots will be more than just a harmless prank or even a serious violation of public safety; it will become part of a much larger arsenal for self-defense during war. In other words, consider the points raised by John Rogers, above, but in a new context: you live in a city under attack by a foreign military whose use of semi-autonomous machines requires defensive means other than—or in addition to—kinetic firepower. Wheeled and aerial robots alike have been deployed.

One possible line of defense—among many, of course—would be to redesign your city, even down to the interior of your own home, such that machine vision is constantly confused there. You thus rebuild the world using light-absorbing fabrics and reflective ornament, installing projections and mirrors, screens and smoke. Or “stealth objects” and radar-baffling architectural geometries. A military robot wheeling its way into your home thus simply gets lost there, stuck in a labyrinth of perceptual convolution and reflection-implied rooms that don’t exist.

In any case, I suppose the question is: if, today, a truck can blend in with the Florida sky, and thus fatally disable a self-driving machine, what might we learn from this event in terms of how to deliberately confuse robotic military systems of the future?

We had so-called “dazzle ships” in World War I, for example, and the design of perceptually baffling military camouflage continues to undergo innovation today; but what is anti-robot architectural design, or anti-robot urban planning, and how could it be strategically deployed as a defensive tactic in war?

Computational Romanticism and the Dream Life of Driverless Cars

[Image by ScanLAB Projects for The New York Times Magazine].

Understanding how driverless cars see the world also means understanding how they mis-see things: the duplications, glitches, and scanning errors that, precisely because of their deviation from human perception, suggest new ways of interacting with and experiencing the built environment.

Stepping into or through the scanners of autonomous vehicles in order to look back at the world from their perspective is the premise of a short feature I’ve written for this weekend’s edition of The New York Times Magazine.

For a new series of urban images, Matt Shaw and Will Trossell of ScanLAB Projects tuned, tweaked, and augmented a LiDAR unit—one of the many tools used by self-driving vehicles to navigate—and turned it instead into something of an artistic device for experimentally representing urban space.

The resulting shots show the streets, bridges, and landmarks of London transformed through glitches into “a landscape of aging monuments and ornate buildings, but also one haunted by duplications and digital ghosts”:

The city’s double-decker buses, scanned over and over again, become time-stretched into featureless mega-structures blocking whole streets at a time. Other buildings seem to repeat and stutter, a riot of Houses of Parliament jostling shoulder to shoulder with themselves in the distance. Workers setting out for a lunchtime stroll become spectral silhouettes popping up as aberrations on the edge of the image. Glass towers unravel into the sky like smoke. Trossell calls these “mad machine hallucinations,” as if he and Shaw had woken up some sort of Frankenstein’s monster asleep inside the automotive industry’s most advanced imaging technology.

Along the way I had the pleasure of speaking to Illah Nourbakhsh, a professor of robotics at Carnegie Mellon and the author of Robot Futures, a book I previously featured here on the blog back in 2013. Nourbakhsh is impressively adept at generating potential narrative scenarios—speculative accidents, we might call them—in which technology could fail or be compromised, and his take on the various perceptual risks and interpretive shortcomings posed by autonomous vehicle technology was fascinating.

[Image by ScanLAB Projects for The New York Times Magazine].

Alas, only one example from our long conversation made it into the final article, but it is worth repeating. Nourbakhsh used “the metaphor of the perfect storm to describe an event so strange that no amount of programming or image-recognition technology can be expected to understand it”:

Imagine someone wearing a T-shirt with a STOP sign printed on it, he told me. “If they’re outside walking, and the sun is at just the right glare level, and there’s a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can’t see his face anymore—well, now your car just sees a stop sign. The chances of all that happening are diminishingly small—it’s very, very unlikely—but the problem is we will have millions of these cars. The very unlikely will happen all the time.”
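The arithmetic behind that last remark is worth spelling out. With entirely hypothetical numbers (a one-in-a-billion chance per mile, a fleet of ten million cars each driving ten thousand miles a year), the “diminishingly small” still adds up:

```python
# Back-of-the-envelope sketch of Nourbakhsh's point, with made-up numbers:
# even a vanishingly rare perceptual coincidence recurs once a whole fleet
# is driving billions of miles.
p_per_mile = 1e-9                # hypothetical chance of the "perfect storm" per mile driven
fleet_size = 10_000_000          # hypothetical number of autonomous cars on the road
miles_per_car_per_year = 10_000  # hypothetical annual mileage per car

expected_events_per_year = p_per_mile * fleet_size * miles_per_car_per_year
print(expected_events_per_year)  # 100.0 -- "the very unlikely will happen all the time"
```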

The most interesting takeaway from this sort of scenario, however, is not that the technology is inherently flawed or limited, but that these momentary mirages and optical illusions are not, in fact, ephemeral: in a very straightforward, functional sense, they become a physical feature of the urban landscape because they exert spatial influences on the machines that (mis-)perceive them.

Nourbakhsh’s STOP sign might not “actually” be there—but it is actually there if it causes a self-driving car to stop.

Immaterial effects of machine vision become digitally material landmarks in the city, affecting traffic and influencing how machines safely operate. But, crucially, these are landmarks that remain invisible to human beings—and it is ScanLAB’s ultimate representational goal here to explore what it means to visualize them.

While, in the piece, I compare ScanLAB’s work to the heyday of European Romanticism—suggesting that ScanLAB are, in effect, documenting an encounter with sublime and inhuman landscapes, ones that here are not remote mountain peaks but the engineered products of computation—writer Asher Kohn suggested on Twitter that it should instead be considered “Italian futurism made real,” with sweeping scenes of streets and buildings unraveling into space like digital smoke. It’s a great comparison, and one worth developing at greater length.

For now, check out the full piece over at The New York Times Magazine: “The Dream Life of Driverless Cars.”

Drive-By Archaeology

[Image: From a patent filed by MIT, courtesy U.S. Patent and Trademark Office].

The technical systems by which autonomous, self-driving vehicles will safely navigate city streets are usually presented as some combination of real-time scanning and a detailed, pre-built map or virtual reference model created for that vehicle.

As Alexis Madrigal has written for The Atlantic, autonomous vehicles are, in essence, always driving within a virtual world—like Freudian machines, they are forever unable to venture outside a sphere of their own projections:

The key to Google’s success has been that these cars aren’t forced to process an entire scene from scratch. Instead, their teams travel and map each road that the car will travel. And these are not any old maps. They are not even the rich, road-logic-filled maps of consumer-grade Google Maps.
They’re probably best thought of as ultra-precise digitizations of the physical world, all the way down to tiny details like the position and height of every single curb. A normal digital map would show a road intersection; these maps would have a precision measured in inches.

The vehicle can thus respond to the city insofar as its own spatial expectations are never sufficiently contradicted by the evidence at hand: if the city, as scanned by the vehicle’s array of sensors and instruments, corresponds to the vehicle’s own internal expectations, then it can make the next rational decision (to turn a corner, stop at an intersection, wait for a passing train, etc.).
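A minimal sketch of that logic, using invented feature names and tolerances rather than anything from Google’s actual software, might look like this: the car checks each scanned feature against the stored map and proceeds only while the two sufficiently agree.

```python
# Minimal sketch, with invented numbers: the car compares what its sensors
# report against its stored, inch-precise map and only proceeds while the
# observed world sufficiently matches its internal expectations.
import math

stored_map = {"curb_height_in": 6.0, "stop_line_dist_ft": 42.0, "lane_width_ft": 11.5}

def matches_expectations(observed, expected, tolerance=0.05):
    """True if every observed feature is within 5% of the map's value."""
    return all(
        math.isclose(observed[k], expected[k], rel_tol=tolerance) for k in expected
    )

scan = {"curb_height_in": 6.1, "stop_line_dist_ft": 41.5, "lane_width_ft": 11.4}
if matches_expectations(scan, stored_map):
    print("world matches the model: proceed with the next maneuver")
else:
    print("expectations contradicted: slow down, re-localize, or hand off control")
```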

However, I was very interested to see that an MIT research team led by Byron Stanley had applied for a patent last autumn that would allow autonomous vehicles to guide themselves using ground-penetrating radar. It is the subterranean realm that they would thus be peering into, in addition to the plein air universe of curb heights and Yield signs, reading the underworld for its own peculiar landmarks.

[Image: From a patent filed by MIT, courtesy U.S. Patent and Trademark Office].

How would it work? Imagine, the MIT team suggests, that your autonomous vehicle is in a landscape blanketed with snow: the terrain is volumetrically deformed by all that extra mass, robbing the vehicle not only of accurate points of measurement but also of many, if not all, computer-recognizable landmarks. Or, Stanley adds, imagine that you have passed into a “GPS-denied area.”

In either case, you and your self-driving vehicle run the very real risk of falling off the map altogether, stuck in a machine that cannot find its way forward and, for all intents and purposes, can no longer even tell road from landscape.

[Image: From a patent filed by MIT, courtesy U.S. Patent and Trademark Office].

Stanley’s group has thus come up with the interesting suggestion that you could simply give autonomous vehicles the ability to see through the earth’s surface and scan for recognizable systems of pipework or other urban infrastructure down below. Your vehicle could then just follow those systems through the obscuring layers of rain, snow, or even tumbleweed to its eventual destination.
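In schematic terms, the idea is a kind of subsurface fingerprint matching: the vehicle compares its live ground-penetrating-radar trace against a previously surveyed profile of the road and takes the best match as its position. The sketch below is my own simplification of that general idea, not the algorithm claimed in the patent, and its data is randomly generated.

```python
# Schematic sketch (not MIT's actual algorithm): match the current
# ground-penetrating-radar "fingerprint" against stored subsurface profiles
# to estimate where along the road the vehicle is.
import numpy as np

# Hypothetical prior survey: one subsurface reflection profile per meter of road.
rng = np.random.default_rng(0)
road_profiles = rng.normal(size=(500, 64))   # 500 m of road, 64-sample GPR traces

def localize(current_trace, profiles):
    """Return the along-road position (in meters) whose stored profile best matches."""
    scores = profiles @ current_trace        # simple matched-filter-style comparison
    return int(np.argmax(scores))

# Simulate driving at the 137 m mark: the live trace is the stored one plus noise
# (snow or rain above ground barely changes what the radar sees below it).
live_trace = road_profiles[137] + 0.1 * rng.normal(size=64)
print(localize(live_trace, road_profiles))   # expect 137
```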

These would be cars attuned to the “subsurface region,” as the patent describes it, falling somewhere between urban archaeology and speleo-cartography.

In fact, with only the slightest tweaking of this technology, you could easily imagine a scenario in which your vehicle would more or less seek out and follow archaeological features in the ground. Picture something like an enormous basement in Rome or central London—or perhaps a strange variation on the city built entirely for autonomous vehicles at the University of Michigan: say, a vast expanse of concrete built—with great controversy—over an ancient site of incredible archaeological richness.

Climbing into a small autonomous vehicle, however, and avidly referring to the interactive menu presented on a touchscreen dashboard, you feel the vehicle begin to move, inching forward into the empty room. The trick is that it is navigating according to the remnant outlines of lost foundations and buried structures hidden in the ground around you, like a boat passing over shipwrecks hidden in the still but murky water.

The vehicle shifts and turns, hovers and circles back again, outlining where buildings once stood. It is acting out a kind of invisible architecture of the city, where its routes are not roads at all but the floor plans of old buildings and, rather than streets or parking lots, you circulate through and pause within forgotten rooms buried in the ground somewhere below.

In this “subsurface region” that only your vehicle’s radar eyes can see, your car finds navigational clarity, calmly poking along the secret forms of the city.

In any case, for more on the MIT patent, check out the U.S. Patent and Trademark Office.

(Via New Scientist).