Geofencing and Investigatory Datasheds

There’s a lot to write about “geofencing” as a law enforcement practice, but, for now, I’ll just link to this piece in the New York Times about the use of device-tracking in criminal investigations.

There, we read about something called Sensorvault: “Sensorvault, according to Google employees, includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.”

To access Sensorvault, members of law enforcement can use a “geofence warrant.” This is a hybrid digital/geographic search warrant: investigators “specify an area and a time period,” and “Google gathers information from Sensorvault about the devices that were there. It labels them with anonymous ID numbers, and detectives look at locations and movement patterns to see if any appear relevant to the crime. Once they narrow the field to a few devices they think belong to suspects or witnesses, Google reveals the users’ names and other information.”

In other words, you can isolate a specific private yard, public park, city street, or even several residential blocks during a particular period of time, then—with the right warrant—every device found within or crossing through that window can be revealed.
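
Purely to make the mechanics concrete, here is a minimal sketch, in Python, of what that kind of spatiotemporal filter might look like. To be clear, this is an illustration under invented assumptions: the record format, field names, and simple bounding-box test are hypothetical, not Google’s actual schema or query system.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record format; Sensorvault's actual schema is not public.
@dataclass
class LocationPing:
    anonymous_id: str  # the anonymized device label described in the Times piece
    lat: float
    lon: float
    timestamp: datetime

def geofence_query(pings, lat_min, lat_max, lon_min, lon_max, start, end):
    """Step one of a geofence warrant, conceptually: collect the set of
    anonymized IDs for every device seen inside the bounding box during
    the specified time window."""
    return {
        p.anonymous_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and start <= p.timestamp <= end
    }
```

Everything after that, narrowing the field and then unmasking names, happens in the later steps of the warrant process described above.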

To a certain extent, the notion of a “crime scene” has thus been digitally expanded, taking on a kind of data shadow, as someone simply driving down a street or sitting in a park one day with their phone out is now within the official dataprint of an investigation. Or perhaps datashed—as in watershed—is a better metaphor.

But this, of course, is where things get strange, from both a political and a narrative point of view. Political, because why not just issue a permanent, standing geofence warrant for certain parts of the city in order to track entire targeted populations, whether they’re a demographic group or members of a political opposition? And narrative, because how does this change what it means to witness something, to overhear something, to be privy to something, to be an accomplice or unwilling participant? And is it you or your device that will be able to recount what really occurred?

From a narrative point of view, in other words, anyone whose phone was within the datashed of an event becomes a witness or participant, a character, someone whom an author—let alone an authority—now needs to track.

(For more thoughts on witnessing, narrative, and authors/authorities, I wrote a piece for The Atlantic last year that might be of interest.)

Cereal Bags of the Stratosphere

[Image: One of Google’s Loon balloons; screen grab from this video].

“The lab is 250 feet wide, 200 feet deep, and 70 feet tall. It’s a massive space where Google’s scientists can simulate the negative-60 degrees Celsius temperature of the stratosphere.” Alexis Madrigal on Google’s Project Loon balloons.

The future of the internet is cereal bag technology in the sky.

Computational Romanticism and the Dream Life of Driverless Cars

[Image by ScanLAB Projects for The New York Times Magazine].

Understanding how driverless cars see the world also means understanding how they mis-see things: the duplications, glitches, and scanning errors that, precisely because of their deviation from human perception, suggest new ways of interacting with and experiencing the built environment.

Stepping into or through the scanners of autonomous vehicles in order to look back at the world from their perspective is the premise of a short feature I’ve written for this weekend’s edition of The New York Times Magazine.

For a new series of urban images, Matt Shaw and Will Trossell of ScanLAB Projects tuned, tweaked, and augmented a LiDAR unit—one of the many tools used by self-driving vehicles to navigate—and turned it instead into something of an artistic device for experimentally representing urban space.

The resulting shots show the streets, bridges, and landmarks of London transformed through glitches into “a landscape of aging monuments and ornate buildings, but also one haunted by duplications and digital ghosts”:

The city’s double-decker buses, scanned over and over again, become time-stretched into featureless mega-structures blocking whole streets at a time. Other buildings seem to repeat and stutter, a riot of Houses of Parliament jostling shoulder to shoulder with themselves in the distance. Workers setting out for a lunchtime stroll become spectral silhouettes popping up as aberrations on the edge of the image. Glass towers unravel into the sky like smoke. Trossell calls these “mad machine hallucinations,” as if he and Shaw had woken up some sort of Frankenstein’s monster asleep inside the automotive industry’s most advanced imaging technology.
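
To see where those time-stretched mega-structures come from, here is a toy simulation; it assumes nothing about ScanLAB’s actual pipeline, only the basic behavior of a stationary scanner that merges every return into a single cloud, so that a moving bus is recorded at every position it occupied during the scan.

```python
import numpy as np

def scan_moving_object(object_points, velocity, scan_times):
    """Merge scans of a moving object into one static point cloud,
    the way a long-exposure terrestrial scan would.
    object_points: (N, 3) array in meters; velocity: (3,) in m/s."""
    frames = [object_points + np.asarray(velocity) * t for t in scan_times]
    return np.vstack(frames)  # every position the object occupied, superimposed

# A roughly bus-sized box of points (10 m long), driving 5 m/s along x,
# scanned continuously for 30 seconds:
bus = np.random.rand(1000, 3) * np.array([10.0, 2.5, 3.0])
cloud = scan_moving_object(bus, [5.0, 0.0, 0.0], np.arange(0.0, 30.0, 0.5))
print(np.ptp(cloud[:, 0]))  # x-extent stretches from ~10 m to ~157 m
```

A ten-meter bus becomes a smear more than a hundred and fifty meters long: a featureless mega-structure blocking the whole street, which is the class of glitch the passage above describes.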

Along the way I had the pleasure of speaking to Illah Nourbakhsh, a professor of robotics at Carnegie Mellon and the author of Robot Futures, a book I previously featured here on the blog back in 2013. Nourbakhsh is impressively adept at generating potential narrative scenarios—speculative accidents, we might call them—in which technology might fail or be compromised, and his take on the various perceptual risks or interpretive shortcomings posed by autonomous vehicle technology was fascinating.

[Image by ScanLAB Projects for The New York Times Magazine].

Alas, only one example from our long conversation made it into the final article, but it is worth repeating. Nourbakhsh used “the metaphor of the perfect storm to describe an event so strange that no amount of programming or image-recognition technology can be expected to understand it”:

Imagine someone wearing a T-shirt with a STOP sign printed on it, he told me. “If they’re outside walking, and the sun is at just the right glare level, and there’s a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can’t see his face anymore—well, now your car just sees a stop sign. The chances of all that happening are diminishingly small—it’s very, very unlikely—but the problem is we will have millions of these cars. The very unlikely will happen all the time.”
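
Nourbakhsh’s point is ultimately about scale, and a quick back-of-the-envelope calculation, using entirely made-up but not implausible figures, shows why the very unlikely becomes routine:

```python
# All numbers here are illustrative assumptions, not measured failure rates.
p_failure_per_mile = 1e-9        # a one-in-a-billion-miles "perfect storm"
miles_per_car_per_year = 10_000  # rough annual mileage per vehicle
fleet_size = 10_000_000          # a hypothetical ten-million-car fleet

expected_events_per_year = p_failure_per_mile * miles_per_car_per_year * fleet_size
print(expected_events_per_year)  # 100.0 -- the vanishingly rare, about twice a week
```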

The most interesting takeaway from this sort of scenario, however, is not that the technology is inherently flawed or limited, but that these momentary mirages and optical illusions are not, in fact, ephemeral: in a very straightforward, functional sense, they become physical features of the urban landscape because they exert spatial influences on the machines that (mis-)perceive them.

Nourbakhsh’s STOP sign might not “actually” be there—but it is actually there if it causes a self-driving car to stop.

Immaterial effects of machine vision become digitally material landmarks in the city, affecting traffic and influencing how machines safely operate. But, crucially, these are landmarks that remain invisible to human beings—and it is ScanLAB’s ultimate representational goal here to explore what it means to visualize them.

In the piece, I compare ScanLAB’s work to the heyday of European Romanticism: ScanLAB are, in effect, documenting an encounter with sublime and inhuman landscapes, only here those landscapes are not remote mountain peaks but the engineered products of computation. Writer Asher Kohn, however, suggested on Twitter that the work should instead be considered “Italian futurism made real,” with sweeping scenes of streets and buildings unraveling into space like digital smoke. It’s a great comparison, and worth developing at greater length.

For now, check out the full piece over at The New York Times Magazine: “The Dream Life of Driverless Cars.”