Afghan Twin

[Image: Screen-grab from an interview between John Peel and Aphex Twin, filmed in Cornwall’s Gwennap Pit; spotted via Xenogothic].

An anecdote I often use while teaching design classes—but also something I first read so long ago, I might actually be making the whole thing up—comes from an old interview with Richard D. James, aka Aphex Twin. I’ve tried some very, very lazy Googling to find the original source, but, frankly, I like the version I remember so much that I’m not really concerned with verifying its details.

In any case, the story goes like this: in an interview with a music magazine, published, I believe, sometime in the late 1990s, James claimed that he had been hired to remix a track by—if I remember correctly—The Afghan Whigs. Whether or not it was The Afghan Whigs, the point was that James reported being so unable to come up with new ideas for the band’s music that he simply sped their original song up to the length of a high-hat, then composed a new track of his own using that sound.

The upshot is that, if you were to slow down the resulting Aphex Twin track by several orders of magnitude, you would hear an Afghan Whigs song (or whatever) playing, in its entirety, every four or five minutes, bursting surreally out of the electronic blur before falling silent again, like a tide. Just cycling away, over and over again.
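
For what it’s worth, the basic trick doesn’t require much more than a sample-rate sleight of hand. Here is a minimal sketch in Python, purely as a thought experiment; the file names and the 3,600-fold factor are my own placeholders, not anything James has ever documented:

```python
# A minimal sketch of the trick as I remember it: "speed up" an entire
# song until it lasts roughly as long as a high-hat hit by re-labeling
# its sample rate, rather than throwing samples away, so the whole track
# stays recoverable simply by slowing it back down again.
from scipy.io import wavfile

FACTOR = 3600  # a ~4-minute song (240 s) / 3,600 = ~67 ms: high-hat territory

rate, song = wavfile.read("afghan_whigs.wav")

# Same samples, declared 3,600x faster: the track now "lasts" about 67 ms.
# (No real speaker reproduces audio shifted up into the MHz range, so this
# is the data trick behind the thought experiment, not a playable mixdown.)
wavfile.write("high_hat.wav", rate * FACTOR, song)

# Slowing it back down -- re-reading those samples at the original rate --
# yields the entire song again, cycling away inside one percussive blip.
fast_rate, blip = wavfile.read("high_hat.wav")
wavfile.write("recovered_song.wav", fast_rate // FACTOR, blip)
```

Re-labeling the rate, rather than resampling, is what keeps the fantasy intact: every sample of the original survives, so the slowed-down version really is the whole song, not a lowpassed ghost of it.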

What’s amazing about this, at least for me, is the range of possibilities it implies for everything from sonic camouflage—such as hiding acoustic information inside a mere beep in the overall background sound of a room—to art installations.

Imagine a scenario, for example, in which every little bleep and bloop in some song (or TV commercial or blockbuster film or ringtone) out there is actually an entire other song, accelerated; or imagine what this could do outside the field of acoustics altogether. An entire film, for example, sped up to a brief flash of light: you film the flash, slow down the resulting footage, and you’ve got 2001 playing in a public space, in full, hours compressed into a microsecond. It’s like the exact opposite of Bryan Boyer’s Very Slow Movie Player, with very fast nano-cinemas hidden in plain sight.

The world of sampling litigation—in which predatory legal teams exhaustively listen to new musical releases, flagging unauthorized uses of sampled material—has been widely covered, but, for this, it’s as if you’d need time cops, temporal attorneys slowing things down dramatically out of some weird fear that their client’s music has been used as a high-hat sound…

Anyway, for context, think of the inaudible commands used to trigger Internet-of-Things devices: “The ultrasonic pitches are embedded into TV commercials or are played when a user encounters an ad displayed in a computer browser,” Ars Technica reported back in 2015. “While the sound can’t be heard by the human ear, nearby tablets and smartphones can detect it. When they do, browser cookies can now pair a single user to multiple devices and keep track of what TV commercials the person sees, how long the person watches the ads, and whether the person acts on the ads by doing a Web search or buying a product.”

Or, as the New York Times wrote in 2018, “researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online—simply with music playing over the radio.”
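
To make the mechanism in both of those reports slightly more concrete: a tone pitched near the top of human hearing, mixed quietly into ordinary programme audio, is trivial for a nearby microphone to find again in the spectrum. The sketch below is a toy version of that idea only; the 19 kHz beacon frequency, the amplitudes, and the threshold are placeholder values of mine, not the parameters of any real ad-tracking system or voice-assistant attack.

```python
# Toy version of an inaudible audio beacon: embed a faint near-ultrasonic
# tone in ordinary audio, then detect it from the spectrum. All values
# (19 kHz carrier, amplitudes, threshold) are illustrative placeholders.
import numpy as np

RATE = 44100        # samples per second
BEACON_HZ = 19000   # near the upper limit of (most adults') hearing
THRESHOLD = 0.05    # detection threshold on per-bin spectral amplitude

def embed_beacon(audio, amplitude=0.2):
    """Mix a faint high-frequency tone into an existing audio buffer."""
    t = np.arange(len(audio)) / RATE
    return audio + amplitude * np.sin(2 * np.pi * BEACON_HZ * t)

def beacon_present(audio):
    """Look for energy at the beacon frequency in the audio's spectrum."""
    spectrum = np.abs(np.fft.rfft(audio)) / len(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1 / RATE)
    band = (freqs > BEACON_HZ - 50) & (freqs < BEACON_HZ + 50)
    return spectrum[band].max() > THRESHOLD

# One second of stand-in "programme audio" (just noise here), tested with
# and without the hidden tone mixed in.
programme = 0.1 * np.random.randn(RATE)
print(beacon_present(programme))                 # False
print(beacon_present(embed_beacon(programme)))   # True
```

The adversarial voice-command research described by the Times goes much further than a single tone, of course, hiding structured commands inside music or apparent noise, but the underlying asymmetry is the same: the microphone hears things you don’t.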

Now imagine some malevolent Aphex Twin doing audio-engineering work for a London advertising firm—or for the intelligence services of an adversarial nation-state—embedding ultra-fast sonic triggers in the audio environment. Only, here, it would actually be some weird dystopia in which the Internet of Things is secretly run by ubiquitous Afghan Whigs songs being played at 3,600 times their intended speed.

[Don’t miss Marc Weidenbaum’s book on Aphex Twin’s Selected Ambient Works Vol. 2.]

Geofencing and Investigatory Datasheds

There’s a lot to write about “geofencing” as a law enforcement practice, but, for now, I’ll just link to this piece in the New York Times about the use of device-tracking in criminal investigations.

There, we read about something called Sensorvault: “Sensorvault, according to Google employees, includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.”

To access Sensorvault, members of law enforcement can use a “geofence warrant.” This is a hybrid digital/geographic search warrant that will “specify an area and a time period” for which “Google gathers information from Sensorvault about the devices that were there. It labels them with anonymous ID numbers, and detectives look at locations and movement patterns to see if any appear relevant to the crime. Once they narrow the field to a few devices they think belong to suspects or witnesses, Google reveals the users’ names and other information.”

In other words, you can isolate a specific private yard, public park, city street, or even several residential blocks during a particular period of time; with the right warrant, every device found within or crossing through that window can then be revealed.
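
Reduced to its data structure, the query that such a warrant implies is almost banal, which is part of what makes it unsettling. Here is a toy sketch, with invented field names and a simple bounding box standing in for whatever Sensorvault actually does internally:

```python
# A toy sketch of the two-stage geofence query: given a store of
# time-stamped device locations, return the anonymized IDs of every
# device seen inside a bounding box during a time window. Field names
# and the rectangular "fence" are simplifications of my own; nothing
# here reflects how Sensorvault is actually implemented.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationRecord:
    anon_id: str      # anonymous device ID, per the warrant's first stage
    lat: float
    lon: float
    seen_at: datetime

def geofence_query(records, lat_min, lat_max, lon_min, lon_max, start, end):
    """Return the anonymous IDs of every device inside the box and window."""
    return {
        r.anon_id
        for r in records
        if lat_min <= r.lat <= lat_max
        and lon_min <= r.lon <= lon_max
        and start <= r.seen_at <= end
    }

# e.g. a few residential blocks, over a single evening
suspect_ids = geofence_query(
    records=[],  # stand-in for the location store itself
    lat_min=34.052, lat_max=34.056,
    lon_min=-118.248, lon_max=-118.243,
    start=datetime(2019, 4, 1, 18, 0),
    end=datetime(2019, 4, 1, 23, 0),
)
```

The second stage described in the Times piece, in which the handful of IDs detectives deem relevant are unmasked and tied to names, is just another lookup.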

To a certain extent, the notion of a “crime scene” has thus been digitally expanded, taking on a kind of data shadow, as someone simply driving down a street or sitting in a park one day with their phone out is now within the official dataprint of an investigation. Or perhaps datashed—as in watershed—is a better metaphor.

But this, of course, is where things get strange, from both a political and a narrative point of view. Political, because why not just issue a permanent, standing geofence warrant for certain parts of the city in order to track entire targeted populations, whether they’re a demographic group or members of a political opposition? And narrative, because how does this change what it means to witness something, to overhear something, to be privy to something, to be an accomplice or unwilling participant? And is it you or your device that will be able to recount what really occurred?

From a narrative point of view, in other words, anyone whose phone was within the datashed of an event becomes a witness or participant, a character, someone who an author—let alone an authority—now needs to track.

(For more thoughts on witnessing, narrative, and authors/authorities, I wrote a piece for The Atlantic last year that might be of interest.)

Cereal Bags of the Stratosphere

[Image: One of Google’s Loon balloons; screen grab from this video].

“The lab is 250 feet wide, 200 feet deep, and 70 feet tall. It’s a massive space where Google’s scientists can simulate the negative-60 degrees Celsius temperature of the stratosphere.” Alexis Madrigal on Google’s Project Loon balloons.

The future of the internet is cereal bag technology in the sky.

Computational Romanticism and the Dream Life of Driverless Cars

[Image by ScanLAB Projects for The New York Times Magazine].

Understanding how driverless cars see the world also means understanding how they mis-see things: the duplications, glitches, and scanning errors that, precisely because of their deviation from human perception, suggest new ways of interacting with and experiencing the built environment.

Stepping into or through the scanners of autonomous vehicles in order to look back at the world from their perspective is the premise of a short feature I’ve written for this weekend’s edition of The New York Times Magazine.

For a new series of urban images, Matt Shaw and Will Trossell of ScanLAB Projects tuned, tweaked, and augmented a LiDAR unit—one of the many tools used by self-driving vehicles to navigate—and turned it instead into something of an artistic device for experimentally representing urban space.

The resulting shots show the streets, bridges, and landmarks of London transformed through glitches into “a landscape of aging monuments and ornate buildings, but also one haunted by duplications and digital ghosts”:

The city’s double-decker buses, scanned over and over again, become time-stretched into featureless mega-structures blocking whole streets at a time. Other buildings seem to repeat and stutter, a riot of Houses of Parliament jostling shoulder to shoulder with themselves in the distance. Workers setting out for a lunchtime stroll become spectral silhouettes popping up as aberrations on the edge of the image. Glass towers unravel into the sky like smoke. Trossell calls these “mad machine hallucinations,” as if he and Shaw had woken up some sort of Frankenstein’s monster asleep inside the automotive industry’s most advanced imaging technology.
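
Part of what produces those ghosts is simply that a scan like this is stitched together over time, sweep after sweep, under the quiet assumption that nothing moves in between. The toy sketch below shows the geometry of that artifact and nothing more; the numbers are invented, and this is not ScanLAB’s pipeline or any vendor’s actual LiDAR processing.

```python
# Toy illustration of scan-merging ghosts: a scanner that assumes a static
# world and merges many sweeps into one point cloud smears anything that
# moved between sweeps into a long duplicated structure, while static
# geometry lands neatly on top of itself. All values are invented.
import numpy as np

SWEEPS = 40           # number of scans merged into one cloud
SWEEP_INTERVAL = 0.1  # seconds between sweeps
BUS_SPEED = 8.0       # metres per second, travelling along x

# A static facade: a line of points that should (and does) stay put.
facade = np.column_stack([np.linspace(0, 30, 300), np.full(300, 10.0)])

# A 12 m bus, sampled as points along its side, 5 m from the scanner.
bus_shape = np.column_stack([np.linspace(0, 12, 120), np.full(120, 5.0)])

merged = []
for i in range(SWEEPS):
    t = i * SWEEP_INTERVAL
    merged.append(facade)                                      # identical every sweep
    merged.append(bus_shape + np.array([BUS_SPEED * t, 0.0]))  # drifts every sweep
cloud = np.vstack(merged)

bus_points = cloud[np.isclose(cloud[:, 1], 5.0)]
extent = bus_points[:, 0].max() - bus_points[:, 0].min()
print(f"a 12 m bus smeared across {extent:.1f} m of the merged cloud")
# roughly 12 + 8 * 0.1 * 39 = ~43 m: the "time-stretched" mega-structure
```

Everything stationary re-registers onto itself, sweep after sweep; anything in motion is redrawn at a new position each time, which is how a double-decker bus ends up rendered as a featureless structure the length of a street.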

Along the way I had the pleasure of speaking to Illah Nourbakhsh, a professor of robotics at Carnegie Mellon and the author of Robot Futures, a book I previously featured here on the blog back in 2013. Nourbakhsh is impressively adept at generating potential narrative scenarios—speculative accidents, we might call them—in which technology might fail or be compromised, and his take on the various perceptual risks or interpretive shortcomings posed by autonomous vehicle technology was fascinating.

[Image by ScanLAB Projects for The New York Times Magazine].

Alas, only one example from our long conversation made it into the final article, but it is worth repeating. Nourbakhsh used “the metaphor of the perfect storm to describe an event so strange that no amount of programming or image-recognition technology can be expected to understand it”:

Imagine someone wearing a T-shirt with a STOP sign printed on it, he told me. “If they’re outside walking, and the sun is at just the right glare level, and there’s a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can’t see his face anymore—well, now your car just sees a stop sign. The chances of all that happening are diminishingly small—it’s very, very unlikely—but the problem is we will have millions of these cars. The very unlikely will happen all the time.”

The most interesting takeaway from this sort of scenario, however, is not that the technology is inherently flawed or limited, but that these momentary mirages and optical illusions are not, in fact, ephemeral: in a very straightforward, functional sense, they become a physical feature of the urban landscape because they exert spatial influences on the machines that (mis-)perceive them.

Nourbakhsh’s STOP sign might not “actually” be there—but it is actually there if it causes a self-driving car to stop.

Immaterial effects of machine vision become digitally material landmarks in the city, affecting traffic and influencing how machines safely operate. But, crucially, these are landmarks that remain invisible to human beings—and it is ScanLAB’s ultimate representational goal here to explore what it means to visualize them.

While, in the piece, I compare ScanLAB’s work to the heyday of European Romanticism—arguing that ScanLAB are, in effect, documenting an encounter with sublime and inhuman landscapes that, here, are not remote mountain peaks but the engineered products of computation—writer Asher Kohn suggested on Twitter that it should instead be considered “Italian futurism made real,” with sweeping scenes of streets and buildings unraveling into space like digital smoke. It’s a great comparison, and worth developing at greater length.

For now, check out the full piece over at The New York Times Magazine: “The Dream Life of Driverless Cars.”