Driving on Mars and the Theater of Machines

[Image: Self-portrait on Mars; via NASA].

Science has published a short profile of a woman named Vandi Verma. She is “one of the few people in the world who is qualified to drive a vehicle on Mars.”

Verma has driven a series of remote vehicles on another planet over the years, including, most recently, the Curiosity rover.

[Image: Another self-portrait on Mars; via NASA].

Driving it involves a strange sequence of simulations, projections, and virtual maps that are eventually beamed out from planet to planet, the robot at the other end acting like a kind of wheeled marionette as it then spins forward along its new route. Here is a long description of the process from Science:

Each day, before the rover shuts down for the frigid martian night, it calls home, Verma says. Besides relaying scientific data and images it gathered during the day, it sends its precise coordinates. They are downloaded into simulation software Verma helped write. The software helps drivers plan the rover’s route for the next day, simulating tricky maneuvers. Operators may even perform a dry run with a duplicate rover on a sandy replica of the planet’s surface in JPL’s Mars Yard. Then the full day’s itinerary is beamed to the rover so that it can set off purposefully each dawn.

What’s interesting here is not just the notion of an interplanetary driver’s license—a qualification that allows one to control wheeled machines on other planets—but the fact that there is still such a clear human focus at the center of the control process.

The fact that Science‘s profile of Verma begins with her driving agricultural equipment on her family farm in India, an experience that quite rapidly scaled up to the point of guiding rovers across the surface of another world entirely, only reinforces the sense of surprise here—that farm equipment in India and NASA’s Mars rover program bear technical similarities.

They are, in a sense, interplanetary cousins, simultaneously conjoined and air-gapped across two worlds.

[Image: A glimpse of the dreaming; photo by Alexis Madrigal, courtesy of The Atlantic].

Compare this to the complex process of programming and manufacturing a driverless vehicle. In an interesting piece published last summer, Alexis Madrigal explained that Google’s self-driving cars operate inside a Borgesian 1:1 map of the physical world, a “virtual track” coextensive with the landscape you and I stand upon and inhabit.

“Google has created a virtual world out of the streets their engineers have driven,” Madrigal writes. And, like the Mars rover program we just read about, “They pre-load the data for the route into the car’s memory before it sets off, so that as it drives, the software knows what to expect.”

The software knows what to expect because the vehicle, in a sense, is not really driving on the streets outside Google’s Mountain View campus; it is driving in a seamlessly parallel simulation of those streets, never leaving the world of the map so precisely programmed into its software.

Like Christopher Walken’s character in the 1983 film Brainstorm, Google’s self-driving cars are operating inside a topographical dream state, we might say, seeing only what the headpiece allows them to see.

[Image: Navigating dreams within dreams: (top) from Brainstorm; (bottom) a Google self-driving car, via Google and re:form].

Briefly, recall a recent essay by Karen Levy and Tim Hwang called “Back Stage at the Machine Theater.” That piece looked at the atavistic holdover of old control technologies—such as steering wheels—in vehicles that are actually computer-controlled.

There is no need for a human-manipulated steering wheel, in other words, other than to offer a psychological point of focus for the vehicle’s passengers, to give them the feeling that they can still take over.

This is the “machine theater” that the title of their essay refers to: a dramaturgy made entirely of technical interfaces that deliberately produce a misleading illusion of human control. These interfaces are “placebo buttons,” they write, that transform all-but-autonomous technical systems into “theaters of volition” that still appear to be under manual guidance.

I mention this essay here because the Science piece with which this post began also explains that NASA’s rover program is being pushed toward a state of greater autonomy.

“One of Verma’s key research goals,” we read, “has been to give rovers greater autonomy to decide on a course of action. She is now working on a software upgrade that will let Curiosity be true to its name. It will allow the rover to autonomously select interesting rocks, stopping in the middle of a long drive to take high-resolution images or analyze a rock with its laser, without any prompting from Earth.”
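Out of curiosity, here is a cartoon of what “autonomously select interesting rocks” might reduce to computationally—a minimal sketch whose scoring criteria and names are entirely invented, not NASA’s actual software:

```python
def interest_score(rock):
    """Toy saliency score: favor large, bright, unusually shaped targets."""
    return rock["size_cm"] * 0.5 + rock["albedo"] * 10 + rock["edge_irregularity"] * 5

def pick_targets(detected_rocks, budget=2):
    """Mid-drive: rank every rock the cameras detect, keep the top few."""
    ranked = sorted(detected_rocks, key=interest_score, reverse=True)
    return ranked[:budget]

rocks = [
    {"name": "target-a", "size_cm": 12, "albedo": 0.3, "edge_irregularity": 0.9},
    {"name": "target-b", "size_cm": 40, "albedo": 0.1, "edge_irregularity": 0.2},
    {"name": "target-c", "size_cm": 5,  "albedo": 0.8, "edge_irregularity": 0.1},
]
for rock in pick_targets(rocks):
    print("stop and image:", rock["name"])  # no prompting from Earth required
```

The real software would obviously weigh geological criteria far subtler than these, but the structural point stands: the decision loop moves onboard.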

[Image: Volitional portraiture on Mars; via NASA].

The implication here is that, as the Mars rover program becomes “self-driving,” it will also be transformed into a vast “theater of volition,” in Levy’s and Hwang’s formulation: that Earth-bound “drivers” might soon find themselves reporting to work simply to flip placebo levers and push placebo buttons as these vehicles go about their own business far away.

It will become more ritual than science, more icon than instrument—a strangely passive experience, watching a distant machine navigate simulated terrain models and software packages coextensive with the surface of Mars.

Greek Gods, Moles, and Robot Oceans

[Image: The Very Low Frequency antenna field at Cutler, Maine, a facility for communicating with at-sea submarine crews].

There have been about a million stories over the past few weeks that I’ve been dying to write about, but I’ll just have to clear through a bunch here in one go.

1) First up is a fascinating request for proposals from the Defense Advanced Research Projects Agency, or DARPA, which is looking to build a “Positioning System for Deep Ocean Navigation.” It has the handy acronym of POSYDON.

POSYDON will be “an undersea system that provides omnipresent, robust positioning” in the deep ocean either for crewed submarines or for autonomous seacraft. “DARPA envisions that the POSYDON program will distribute a small number of acoustic sources, analogous to GPS satellites, around an ocean basin,” but I imagine there is some room for creative maneuvering there.

The idea of an acoustic deep-sea positioning system that operates like GPS is pretty interesting to imagine, especially considering the strange transformations sound undergoes as it travels through water. Accurately establishing that a U.S. submarine has, in fact, heard an acoustic beacon—and that its apparent distance from that point is not being distorted by intervening water temperature, ocean currents, or even the large-scale presence of marine life—is obviously an extraordinary challenge.
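To make the GPS analogy concrete, here is a minimal sketch of how a submerged vehicle might fix its position from the one-way travel times of those acoustic sources. Everything here is an assumption for illustration—the beacon layout, the constant sound speed, the function names—and the constant-speed assumption is precisely what a real system could not make, given the distortions described above:

```python
import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1500.0  # m/s; a real system would model the water column's sound-speed profile

# Hypothetical acoustic sources distributed around an ocean basin: (x, y, depth) in meters
beacons = np.array([
    [0.0,     0.0,     4000.0],
    [50000.0, 0.0,     3800.0],
    [0.0,     50000.0, 4200.0],
    [50000.0, 50000.0, 3900.0],
])

def residuals(pos, travel_times):
    """Mismatch between acoustically measured ranges and geometric ranges."""
    measured = travel_times * SOUND_SPEED
    geometric = np.linalg.norm(beacons - pos, axis=1)
    return measured - geometric

def fix_position(travel_times, initial_guess):
    """Least-squares position fix, GPS-style, from acoustic travel times."""
    return least_squares(residuals, initial_guess, args=(travel_times,)).x

# Simulate a vehicle at a known point, then recover that point from the travel times
true_pos = np.array([20000.0, 30000.0, 1000.0])
times = np.linalg.norm(beacons - true_pos, axis=1) / SOUND_SPEED
print(fix_position(times, np.array([25000.0, 25000.0, 500.0])))  # ≈ true_pos
```

All of the difficulty the post gestures at lives in that single SOUND_SPEED constant.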

As DARPA points out, without such a system in place, “undersea vehicles must regularly surface to receive GPS signals and fix their position, and this presents a risk of detection.” The ultimate goal, then, would be to launch ultra-long-term undersea missions, even establish permanently submerged robotic networks that have no need to breach the ocean’s surface. Cthulhoid, they will forever roam the deep.

[Image: An unmanned underwater vehicle; U.S. Navy photo by S. L. Standifird].

If you think you’ve got what it takes, click over to DARPA and sign up.

2) A while back, I downloaded a free academic copy of a fascinating book called Space-Time Reference Systems by Michael Soffel and Ralf Langhans.

Their book “presents an introduction to the problem of astronomical–geodetical space–time reference systems,” or radically offworld navigation reference points for when a craft is, in effect, well beyond any known or recognizable landmarks in space. Think of it as a kind of new longitude problem.

The book is filled with atomic clocks, quasars potentially repurposed as deep-space orientation beacons, the long-term shifting of “astronomical reference frames,” and page after page of complex math I make no claim to understand.

However, I mention this here because the POSYDON program is almost the becoming-cosmic of the ocean: that is, the depths of the sea reimagined as a vast and undifferentiated space within which mostly robotic craft will have to orient themselves on long missions. For a robotic submarine, the ocean is its universe.

3) The POSYDON program is just one part of a much larger militarization of the deep seas. Consider the fact that the U.S. Office of Naval Research is hoping to construct permanent “hubs” on the seafloor for recharging robot submarines.

These “hubs” would be “unmanned, underwater pods where robots can recharge undetected—and securely upload the intelligence they’ve gathered to Navy networks.” Hubs will be places where “unmanned underwater vehicles (UUVs) can dock, recharge, upload data and download new orders, and then be on their way.”

“You could keep this continuous swarm of UUVs [Unmanned Underwater Vehicles] wherever you wanted to put them… basically indefinitely, as long as you’re rotating (some) out periodically for mechanical issues,” a Naval war theorist explained to Breaking Defense.

The ultimate vision is a kind of planet-spanning robot constellation: “The era of lone-wolf submarines is giving away [sic] to underwater networks of manned subs, UUVs combined with seafloor infrastructure such as hidden missile launchers—all connected to each other and to the rest of the force on the surface of the water, in the air, in space, and on land.” This would include, for example, the “upward falling payloads” program described on BLDGBLOG a few years back.

Even better, from a military communications perspective, these hubs would also act as underwater relay points for broadcasting information through the water—or what we might call the ocean as telecommunications medium—something that currently relies on very-low-frequency radio.

There is much more detail on this over at Breaking Defense.

4) Last summer, my wife and I took a quick trip up to Maine where we decided to follow a slight detour after hiking Mount Katahdin to drive by the huge antenna field at Cutler, a Naval communications station found way out on a tiny peninsula nearly on the border with Canada.

[Image: The antenna field at Cutler, Maine].

We talked to the security guard for a while about life out there on this little peninsula, but we were unable to get a tour of the actual facility, sadly. He mostly joked that the locals have a lot of conspiracy theories about what the towers are actually up to, including their potential health effects—which isn’t entirely surprising, to be honest, considering the massive amounts of energy used there and the frankly otherworldly profile these antennas have on the horizon—but you can find a lot of information about the facility online.

So what does this thing do? “The Navy’s very-low-frequency (VLF) station at Cutler, Maine, provides communication to the United States strategic submarine forces,” a January 1998 white paper called “Technical Report 1761” explains. It is basically an east coast version of the so-called Project Sanguine, a U.S. Navy proposal from the late 1960s that “would have involved 41 percent of Wisconsin,” turning the Cheese State into a giant military antenna.

Cutler’s role in communicating with submarines may or may not have come to an end, making it more of a research facility today; but the idea that—even if only until the end of the Cold War—isolated radio technicians on a foggy peninsula in Maine were up there broadcasting silent messages into the ocean, meant to be heard only by U.S. submarine crews pinging around in the deepest canyons of the Atlantic, is both poetic and eerie.

[Image: A diagram of the antennas, from the aforementioned January 1998 research paper].

The towers themselves are truly massive, and you can easily see them from nearby roads, if you happen to be anywhere near Cutler, Maine.

In any case, I mention all this because behemoth facilities such as these could be made altogether redundant by autonomous underwater communication hubs, such as those described by Breaking Defense.

5) “The robots are winning!” Daniel Mendelsohn wrote in The New York Review of Books earlier this month. The opening paragraphs of his essay are awesome, and I wish I could just republish the whole thing:

We have been dreaming of robots since Homer. In Book 18 of the Iliad, Achilles’ mother, the nymph Thetis, wants to order a new suit of armor for her son, and so she pays a visit to the Olympian atelier of the blacksmith-god Hephaestus, whom she finds hard at work on a series of automata:

…He was crafting twenty tripods
to stand along the walls of his well-built manse,
affixing golden wheels to the bottom of each one
so they might wheel down on their own [automatoi] to the gods’ assembly
and then return to his house anon: an amazing sight to see.

These are not the only animate household objects to appear in the Homeric epics. In Book 5 of the Iliad we hear that the gates of Olympus swivel on their hinges of their own accord, automatai, to let gods in their chariots in or out, thus anticipating by nearly thirty centuries the automatic garage door. In Book 7 of the Odyssey, Odysseus finds himself the guest of a fabulously wealthy king whose palace includes such conveniences as gold and silver watchdogs, ever alert, never aging. To this class of lifelike but intellectually inert household helpers we might ascribe other automata in the classical tradition. In the Argonautica of Apollonius of Rhodes, a third-century-BC epic about Jason and the Argonauts, a bronze giant called Talos runs three times around the island of Crete each day, protecting Zeus’s beloved Europa: a primitive home alarm system.

Mendelsohn goes on to discuss “the fantasy of mindless, self-propelled helpers that relieve their masters of toil,” and it seems incredibly interesting to read it in the context of DARPA’s now even more aptly named POSYDON program and the permanent undersea hubs of the Office of Naval Research. Click over to The New York Review of Books for the whole thing.

6) If the oceanic is the new cosmic, then perhaps the terrestrial is the new oceanic.

The Independent reported last month that magnetically powered underground robot “moles”—effectively subterranean drones—could potentially be used to ferry objects around beneath the city. They are this generation’s pneumatic tubes.

The idea would be to use “a vast underground network of pipes in a bid to bypass the UK’s ever more congested roads.” The company behind the scheme? What else but Mole Solutions, who refer to their own speculative infrastructure as a network of “freight pipelines.”

[Image: Courtesy of Mole Solutions].

Taking a page from the Office of Naval Research and DARPA, though, perhaps these subterranean robot constellations could be given “hubs” and terrestrial beacons with which to orient themselves; combine with the bizarre “self-burying robot” from 2013, and declare endless war on the surface of the world from below.

See more at the Independent.

7) Finally, in terms of this specific flurry of links, Denise Garcia looks at the future of robot warfare and the dangerous “secrecy of emerging weaponry” that can act without human intervention over at Foreign Affairs.

She suggests that “nuclear weapons and future lethal autonomous technologies will imperil humanity if governed poorly. They will doom civilization if they’re not governed at all.” On the other hand, as Daniel Mendelsohn points out, we have, in a sense, been dealing with the threat of a robot apocalypse since someone first came up with the myth of Hephaestus.

Garcia’s short essay covers a lot of ground previously seen in, for example, Peter Singer’s excellent book Wired for War; that’s not a reason to skip one for the other, of course, but to read both. See more at Foreign Affairs.

(Thanks to Peter Smith for suggesting we visit the antennas at Cutler).

Urban CAT Scan

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

The London-based ScanLab Projects, featured here many times before, have completed a new commission, this time from the British Postal Museum & Archive, to document the so-called “Mail Rail,” a network of underground tunnels that opened back in 1927.

As Subterranea Britannica explains, the tunnels were initially conceived as a system of pneumatic package-delivery tubes, an “atmospheric railway,” as it was rather fantastically described at the time, “by which a stationary steam engine would drive a large fan which could suck air out of an air tight tube and draw the vehicle towards it or blow air to push them away.”

That “vehicle” would have been a semi-autonomous wheeled cart bearing parcels for residents of Greater London.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

Alas, but unsurprisingly, this vision of an air-powered subterranean communication system for a vast metropolis of many millions of residents was replaced by a rail-based one, with narrow, densely packed cars running along a system of tracks beneath the London streets.

Thus the Mail Rail system was born.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

While the story of the system itself is fascinating, it has also been told elsewhere.

The aforementioned Subterranea Britannica is a perfect place to start, but urban explorers have also gained entrance for narrative purposes of their own, including the long write-up over at Placehacking.

That link includes the incredible detail that, “on Halloween night 2010, ravers took over a massive derelict Post Office building in the city and threw an illegal party of epic proportions. When pictures from the party emerged, we were astonished to find that a few of them looked to be of a tiny rail system somehow accessed from the building.”

Surely, this should be the setting for a new novel: some huge and illegal party in an abandoned building at an otherwise undisclosed location in the city results in people breaking into or discovering a forgotten, literally underground network, alcohol-blurred photographs of which are later recognized as having unique urban importance.

Something is down there, the hungover viewers of these photographs quickly realize, something vague and hazily glimpsed in the unlit background of some selfies snapped at a rave.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

This would all be part of the general mysticism of infrastructure that I hinted at in an earlier post, the idea that the peripheral networks through which the city actually functions lie in wait, secretly connecting things from below or wrapping, Ouroboros-like, around us on the edges of things.

These systems are the Matrix, we might say in modern mythological terms, or the room where Zeus moves statues of us all around on chessboards: an invisible realm of tacit control and influence that we’ve come to know unimaginatively as nothing but infrastructure. But infrastructure is now the backstage pass, the esoteric world behind the curtain.

In any case, with this handful of party pictures in hand, a group of London explorers tried to infiltrate the system.

After hours of exploration, we finally found what we thought might be a freshly bricked up wall into the mythical Mail Rail the partygoers had inadvertently found… We went back to the car and discussed the possibility of chiselling the brick out. We decided that, given how soon it was after the party, the place was too hot to do that just now and we walked away, vowing to try again in a couple of months.

It took some time—but, eventually, it worked.

They found the tunnels.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

The complete write-up over at Placehacking is worth the read for the rest of that particular story.

But ScanLab now enter the frame as documentarians of a different sort, with a laser-assisted glimpse of this underground space down to millimetric details.

Their 3D point clouds afford a whole new form of representation, a kind of volumetric photography that cuts through streets and walls to reveal the full spatial nature of the places on display.

The incredible teaser video, pieced together from 223 different laser scanning sessions, reveals this with dramatic effect, featuring a virtual camera that smoothly passes beneath the street like a swimmer through the waves of the ocean.



As the British Postal Museum & Archive explains, the goal of getting ScanLab Projects down into their tunnels was “to form a digital model from which any number of future interactive, visual, animated and immersive experiences can be created.”

In other words, it was a museological project: the digital preservation of an urban underworld that few people—Placehacking‘s write-up aside—have actually seen.

For example, the Museum writes, the resulting laser-generated 3D point clouds might “enable a full 3D walkthrough of hidden parts of the network or an app that enables layers to be peeled away to see the original industrial detail beneath.”

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

Unpeeling the urban onion has never been so gorgeous as we leap through walls, peer upward through semi-transparent streets, and see signs hanging in mid-air from both sides simultaneously.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

Tunnels become weird ropey knots like smoke rings looped beneath the city as the facades of houses take on the appearance of old ghosts, remnants of another era gazing down at the flickering of other dimensions previously lost in the darkness below.

(Thanks again to the British Postal Museum & Archive for permission to post the images).

Infrastructure as Processional Space

[Image: A view of the Global Containers Terminal in Bayonne; Instagram by BLDGBLOG].

I just spent the bulk of the day out on a tour of the Global Containers Terminal in Bayonne, New Jersey, courtesy of the New York Infrastructure Observatory.

That’s a new branch of the institution previously known as the Bay Area Infrastructure Observatory, who hosted the MacroCity event out in San Francisco last May. They’re now leading occasional tours around NYC infrastructure (a link at the bottom of this post lets you join their mailing list).

[Image: A crane so large my iPhone basically couldn’t take a picture of it; Instagram by BLDGBLOG].

There were a little more than two dozen of us, a mix of grad students, writers, and people whose work in some way connected them to logistics, software, or product development—which, unsurprisingly, meant that everyone had only a few degrees of separation from the otherworldly automation on display there on the peninsula, this open-air theater of mobile cranes and mounted gantries whirring away in the precise loading and unloading of international container ships.

The clothes we were wearing, the cameras we were using to photograph the place, even the pens and paper many of us were using to take notes, all had probably entered the United States through this very terminal, a kind of return of the repressed as we brought those orphaned goods back to their place of disembarkation.

[Images: The bottom half of the same crane; Instagram by BLDGBLOG].

Along the way, we got to watch a room full of human controllers load, unload, and stack containers, with the interesting caveat that they—that is, humans—are only required when a crane comes within ten feet of an actual container. Beyond ten feet, automation sorts it out.

When the man I happened to be watching reached the critical point where his container effectively went on auto-pilot, not only did his monitor literally go blank, making it clear that he had seen enough and that the machines had now taken over, but he referred to this strong-armed virtual helper as “Auto Schwarzenegger.”

“Auto Schwarzenegger’s got it now,” he muttered, and the box then disappeared from the screen, making its invisible way to its proper location.

[Image: Waiting for the invisible hand of Auto Schwarzenegger; Instagram by BLDGBLOG].

Awesomely—in fact, almost unbelievably—when we entered the room, with this 90% automated landscape buzzing around us outside on hundreds of acres of mobile cargo in the wintry weather, they were listening to “Space Oddity” by David Bowie.

“Ground control to Major Tom…” the radio sang, as they toggled joysticks and waited for their monitors to light up with another container.

[Image: Out in the acreage; Instagram by BLDGBLOG].

The infinitely rearrangeable labyrinth of boxes outside was by no means easy to drive through, and we actually found ourselves temporarily walled in on the way out, just barely slipping between two containers that blocked off that part of the yard.

This was “Damage Land,” our guide from the port called it, referring to the place where all damaged containers came to be stored (and eventually sold).

[Image: One of thousands of stacked walls in the infinite labyrinth of the Global Containers Terminal; Instagram by BLDGBLOG].

One of the most consistently interesting aspects of the visit was learning what was and was not automated, including where human beings were required to stand during some of the processes.

For example, at one of several loading/unloading stops, the human driver of each truck was required to get out of the vehicle and stand on a pressure-sensitive pad in the ground. If nothing corresponding to the driver’s weight was felt by sensors on the pad, the otherwise fully automated machines toiling above would not snap into action.
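Reduced to a sketch, that arrangement is a deadman switch; the weight thresholds and names below are my own inventions, not the terminal’s actual parameters:

```python
DRIVER_WEIGHT_RANGE_KG = (40.0, 160.0)  # invented bounds for "something person-sized"

def crane_may_start(pad_reading_kg: float) -> bool:
    """Interlock: the automated machinery only activates while something
    plausibly human-weighted stands on the pad—i.e., while the driver is
    verifiably out of the truck and out of the crane's path."""
    low, high = DRIVER_WEIGHT_RANGE_KG
    return low <= pad_reading_kg <= high

assert not crane_may_start(0.0)   # empty pad: driver may still be in the cab
assert crane_may_start(82.5)      # driver standing clear: machines snap into action
```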

This idea—that a human being standing on a pressure-sensitive pad could activate a sequence of semi-autonomous machines and processes in the landscape around them—surely has all sorts of weird implications for everything from future art or museum installations to something far darker, including the fully automated prison yards of tomorrow.

[Image: One of several semi-automated gate stations around the terminal; Instagram by BLDGBLOG].

This precise control of human circulation was also built into the landscape—or perhaps coded into the landscape—through the use of optical character recognition software (OCR) and radio-frequency ID chips. Tag-reading stations were located at various points throughout the yard, sending drivers either merrily on their exactly scripted way to a particular loading/unloading dock or sometimes actually barring that driver from entry. Indeed, bad behavior was punished, it was explained, by blocking a driver from the facility altogether for a certain amount of time, locking them out in a kind of reverse-quarantine.
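A minimal sketch of that dispatch-and-lockout logic—every name, and the length of the penalty period, invented for illustration:

```python
from datetime import datetime, timedelta

LOCKOUT = timedelta(hours=24)  # invented figure; the actual penalty period wasn't specified

class GateStation:
    """Toy model of an RFID/OCR gate that scripts routes or bars entry."""
    def __init__(self):
        self.assignments = {}   # tag id -> assigned loading/unloading dock
        self.locked_until = {}  # tag id -> datetime when the lockout expires

    def flag_bad_behavior(self, tag, now):
        # The "reverse-quarantine": bar the driver from the whole facility for a while
        self.locked_until[tag] = now + LOCKOUT

    def admit(self, tag, now):
        if self.locked_until.get(tag, datetime.min) > now:
            return None                    # barred from entry
        return self.assignments.get(tag)   # merrily on their exactly scripted way

gate = GateStation()
gate.assignments["TRUCK-042"] = "dock 7"
print(gate.admit("TRUCK-042", datetime.now()))   # -> dock 7
gate.flag_bad_behavior("TRUCK-042", datetime.now())
print(gate.admit("TRUCK-042", datetime.now()))   # -> None (locked out)
```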

Again, the implications here for other types of landscapes were both fascinating and somewhat ominous; but, more interestingly, as the trucks all dutifully lined up to pass through the so-called “OCR building” on the far edge of the property, I was struck by how much it felt like watching a ceremonial gate at the outer edge of some partially sentient Forbidden City built specifically for machines.

In other words, we often read about the ceremonial use of urban space in an art historical or urban planning context, whether that means Renaissance depictions of religious processions or the ritualized passage of courtiers through imperial capitals in the Far East. However, the processional cities of tomorrow are being built right now, and they’re not for humans—they’re both run and populated by algorithmic traffic control systems and self-operating machine constellations, in a thoroughly secular kind of ritual space driven by automated protocols more than by democratic legislation.

These—ports and warehouses, not churches and squares—are the processional spaces of tomorrow.

[Image: Procession of the True Cross (1496) by Gentile Bellini, via Wikimedia].

It’s also worth noting that these spaces are trickling into our everyday landscape from the periphery—which is exactly where we are now most likely to find them, simply referred to or even dismissed as mere infrastructure. However, this overly simple word masks the often startlingly unfamiliar forms of spatial and temporal organization on display. This actually seems so much to be the case that infrastructural tourism (such as today’s trip to Bayonne) is now emerging as a way for people to demystify and understand this peripheral realm of inhuman sequences and machines.

In any case, as the day progressed we learned a tiny bit about the “Terminal Operating System”—the actual software that keeps the whole place humming—and it was then pointed out, rather astonishingly, that the owner of this facility is the Ontario Teachers’ Pension Plan, an almost Thomas Pynchonian level of financial weirdness that added a whole new layer of narrative intricacy to the day.

If this piques your interest in the Infrastructure Observatory, consider following them on Twitter: @InfraObserve and @NYInfraObserve. And to join the NY branch’s mailing list, try this link, which should also let you read their past newsletters.

[Image: The Container Guide; Instagram by BLDGBLOG].

Finally, the Infrastructure Observatory’s first publication is also now out, and we got to see the very first copy. The Container Guide by Tim Hwang and Craig Cannon should be available for purchase soon through their website; check back there for details (and read a bit more about the guide over at Edible Geography).

(Thanks to Spencer Wright for the driving and details, and to the Global Containers Terminal Bayonne for their time and hospitality!)

The Los Angeles County Department of Ambient Music vs. The Superfires of Tomorrow

You might have seen the news last month that two students from George Mason University developed a way to put out fires using sound.

“It happens so quickly you almost don’t believe it,” the Washington Post reported at the time. “Seth Robertson and Viet Tran ignite a fire, snap on their low-rumbling bass frequency generator and extinguish the flames in seconds.”

Indeed, it seems to work so well that “they think the concept could replace the toxic and messy chemicals involved in fire extinguishers.”



There are about a million interesting things here, but I was totally captivated by two points, in particular.

At one point in the video, co-inventor Viet Tran suggests that the device could be used in “swarm robotics,” where it would be “attached to a drone” and then used to put out fires, whether wildfires or blazes in large buildings, such as the recent skyscraper fire in Dubai. But consider how this is accomplished; from the Washington Post:

The basic concept, Tran said, is that sound waves are also “pressure waves, and they displace some of the oxygen” as they travel through the air. Oxygen, we all recall from high school chemistry, fuels fire. At a certain frequency, the sound waves “separate the oxygen [in the fire] from the fuel. The pressure wave is going back and forth, and that agitates where the air is. That specific space is enough to keep the fire from reigniting.”
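To put one hedged number on that—my own back-of-envelope gloss, not the inventors’ math—for a plane sound wave, the peak displacement of the air itself scales inversely with frequency:

$$\xi = \frac{p}{\rho c \, \omega} = \frac{p}{2\pi f \rho c}$$

where $p$ is the pressure amplitude, $\rho$ the density of air, $c$ the speed of sound, and $f$ the frequency. At a loud 20 Pa (roughly 120 dB) and 40 Hz, that gives $\xi \approx 20 / (2\pi \cdot 40 \cdot 1.2 \cdot 343) \approx 0.2$ mm of physical back-and-forth motion in the air—a hundred times more than the same pressure amplitude would produce at 4,000 Hz, which is at least consistent with the effect showing up only at bass frequencies.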

While I’m aware that it’s a little strange this would be the first thing to cross my mind, surely this same effect could be weaponized, used to thin the air of oxygen and cause targeted asphyxiation wherever these robot swarms are sent next. After all, even something as simple as an over-loud bass line in your car can physically collapse your lungs: “One man was driving when he experienced a pneumothorax, characterised by breathlessness and chest pain,” the BBC reported back in 2004. “Doctors linked it to a 1,000 watt ‘bass box’ fitted to his car to boost the power of his stereo.”

In other words, motivated by a large enough defense budget—or simply by unadulterated misanthropy—you could thus suffocate whole cities with an oxygen-thinning swarm of robot sound systems in the sky. Those “Ride of the Valkyries”-blaring speakers mounted on Robert Duvall’s helicopter in Apocalypse Now might be playing something far more sinister over the battlefields of tomorrow.

However, the other, more ethically acceptable point of interest here is the possible landscape effect such an invention might have—that is, the possibility that this could be scaled up to fight forest fires. There are a lot of problems with this, of course, including the fact that, even if you deplete a fire of oxygen, if the temperature remains high, it will simply flicker back to life and keep burning.

[Image: The Grateful Dead “wall of sound,” via audioheritage.org].

Nonetheless, there is something awesomely compelling in the idea that a wildfire burning in the woods somewhere in the mountains of Arizona might be put out by a wall of speakers playing ultra-low bass lines, rolling specially designed patterns of sound across the landscape, so quiet you almost can’t hear it.

A hum rumbles across the roots and branches of burning trees; there is a moment of violent trembling, as if an unseen burst of wind has blown through; and then the flames go out, leaving nothing but tendrils of smoke and this strange acoustic presence buzzing further into the fires up ahead.

Instead of emergency amphibious aircraft dropping lake water on remote conflagrations, we’d have mobile concerts of abstract sound—the world’s largest ambient raves—broadcast through National Parks and on the edges of desert cities.

Desperate, Los Angeles County hires a Department of Ambient Music to save the city from a wave of drought-augmented superfires; equipped with keyboards and effects pedals, wearing trucker hats and plaid, these heroes of the drone wander forth to face the inferno, extinguishing flames with lush carpets of anoxic sound.

(Spotted via New York Magazine).

Intermediary Geologies

[Image: From “H / AlCuTaAu” by Revital Cohen and Tuur Van Balen].

For a project called “H / AlCuTaAu”—named after the chemical elements that comprise its final form—artists Revital Cohen and Tuur Van Balen created what they call “an artificial mineral mined from technological artefacts.”

[Image: From “H / AlCuTaAu” by Revital Cohen and Tuur Van Balen].

As they explain in the accompanying, very brief artists’ statement, “Precious metals and stones were mined out of technological objects and transformed back into mineral form. The artificial ore was constructed out of gold (Au), copper (Cu), tantalum (Ta), aluminium (Al) and whetstone; all taken from tools, machinery and computers that were sourced from a recently bankrupt factory.”

Of course, our devices have been geology all along—refined aggregates of the Earth’s surface repurposed as commercial properties and given newfound electrical life—but it’s incredibly interesting to reverse-engineer from our phones, circuitboards, and hard drives entirely new mineral compounds.

[Image: From “H / AlCuTaAu” by Revital Cohen and Tuur Van Balen].

The project also—albeit in the guise of speculative art—very much implies the future of metal recycling, where our future “mines” are as likely to be huge piles of discarded electronics as vast holes in the Earth.

In the same way that some of you might have tumbled rocks on your childhood desks for weeks at a time to scrape, abrade, and polish them down to a sparkling sheen, perhaps the mineworks of tomorrow will be benchtop recycling units extracting rare earth metals from obsolete consumer goods.

Armed with drills and ovens, we’ll just cook our own devices down to a primordial goo that can be selectively reshaped into objects.

[Images: From “H / AlCuTaAu” by Revital Cohen and Tuur Van Balen].

You might recall the discovery of so-called “plastiglomerates.” As Science reported last summer, a “new type of rock cobbled together from plastic, volcanic rock, beach sand, seashells, and corals has begun forming on the shores of Hawaii.” Part plastic, part rock, plastiglomerates are the new geology.

Put another way, this is terrestrial science in the age of the Anthropocene, discovering that even the rocks around us are, in a sense, artificial by-products of our own activities, industrial materials fossilized in an elaborate planetary masquerade that now passes for “nature.”

[Image: A “plastiglomerate”—part plastic, part geology—photographed by Patricia Corcoran, via Science].

Here, however, in Cohen’s and Van Balen’s work, these new, artistically fabricated conglomerates are more like alchemical distillations of everyday products: phones, radios, and computers speculatively cooked, simmered, bathed, acid-etched, and reworked into an emergent geology.

[Image: From “H / AlCuTaAu” by Revital Cohen and Tuur Van Balen].

It is a geology hidden all along in the objects we use, communicate with, and sell, a reduced mineralogy of electronics and machines that will someday form a new layer of the Earth.

(Via The New Aesthetic).

Ghosts of Home Geography

Noted scam artist and “Facebook fugitive” Paul Ceglia, hoping to escape from a recently imposed state of house-arrest, “sliced off his GPS ankle monitor and affixed it to a crudely built contraption in his rural New York residence,” Ars Technica reports.

The GPS sensor’s subsequent movements were then meant to maintain the illusion that he was still at home.

[Image: The GPS contraption; photo via Ars Technica].

According to the U.S. Marshals, “While conducting a security sweep of the home, the Task Force Officers observed, among other things, a hand-made contraption connected to the ceiling, from which Ceglia’s GPS bracelet was hanging. The purpose of the contraption appeared to be to keep the bracelet in motion using a stick connected to a motor that would rotate or swing the bracelet.”

The “contraption” appears to have been almost laughably basic, but it’s not hard to imagine something more ambitious, complete with tracks wandering from room to room to make it appear that someone is truly inside the residence.

In fact, the idea of faking your own location by attaching your GPS anklet to a Roomba, for example, and letting it wander around the house all day is perversely brilliant, like something from a 21st-century Alfred Hitchcock film. Of course, it wouldn’t take very long to deduce from the algorithmically perfect straight lines and zig-zag edge geometry of your Roomba’s movements that it is not, in fact, a real person walking around in there—or perhaps it would just look like you’ve taken up some bizarre new form of home exercise.

But a much more believable algorithm for faking the movements of a real, living resident could be part of some dark-market firmware update—new algorithms for the becoming-criminal of everyday machines.
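Sketched as code, such an algorithm might look something like the following—the floor plan, the dwell-time statistics, and every name here are invented, purely to illustrate the idea of statistically human wandering:

```python
import random

# Hypothetical floor plan: which rooms connect to which
ROOMS = {
    "kitchen":     ["hallway"],
    "hallway":     ["kitchen", "living room", "bedroom", "bathroom"],
    "living room": ["hallway"],
    "bedroom":     ["hallway", "bathroom"],
    "bathroom":    ["hallway", "bedroom"],
}

# Rough human dwell times per room in minutes (mean, spread)—pure guesses
DWELL = {"kitchen": (20, 10), "hallway": (1, 0.5), "living room": (90, 40),
         "bedroom": (300, 60), "bathroom": (8, 4)}

def ghost_resident(start="living room"):
    """Yield (room, minutes) pairs that mimic an unhurried person at home,
    unlike a Roomba's algorithmically perfect straight lines."""
    room = start
    while True:
        mean, spread = DWELL[room]
        yield room, max(0.5, random.gauss(mean, spread))
        room = random.choice(ROOMS[room])  # wander to an adjacent room

for i, (room, minutes) in enumerate(ghost_resident()):
    print(f"loiter in the {room} for {minutes:.0f} minutes")
    if i >= 4:
        break  # demo only; the real 'ghost' would steer the robot instead
```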

[Image: Roomba-based LED art, via artselectronic].

A whole new class of products could be devised: part burglar deterrent, part anti-police-tracking device, they would meander and bump their way through a home’s interior, creating the geographic illusion that someone is moving around in there, passing room to room at certain moments.

It would be a GPS surrogate or implied resident, a locational ghost built from satellite signals and semi-autonomous robotic machines.

Touchscreen Landscapes

[Image: Screen grab via military.com].

This new, partly digital sand table interface developed for military planning would seem to have some pretty awesome uses in an architecture or landscape design studio.

Using 3D terrain data—in the military’s case, gathered in real-time from its planetary network of satellites—and a repurposed Kinect sensor, the system can adapt to hand-sculpted transformations in the sand by projecting new landforms and elevations down onto those newly molded forms.

You can thus carve a river in real-time through the center of the sandbox, and watch as projected water flows in—

[Image: Screen grabs via military.com].

—or you can simply squeeze sand together into new hills, and even make a volcanic crater.

[Image: Screen grabs via military.com].

The idea of projecting adaptive landscape imagery down onto a sandbox is brilliant; being able to interact with both the imagery and the sand itself by way of a Kinect sensor is simply awesome.
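As a guess at the pipeline—none of this is the actual military code, and the sensor math is drastically simplified—the core loop is just: read a depth frame, convert it to elevations, color it, and project the colors back down onto the sand:

```python
import numpy as np

def depth_to_elevation(depth_mm, sensor_height_mm=1000.0):
    """The Kinect reports distance down to the sand; nearer sand = higher terrain."""
    return sensor_height_mm - depth_mm

def colorize(elevation, water_level=0.0):
    """Map elevation to RGB: blue below the water line, green-to-brown above."""
    img = np.zeros(elevation.shape + (3,), dtype=np.uint8)
    wet = elevation < water_level
    img[wet] = (0, 60, 200)  # projected "water" pools in carved channels
    t = np.clip((elevation - water_level) / 200.0, 0, 1)[~wet]
    img[~wet, 0] = (80 + 100 * t).astype(np.uint8)   # dry land browns with height
    img[~wet, 1] = (160 - 60 * t).astype(np.uint8)
    return img

# One frame of the loop, with a fake 4x4 depth image standing in for the Kinect
depth = np.full((4, 4), 1000.0)
depth[1:3, 1:3] = 850.0  # a hand-squeezed hill in the middle of the sandbox
frame = colorize(depth_to_elevation(depth), water_level=50.0)
# An overhead projector would now display `frame` back onto the sand, and the
# loop repeats many times a second, so the imagery tracks your hands in real time.
```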

Imagine scaling this thing up to the size of a children’s playground, and you’d never see your kids again, lost in a hypnotic topography of Minecraft-like possibilities, or just donate some of these things to a landscape design department and lose several hours (weeks?) of your life, staring ahead in a state of geomorphic Zen at this touchscreen landscape of rolling hills and valleys, with its readymade rivers and a thousand on-demand plateaus.

The military, of course, uses it to track and kill people, filling their sandbox with projections of targeting coordinates and geometric representations of tanks.

[Image: Screen grabs via military.com].

But there’s no reason those coordinates couldn’t instead be the outlines of a chosen site for your proposed architecture project, or why those little clusters of trucks and hidden snipers couldn’t instead be models of new buildings or parks you’re hoping will be constructed.

Watch the original video for more.

Drive-By Archaeology

[Image: From a patent filed by MIT, courtesy U.S. Patent and Trademark Office].

The technical systems by which autonomous, self-driving vehicles will safely navigate city streets are usually presented as some combination of real-time scanning and a detailed mnemonic map or virtual reference model created for that vehicle.

As Alexis Madrigal has written for The Atlantic, autonomous vehicles are, in essence, always driving within a virtual world—like Freudian machines, they are forever unable to venture outside a sphere of their own projections:

The key to Google’s success has been that these cars aren’t forced to process an entire scene from scratch. Instead, their teams travel and map each road that the car will travel. And these are not any old maps. They are not even the rich, road-logic-filled maps of consumer-grade Google Maps.
They’re probably best thought of as ultra-precise digitizations of the physical world, all the way down to tiny details like the position and height of every single curb. A normal digital map would show a road intersection; these maps would have a precision measured in inches.

The vehicle can thus respond to the city insofar as its own spatial expectations are never sufficiently contradicted by the evidence at hand: if the city, as scanned by the vehicle’s array of sensors and instruments, corresponds to the vehicle’s own internal expectations, then it can make the next rational decision (to turn a corner, stop at an intersection, wait for a passing train, etc.).
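Reduced to a sketch—thresholds and feature names invented—that decision rule is disarmingly simple:

```python
def sufficiently_unsurprised(scan, expected_features, tolerance=0.9):
    """Does the live scan confirm enough of the pre-loaded map's predictions?"""
    matches = sum(1 for f in expected_features if f in scan)
    return matches / max(len(expected_features), 1) >= tolerance

def next_action(scan, expected_features, planned_action):
    if sufficiently_unsurprised(scan, expected_features):
        return planned_action         # the world agrees with the map: proceed
    return "stop and ask for help"    # the map has been contradicted

# The pre-loaded map predicts a curb, a stop line, and a signal head at this corner
expected = {"curb@2.1m", "stop-line", "signal-head"}
scan = {"curb@2.1m", "stop-line", "signal-head", "pedestrian"}
print(next_action(scan, expected, "turn right"))  # -> turn right
```

Everything that makes this hard in practice hides in how those “features” are extracted and matched—but the logical skeleton really is a comparison between expectation and evidence.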

However, I was very interested to see that an MIT research team led by Byron Stanley had applied for a patent last autumn that would allow autonomous vehicles to guide themselves using ground-penetrating radar. It is the subterranean realm that they would thus be peering into, in addition to the plein air universe of curb heights and Yield signs, reading the underworld for its own peculiar landmarks.

[Image: From a patent filed by MIT, courtesy U.S. Patent and Trademark Office].

How would it work? Imagine, the MIT team suggests, that your autonomous vehicle is in a landscape blanketed in snow—volumetrically deformed by all that extra mass and thus robbed not only of accurate points of measurement but also of any, if not all, computer-recognizable landmarks. Or, Stanley adds, imagine that you have passed into a “GPS-denied area.”

In either case, you and your self-driving vehicle run the very real risk of falling off the map altogether, stuck in a machine that cannot find its way forward and, for all intents and purposes, can no longer even tell road from landscape.

[Image: From a patent filed by MIT, courtesy U.S. Patent and Trademark Office].

Stanley’s group has thus come up with the interesting suggestion that you could simply give autonomous vehicles the ability to see through the earth’s surface and scan for recognizable systems of pipework or other urban infrastructure down below. Your vehicle could then just follow those systems through the obscuring layers of rain, snow, or even tumbleweed to its eventual destination.

These would be cars attuned to the “subsurface region,” as the patent describes it, falling somewhere between urban archaeology and speleo-cartography.
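One plausible way to implement “following” buried pipework—my guess at the approach, with invented data—is to cross-correlate the live radar trace against a stored subsurface profile and treat the best-aligned offset as the position fix:

```python
import numpy as np

def best_offset(live_trace, stored_profile):
    """Slide the live GPR trace along the stored subsurface profile and
    return the offset (in samples) where the two agree best."""
    scores = np.correlate(stored_profile - stored_profile.mean(),
                          live_trace - live_trace.mean(), mode="valid")
    return int(np.argmax(scores))

# Invented subsurface profile along a road: reflections from a pipe and a culvert
stored = np.zeros(200)
stored[60], stored[140] = 1.0, 0.8

live = stored[50:100].copy()                    # the vehicle sees a 50-sample window
live += np.random.normal(0, 0.05, live.shape)   # snow overhead changes nothing below

print(best_offset(live, stored))  # ~50: we are 50 samples along the mapped road
```

The appeal of the subsurface, as the patent suggests, is exactly that: pipes and foundations don’t move when it snows.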

In fact, with only the slightest tweaking of this technology, you could easily imagine a scenario in which your vehicle would more or less seek out and follow archaeological features in the ground. Picture something like an enormous basement in Rome or central London—or perhaps a strange variation on the test city built entirely for autonomous vehicles at the University of Michigan—only here it is a vast expanse of concrete built, with great controversy, over an ancient site of incredible archaeological richness.

Climbing into a small autonomous vehicle, however, and avidly referring to the interactive menu presented on a touchscreen dashboard, you feel the vehicle begin to move, inching forward into the empty room. The trick is that it is navigating according to the remnant outlines of lost foundations and buried structures hidden in the ground around you, like a boat passing over shipwrecks hidden in the still but murky water.

The vehicle shifts and turns, hovers and circles back again, outlining where buildings once stood. It is acting out a kind of invisible architecture of the city, where its routes are not roads at all but the floor plans of old buildings and, rather than streets or parking lots, you circulate through and pause within forgotten rooms buried in the ground somewhere below.

In this “subsurface region” that only your vehicle’s radar eyes can see, your car finds navigational clarity, calmly poking along the secret forms of the city.

In any case, for more on the MIT patent, check out the U.S. Patent and Trademark Office.

(Via New Scientist).

Perspectival Objects

[Image: A perspectival representation of the “ideal city,” artist unknown].

There’s an interesting throwaway line in The Verge‘s write-up of yesterday’s Amazon phone launch, where blogger David Pierce remarks that the much-hyped public unveiling of Amazon’s so-called Fire Phone was “oddly focused on art history and perspective.”

As another post at the site points out, “Amazon CEO Jeff Bezos likened it to the move from flat artwork to artwork with geometric perspective which began in the 14th century.”

These are passing comments, sure, and, from Amazon’s side, it’s more marketing hype than anything like rigorous phenomenological theorizing. Yet there’s something strangely compelling in the idea that a seemingly gratuitous new consumer product—just another smartphone—might actually owe its allegiance to a different technical lineage, one less connected to the telecommunications industry and more from the world of architectural representation.

[Image: Jeff Bezos as perspectival historian. Courtesy of The Verge].

It would be a smartphone that takes us back to, say, Albrecht Dürer and his gridded drawing machines, making the Fire Phone a kind of perspectival object that deserves a place, however weird, in architectural history. Erwin Panofsky, we might say, would have used a Fire Phone—or at least he would have written a blog post about it.

In this context, the amazing image of billionaire Jeff Bezos standing on stage, giving a kind of off-the-cuff history of perspectival rendering, surely belongs in future works of architectural history. Smiling and schoolteacher-like, Bezos gestures in front of an infinite grid ghosted-in over this seminal work of urban scenography, in one moment aiming to fit his product within a very particular, highly Western tradition of representing the built environment.

[Image: Courtesy of The Verge].

The launch of the Fire Phone did indeed give perspectival representation its due, showing how a three-dimensionally or relationally accurate perception of geometric space can change quite dramatically with only a small move of the viewer’s own head.

The phone’s “dynamic perspective,” engineered to correct this, seems a little rickety at best, but it is meant as a way to account for otherwise inconsequential movements of the viewer through the landscape, whether it’s a crowded city street or the vast interiors of a hotel. To do so requires an almost comical amount of technical hand-waving. From The Verge:

The key to making dynamic perspective work is knowing exactly where the user’s head is at all times, in real time, many times per second, Bezos said. It’s something that the company has been working on for four years, and [the] best way to do it is with computer vision, he went on to note. The single, standard front-facing camera wasn’t sufficient because its field of view was too narrow—so Amazon included four additional cameras with a much wider field of view to continuously capture a user’s head. At the end of the day, it features four specialized front-facing cameras in addition to the standard front-facing camera found near the earpiece, two of which can be used in case the other cameras were covered; it uses the best two at any given time. Lastly, Amazon included infrared lights in each camera to allow the phone to work in the dark.
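As a guess at what “uses the best two at any given time” might reduce to—names and confidence scores invented—the selection step could be as simple as picking the pair of cameras with the clearest view of the face and triangulating from those:

```python
from itertools import combinations

def best_pair(cameras):
    """Pick the two cameras whose face detections we trust most; a camera
    covered by a thumb reports confidence 0.0 and drops out automatically.
    Assumes at least two cameras can still see the user."""
    visible = [c for c in cameras if c["confidence"] > 0.0]
    return max(combinations(visible, 2),
               key=lambda pair: pair[0]["confidence"] * pair[1]["confidence"])

# Four wide-angle corner cameras; one is occluded by the user's hand
cameras = [
    {"name": "top-left",     "confidence": 0.95},
    {"name": "top-right",    "confidence": 0.90},
    {"name": "bottom-left",  "confidence": 0.0},
    {"name": "bottom-right", "confidence": 0.70},
]
a, b = best_pair(cameras)
print(a["name"], b["name"])  # -> top-left top-right; triangulate the head from these two
```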

Five hundred years ago, we’d instead be reading about some fabulous new system of mirrors, lenses, prisms, and strings, all tied back to or operated by way of complexly engineered works of geared furniture. Unfolding tables and adjustable chairs, with operable flaps and windows.

[Image: One of several perspectival objects—contraptions for producing spatially accurate drawings—by Albrecht Dürer].

These precursors of the Fire Phone, after seemingly endless acts of fine-tuning, would then, and only then, allow their users to see the scene before them with three-dimensional accuracy.

Now, replace those prisms and mirrors with multiple forward-facing cameras and infrared sensors, and market the resulting object to billions of potential users in front of gridded scenes of Western urbanism, and you’ve got the strange moment that happened yesterday, where a smartphone aimed to collapse all of Western art history into a single technical artifact, a perspectival object many of us will soon be carrying in our bags and pockets.

[Image: Another “ideal city,” artist unknown].

More interestingly, though, with its odd focus “on art history and perspective,” Amazon’s event raises the question of how electronic mediation of the built environment might be affecting how our cities are designed in the first place—how we see buildings, streets, and cities through the dynamic lens of automatic perspective correction and other visual algorithms.

Put another way, is there a type of architecture—Classical, Romanesque—particularly well-suited for perspectival objects like the Fire Phone, and, conversely, are there types of built space that throw these devices off altogether? Further, could artificial environments that exceed the rendering capacity of smartphones and other digital cameras be deliberately designed—and, if so, what would they “look like” to those sensors and objects?

Recall that, at one point in his demonstration, Bezos explained how Amazon’s new interface “uses different layers to hide and show information on the map like Yelp reviews,” effectively tagging works of architecture with digital metadata in a kind of Augmented Reality Lite.

But what this suggests, together with Bezos’s use of “ideal city” imagery, is that smartphone urbanism will have its own peculiar stylistic needs. Perhaps, if visually defined, that will mean that phones will require cities to be gridded and legible, with clear spatial differentiation between buildings and objects in order to function most accurately—in order to line up with the clouds of virtual tags we will soon be placing all over the structures around us. Perhaps, if more GPS-defined, that will mean overlapping buildings and spaces are just fine, but they nonetheless must allow unblocked access to satellite signals above so that things don’t get confused down at street level—a kind of celestial perspectivism where, from the phone’s point of view, the roof is the new facade, the actual “front” of the building through which vital navigational signals must travel.

Either way, the possibility that there is a particular type of space, or a particular type of urbanism, most suited to the perspectival needs of new smartphones is totally fascinating. Perhaps in retrospect, this photograph of Jeff Bezos, grinning at the world in front of a gigantic image of Western perspective, will become a canonical architectural image of where digital objects and urban design intersect.

An Occult History of the Television Set


The origin of the television set was heavily shrouded in both spiritualism and the occult, Stefan Andriopoulos writes in his new book Ghostly Apparitions. In fact, as its very name implies, the television was first conceived as a technical device for seeing at a distance: like the telephone (speaking at a distance) and telescope (viewing at a distance), the television was intended as an almost magical box through which we could watch distant events unfold, a kind of technological crystal ball.

Andriopoulos’s book puts the TV into a long line of other “optical media” that go back at least as far as popular Renaissance experiments involving technologically-induced illusions, such as concave mirrors, magic lanterns, disorienting walls of smoke, and other “ghostly apparitions” and “phantasmagoric projections” created by specialty devices. These were conjuring tricks, sure—mere public spectacles, so to speak—but successfully achieving them required sophisticated understandings of basic physical factors such as light, shadow, and acoustics, making an audience see—and, most importantly, believe in—the illusion.

A Magic Lantern for Watching Events at a Distance

What’s central to Andriopoulos’s argument is that these devices incorporated earlier experimental instruments devised specifically for pursuing supernatural research—for visualizing the invisible and showing the subtle forces at work in everyday life. In his words, these were “devices developed in occult research”—including explicitly “televisionlike devices”—that had been invented in the name of spiritualism toward the end of the 19th century and that, only a decade or two later, “played a constitutive role in the emergence of radio and television.”

[Image: From Etienne-Gaspard Robertson’s 1834 study of technical phantasmagoria, via Ghostly Apparitions].

In Andriopoulos’s words, this was simply part of “the reciprocal interaction between occultism and the natural sciences that characterized the cultural construction of new technological media in the late nineteenth century,” a “two-directional exchange between occultism and technology.” New forms of broadcast technology and belief in the occult? No big deal.

So, while the television itself—the object you and I most likely know as the utterly mundane fixture of family distraction sitting centrally ensconced in a nearby living room—might not be a supernatural mechanism, it nonetheless descends from a strange and convoluted line of esoteric experimentation, including early attempts at controlling electromagnetic transmissions, directing radio waves, and even experiencing various forms of so-called “remote viewing.”

The idea of a medium takes on a double meaning here, Andriopoulos explains, as the word refers both to the media—in the sense of a professional world of publishing and transmission—and to the medium, in the sense of a specific, vaguely shamanic person who acts as a psychic or seer. The medium thus acts as an intermediary between humans and the supernatural world in a very literal sense.

Indeed, in Andriopoulos’s version of television’s origin story, the notion of spiritual clairvoyance was very much part of the overall intention of the device.

Clairvoyance—a word that literally means clear vision, yet that has now come to refer almost exclusively to a supernatural ability to see things at a distance or before events even happen—offered an easy metaphor for this new mechanism.

Television promised clairvoyance in the sense that a TV could allow seeing without interference or noise. It would give viewers a way to tune into and clearly see a broadcast’s invisible signals—with the implication that an esoteric remote-viewing apparatus with forgotten supernatural intentions is now mounted and enshrined in nearly everyone’s home.

[Image: A “moving face” transmitted by John Logie Baird at a public demonstration of TV in 1926 (photo via the BBC)].

I’ll leave it to curious readers to look for Andriopoulos’s book itself—with the caveat that it is quite heavy on German idealism and rather light on real tech history—but it is worth mentioning the fact that at least one other technical aspect of the 20th-century television also followed a very bizarre historical trajectory.

Part Tomb, Part Church, Part Planetarium

The cathode ray—a vacuum tube technology found in early television sets—took on an unexpected and extraordinary use in the work of gonzo Norwegian inventor Kristian Birkeland, who used cathode rays in his doomed attempt to build a scale model of the solar system.

I genuinely love this story and I have written about it elsewhere, including both here on BLDGBLOG and in The BLDGBLOG Book, but it’s well worth retelling.

In a nutshell, Birkeland was the first scientist to correctly hypothesize the origins of the Northern Lights, rightly deducing from his own research into electromagnetic phenomena that the aurora borealis was actually caused by interactions between charged particles constantly streaming toward earth from the sun and the earth’s own protective magnetic field. This produced the extraordinary displays of light Birkeland had seen in the planet’s far north.

However, as Birkeland fell deeper into an eventually fatal addiction to extreme levels of caffeine and a slow-acting hypnotic drug called Veronal, he also—awesomely—became fixated on the weirdly impossible goal of precisely modeling the Northern Lights in miniature. He sought to build a kind of Bay Model of the Northern Lights.

[Image: Kristian Birkeland stares deeply into his universal simulator (via)].

As author Lucy Jago tells Birkeland’s amazing story in her book The Northern Lights, he was intent on producing a kind of astronomical television set: a “televisionlike device,” in Andriopoulos’s words, whose inner technical workings would not just broadcast actions and characters seen elsewhere, but would actually model the electromagnetic secrets of the universe.

As Jago describes his project, Birkeland “drew up plans for a new machine unlike anything that had been made before.” It resembled “a spacious aquarium,” she writes, a shining box that would act as “a window into space.”

The box would be pumped out to create a vacuum and he would use larger globes and a more powerful cathode to produce charged particles. With so much more room he would be able to see effects, obscured in the smaller tubes, that could take his Northern Lights theory one step further—into a complete cosmogony, a theory of the origins of the universe.

It was a multifaceted and extraordinary undertaking. With it, Jago points out, “Birkeland was able to simulate Saturn’s rings, comet tails, and the Zodiacal Light. He even experimented with space propulsion using cathode rays. Sophisticated photographs were taken of each simulation, to be included in the next volume of Birkeland’s great work, which would discern the electromagnetic nature of the universe and his theories about the formation of the solar system.”

However, this “spacious aquarium” was by no means the end of Birkeland’s manic (tele)vision.

[Image: From Birkeland’s The Norwegian Aurora Polaris Expedition 1902-1903, Vol. 1: On the Cause of Magnetic Storms and The Origin of Terrestrial Magnetism (via)].

His ultimate goal—devised while near-death in a hotel room in Egypt—was to construct a vacuum chamber partially excavated into the solid rock of a mountain peak, an insane mixture of tomb, church, and planetarium.

The resulting cathedral-like space—think of it as a three-dimensionally immersive, landscape-scale television set carved directly into bedrock—would thus be an artificial cavern inside of which flickering electric mirages of stars, planets, comets, and aurorae would spiral and glow for a hypnotized audience.

Birkeland wrote about this astonishing plan in a letter to a friend. He was clearly excited about what he called a “great idea I have had.” It would be—and the emphasis is all Birkeland’s—”a museum for the discovery of the Earth’s magnetism, magnetic storms, the nature of sunspots, of planets—their nature and creation.”

His excitement was justified, and the ensuing description is worth quoting at length; you can almost feel the caffeine. “On a little hill,” he scribbled, presumably on his Egyptian hotel’s own stationery, perhaps even featuring a little image of the pyramids embossed in its letterhead, reminding him of the ambitions of long-dead pharaohs, “I will build a dome of granite, the walls will be a meter thick, the floor will be formed of the mountain itself and the top of the dome, fourteen meters in diameter, will be a gilded copper sphere. Can you guess what the dome will cover? When I’m boasting I say to my friends here ‘next to God, I have the greatest vacuum chamber in the world.’ I will make a vacuum chamber of 1,000 cubic metres and, every Sunday, people will have the opportunity to see a ring of Saturn ten metres in diameter, sunspots like no one else can do better, Zodiacal Light as evocative as the natural one and, finally, auroras… four meters in diametre. The same sphere will serve as Saturn, the sun, and Earth, and will be driven round by a motor.”

Every Sunday, as if attending Mass, congregants of this artificial solar system would thus hike up some remote mountain trail, heading deep into the cavernous and immersive television of Birkeland’s own astronomy, hypnotized by the explosive whirls of its peculiar, peacock-like displays of electromagnetism, shimmering cathedrals of artificially controlled planetary light.

[Image: Cropping in on the pic seen above (via)].

Seen in the context of the occult mechanisms, psychic TVs, and clairvoyant media technologies of Stefan Andriopoulos’s book, Birkeland’s story reveals just one particularly monumental take on the other-worldly possibilities implied by televisual media, bypassing the supernatural altogether to focus on something altogether more extreme: a direct visual engagement with nature itself, in all its blazing detail.

Of course, Birkeland’s cathode ray model of the solar system might not have conjured ghosts or visualized the spiritual energies that Andriopoulos explores in his book, but it did try to bring the heavens down to earth in the form of a 1,000 cubic meter television set partially hewn from mountain granite.

It was the most awesome TV ever attempted, a doomed and never-realized invention that nonetheless puts all of today’s visual media to shame.

(An earlier version of this post previously appeared on Gizmodo).