Cereal Bags of the Stratosphere

[Image: One of Google’s Loon balloons; screen grab from this video].

“The lab is 250 feet wide, 200 feet deep, and 70 feet tall. It’s a massive space where Google’s scientists can simulate the negative-60 degrees Celsius temperature of the stratosphere.” Alexis Madrigal on Google’s Project Loon balloons.

The future of the internet is cereal bag technology in the sky.

Electronic Plantlife

[Image: A rose-circuit, courtesy Linköping University].

In a newly published paper called “Electronic plants,” researchers from Linköping University in Sweden describe the process by which they were able to “manufacture” what they call “analog and digital organic electronic circuits and devices” inside living plants.

The plants not only conducted electrical signals but, as Science News points out, the team also “induced rose leaves to light up and change color.”

Indeed, in their way of thinking, plants have been electronic gadgets all along: “The roots, stems, leaves, and vascular circuitry of higher plants are responsible for conveying the chemical signals that regulate growth and functions. From a certain perspective, these features are analogous to the contacts, interconnections, devices, and wires of discrete and integrated electronic circuits.”

[Image: Bioluminescent foxfire mushrooms (used purely for illustrative effect), via Wikipedia].

Here’s the process in a nutshell:

The idea of putting electronics directly into trees for the paper industry originated in the 1990s while the LOE team at Linköping University was researching printed electronics on paper. Early efforts to introduce electronics in plants were attempted by Assistant Professor Daniel Simon, leader of the LOE’s bioelectronics team, and Professor Xavier Crispin, leader of the LOE’s solid-state device team, but a lack of funding from skeptical investors halted these projects.
Thanks to independent research money from the Knut and Alice Wallenberg Foundation in 2012, Professor Berggren was able to assemble a team of researchers to reboot the project. The team tried many attempts of introducing conductive polymers through rose stems. Only one polymer, called PEDOT-S, synthesized by Dr. Roger Gabrielsson, successfully assembled itself inside the xylem channels as conducting wires, while still allowing the transport of water and nutrients. Dr. Eleni Stavrinidou used the material to create long (10 cm) wires in the xylem channels of the rose. By combining the wires with the electrolyte that surrounds these channels she was able to create an electrochemical transistor, a transistor that converts ionic signals to electronic output. Using the xylem transistors she also demonstrated digital logic gate function.

Headily enough, using plantlife as a logic gate also implies a future computational use of vegetation: living supercomputers producing their own circuits inside dual-use stems.

Previously, we have looked at the use of electricity to stimulate plants into producing certain chemicals, how the action of plant roots growing through soil could be tapped as a future source of power, and how soil bacteria could be wired up into huge, living battery fields—in fact, we also looked at a tongue-in-cheek design project for “growing electrical circuitry inside the trunks of living trees”—but this actually turns vegetation into a form of living circuitry.

While Archigram’s “Logplug” project is an obvious reference point here within the world of architectural design, it seems more interesting to consider instead the future landscape design implications of technological advances such as this—how “electronic plants” might affect everything from forestry to home gardening, energy production and distribution infrastructure to a city’s lighting grid.

[Image: The “Logplug” by Archigram, from Archigram].

We looked at this latter possibility several years ago, in fact, in a post from 2009 called “The Bioluminescent Metropolis,” where the first comment now seems both prescient and somewhat sad given later developments.

But the possibilities here go beyond mere bioluminescence, into someday fully functioning electronic vegetation.

Plants could be used as interactive displays—recall the roses “induced… to light up and change color”—as well as given larger conductive roles in a region’s electrical grid. Imagine storing excess electricity from a solar power plant inside shining rose gardens, or the ability to bypass fallen power lines after a thunderstorm by re-routing a town’s electrical supply through the landscape itself, living corridors wired from within by self-assembling circuits and transistors.

And, of course, that’s all in addition to the possibility of cultivating plants specifically for their use as manufacturing systems for organic electronics—for example, cracking them open not to reveal nuts, seeds, or other consumable protein, but the flexible circuits of living computer networks. BioRAM.

There are obvious reasons to hesitate before realizing such a vision—that is, before charging headlong into a future world where forests are treated merely as back-up lighting plans for overcrowded cities and plants of every kind are seen as nothing but wildlife-disrupting sources of light cultivated for the throwaway value of human aesthetic pleasure.

Nonetheless, thinking through the design possibilities in addition to the ethical risks not only now seems very necessary, but might also lead someplace truly extraordinary—or someplace otherworldly, we might say with no need for justification.

For now, check out the original research paper over at Science Advances.

Extract

[Image: By Spiros Hadjidjanos, via Contemporary Art Daily].

Artist Spiros Hadjidjanos has been using an interesting technique in his recent work, where he scans old photographs, turns their color or shading intensity into depth information, and then 3D-prints objects extracted from this. The effect is like pulling objects out of wormholes.
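Neither the artist’s software nor his exact transfer function is public, but the core move—reading image intensity as depth—can be sketched in a few lines. Everything here (the linear mapping, the `max_depth` parameter, emitting a bare OBJ point cloud) is a hypothetical stand-in for whatever Hadjidjanos actually uses:

```python
import numpy as np

def intensity_to_heightmap(image, max_depth=10.0):
    """Map 8-bit pixel intensity (0-255) to extrusion depth.

    Brighter pixels push farther out of the picture plane. The
    linear mapping is an assumption; the artist's actual transfer
    function is not published.
    """
    return (image.astype(float) / 255.0) * max_depth

def heightmap_to_obj(heights, scale=1.0):
    """Emit a Wavefront OBJ point cloud, one vertex per pixel."""
    lines = []
    rows, cols = heights.shape
    for r in range(rows):
        for c in range(cols):
            lines.append(f"v {c * scale} {r * scale} {heights[r, c]}")
    return "\n".join(lines)

# A tiny 2x2 "photograph": black, mid-gray, white, mid-gray.
img = np.array([[0, 128], [255, 128]], dtype=np.uint8)
depths = intensity_to_heightmap(img)
print(heightmap_to_obj(depths))
```

Feeding the resulting point cloud (or a meshed version of it) to a 3D printer is the step that produces the “needle-like aluminum-nylon points” described below.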

[Image: By Spiros Hadjidjanos, via Contemporary Art Daily].

His experiments appear to have begun with a project focused specifically on Karl Blossfeldt’s classic book Urformen der Kunst; there, Blossfeldt published beautifully realized botanical photographs that fell somewhere between scientific taxonomy and human portraiture.

[Image: By Spiros Hadjidjanos, via Stylemag].

As Hi-Fructose explained earlier this summer, Hadjidjanos’s approach was to scan Blossfeldt’s images, then, “using complex information algorithms to add depth, [they] were printed as objects composed of hundreds of sharp needle-like aluminum-nylon points. Despite their space-age methods, the plants appear fossilized. Each node and vein is perfectly preserved for posterity.”

[Image: Via Spiros Hadjidjanos’s Instagram feed].

The results are pretty awesome—but I was especially drawn to this when I saw, on Hadjidjanos’s Instagram feed, that he had started to apply this to architectural motifs.

2D architectural images—scanned and translated into operable depth information—can then be realized as blurred and imperfect 3D objects, spectral secondary reproductions that are almost like digitally compressed, 3D versions of the original photograph.

[Image: Via Spiros Hadjidjanos’s Instagram feed].

It’s a deliberately lo-fi, representationally imperfect way of bringing architectural fragments back to life, as if unpeeling partial buildings from the crumbling pages of a library, a digital wizardry of extracting space from surface.

[Image: Via Spiros Hadjidjanos’s Instagram feed].

There are many, many interesting things to discuss here—including three-dimensional data loss, object translations, and emerging aesthetics unique to scanning technology—but what particularly stands out to me is the implication that this is, in effect, photography pursued by other means.

In other words, through a combination of digital scanning and 3D-printing, these strange metallized nylon hybrids, depicting plinths, entablatures, finials, and other spatial details, are just a kind of depth photography, object-photographs that, when hung on a wall, become functionally indistinct from architecture.

Ghosting

[Image: From Pierre Huyghe, “Les grandes ensembles” (2001)].

A short news item in New Scientist this week describes the work of University of Michigan engineers who have developed a way to, in effect, synchronize architectural structures at a distance. They refer to this as “ghosting”:

When someone turns the lights on in one kitchen, they automatically switch on in the connected house. Sounds are picked up and relayed, too. Engineers at the University of Michigan successfully linked an apartment in Michigan with one in Maryland. The work was presented at the IoT-App conference in Seoul, South Korea, last week.

I haven’t found any more details about the project—including why, exactly, one would want to do this, other than perhaps to create some strange new electrical variation on “The Picture of Dorian Gray,” where a secret reference-apartment is kept burning away somewhere in the American night—but no doubt more info will come to light soon.*

*Update: Such as right now: here is the original paper. There, we read the following:

Ghosting synchronizes audio and lighting between two homes on a room-by-room basis. Microphones in each room transmit audio to the corresponding room in the other home, unifying the ambient sound domains of the two homes. For example, a user cooking in their kitchen transmits sounds out of speakers in the other user’s own kitchen. The lighting context in corresponding rooms is also synchronized. A light toggled in one house toggles the lights in the other house in real time. We claim that this system allows for casual interactions that feel natural and intimate because they share context and require less social effort than a teleconference or phone call.
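The quoted passage describes behavior, not implementation; as a rough sketch of the room-by-room light mirroring, with toy `Home` objects standing in for the real networked hardware (all names here are invented for illustration):

```python
class Home:
    """A toy model of one 'ghosted' home: each room has a light state."""
    def __init__(self, rooms):
        self.lights = {room: False for room in rooms}
        self.partner = None  # the paired home, set by link()

    def toggle(self, room, _relayed=False):
        """Toggle a room's light; mirror the change in the partner home."""
        self.lights[room] = not self.lights[room]
        # Relay the toggle once to the paired home
        # (the _relayed flag guards against infinite ping-pong).
        if self.partner and not _relayed:
            self.partner.toggle(room, _relayed=True)

def link(a, b):
    """Pair two homes so their corresponding rooms stay in sync."""
    a.partner, b.partner = b, a

michigan = Home(["kitchen", "bedroom"])
maryland = Home(["kitchen", "bedroom"])
link(michigan, maryland)

michigan.toggle("kitchen")         # someone turns the kitchen light on...
print(maryland.lights["kitchen"])  # ...and it comes on in the other house: True
```

The actual system would of course relay these events over the network, with audio handled the same way.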

Thanks to Nick Arvin, both for finding the paper and for highlighting that particular quotation.

Liquid Quarries and Reefs On Demand

[Image: Micromotors at work, via UCSD/ScienceDaily].

Tiny machines that can extract carbon dioxide from water might someday help deacidify the oceans, according to a press release put out last week by UCSD.

Described as “micromotors,” the devices “are essentially six-micrometer-long tubes that help rapidly convert carbon dioxide into calcium carbonate, a solid mineral found in eggshells, the shells of various marine organisms, calcium supplements and cement.”

While these are still just prototypes, and far from ready for actual use in the wild, they appear to have proven remarkably effective in the lab:

In their experiments, nanoengineers demonstrated that the micromotors rapidly decarbonated water solutions that were saturated with carbon dioxide. Within five minutes, the micromotors removed 90 percent of the carbon dioxide from a solution of deionized water. The micromotors were just as effective in a sea water solution and removed 88 percent of the carbon dioxide in the same timeframe.
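Taken at face value, and assuming simple first-order kinetics (an assumption—the press release doesn’t state a rate law), those two figures imply nearly identical rate constants in fresh and salt water:

```python
import math

def rate_constant(removed_fraction, minutes):
    """First-order rate constant k, from C(t) = C0 * exp(-k * t)."""
    return -math.log(1.0 - removed_fraction) / minutes

k_deionized = rate_constant(0.90, 5.0)  # 90% removed in 5 minutes
k_seawater  = rate_constant(0.88, 5.0)  # 88% removed in 5 minutes

print(f"deionized water: k = {k_deionized:.3f} per minute")
print(f"seawater:        k = {k_seawater:.3f} per minute")
```

That the two constants come out so close (roughly 0.46 and 0.42 per minute) is what makes the seawater result notable: salinity barely slows the machines down.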

The implications of this for marine life are obviously pretty huge—after all, overly acidic waters mean that shells are difficult, if not impossible, to form, so these devices could have an enormously positive effect on sea life—but these devices could also be hugely useful in the creation of marine limestone.

As UCSD scientists explain, the micromotors would “rapidly zoom around in water, remove carbon dioxide and convert it into a usable solid form.” A cloud of these machines could thus essentially precipitate the basic ingredients of future rocks from open water.

[Image: A Maltese limestone quarry, via Wikipedia].

At least two possibilities seem worth mentioning.

One is the creation of a kind of liquid quarry out of which solid rock could be extracted—a square mile or two of seawater where a slurry of calcium carbonate would snow down continuously, 24 hours a day, from the endless churning of invisible machines. Screen off a region of the coast somewhere, so that no fish can be harmed, then trawl those hazy waters for the raw materials of future rock, later to be cut, stacked, and sold for dry-land construction.

The other would be the possibility of, in effect, the large-scale depositional printing of new artificial reefs. Set loose these micromotors in what would appear to be a large, building-sized teabag that you slowly drag through the ocean waters, and new underwater landforms slowly accrete in its wake. Give it weeks, months, years, and you’ve effectively 3D-printed a series of new reefs, perfect for coastal protection, a new marine sanctuary, or even just a tourist site.

In any case, read more about the actual process over at UCSD or ScienceDaily.

Subterranean Lightning Brigade

[Image: “Riggers install a lightning rod” atop the Empire State Building “in preparation for an investigation into lightning by scientists of the General Electric Company” (1947), via the Library of Congress].

This is hardly news, but I wanted to post about the use of artificial lightning as a navigational aid for subterranean military operations.

This was reported at the time as a project whose goal was “to let troops navigate about inside huge underground enemy tunnel complexes by measuring energy pulses given off by lightning bolts,” where those lightning bolts could potentially be generated on-demand by aboveground tactical strike teams.

Such a system would replace the use of GPS—whose signals cannot penetrate into deep subterranean spaces—and it would operate by way of sferics, or radio atmospheric signals generated by electrical activity in the sky.

The proposed underground navigational system—known as “Sferics-Based Underground Geolocation” or S-BUG—would be capable of picking up these signals even from “hundreds of miles away. Receiving signals from lighting strikes in multiple directions, along with minimal information from a surface base station also at a distance, could allow operators to accurately pinpoint their position.” They could thus maneuver underground, even hundreds—thousands—of feet below the earth’s surface, in enemy caves or bunkers.
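The article doesn’t say how S-BUG actually solves for position, but the general technique is multilateration: given the known locations of several strikes and ranges inferred from their signals, solve for the receiver’s coordinates. A toy 2D version, using gradient descent on the range residuals (the solver, beacon positions, and units are all invented for illustration):

```python
import math

def locate(beacons, ranges, iterations=500, lr=0.05):
    """Estimate a 2D position from known beacon points and measured
    ranges, by gradient descent on the squared range residuals.
    (A toy stand-in for S-BUG's actual solver, which is not public.)
    """
    x, y = 0.0, 0.0  # initial guess
    for _ in range(iterations):
        gx = gy = 0.0
        for (bx, by), r in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by) or 1e-9
            err = d - r  # how far off this range measurement is
            gx += err * (x - bx) / d
            gy += err * (y - by) / d
        x -= lr * gx
        y -= lr * gy
    return x, y

# Three distant "lightning strikes" at known positions, with ranges
# as they might be inferred from sferics arrival times.
strikes = [(100.0, 0.0), (0.0, 100.0), (-80.0, -60.0)]
true_pos = (10.0, 20.0)
ranges = [math.hypot(true_pos[0] - bx, true_pos[1] - by) for bx, by in strikes]

print(locate(strikes, ranges))  # converges to roughly (10.0, 20.0)
```

The hard part, as the article notes, is that the real ranges come from radio signals whose propagation must be modeled well enough to trust—which is where the base station’s “minimal information” presumably comes in.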

Hundreds of miles is a very wide range, of course—but what if there is no natural lightning in the area?

Enter artificial military storm generators, or the charge of the lightning brigade.

Back in 2009, DARPA also put out a request for proposals as part of something called Project Nimbus, “a fundamental science program focused on obtaining a comprehensive understanding of the lightning process.” However, it included a specific interest in developing machines for “triggering lightning”:

Experimental Set-up for Triggering Lightning: Bidders should fully describe how they would attempt to trigger lightning and list all potential pieces of equipment necessary to trigger lightning, as well as the equipment necessary to measure and characterize the processes governing lightning initiation, propagation, and attachment.

While it’s easy enough to wax conspiratorial here about future lightning weapons or militarized storm cells—after all, DARPA themselves write that they want to understand “how [lightning] ties into the global charging circuit,” as if “the global charging circuit” is something that could be instrumentalized or controlled—I actually find it more interesting to speculate that generating lightning would be not for offensive purposes at all, but for guiding underground navigation.

[Image: Lightning storm over Boston; via Wikimedia/NOAA].

Something akin to a strobe light begins pulsing atop a small camp of unmarked military vehicles parked far outside a desert city known for its insurgent activities. These flashes gradually lengthen, both temporally and physically, lasting longer and stretching upward into the sky; the clouds above are beginning to thicken, grumbling with quiet rolls of thunder.

Then the lightning strikes begin—but they’re unlike any natural lightning you’ve ever seen. They’re more like pops of static electricity—a pulsing halo or toroidal crown of light centered on the caravan of trucks below—and they seem carefully timed.

To defensive spotters watching them through binoculars in the city, it’s obvious what this means: there must be a team of soldiers underground somewhere, using artificial sferics to navigate. They must be pushing forward relentlessly through the sewers and smuggling tunnels, crawling around the roots of buildings and maneuvering through the mazework of infrastructure that constitutes the city’s underside, locating themselves by way of these rhythmic flashes of false lightning.

Of course, this equipment would eventually be de-militarized and handed down to the civilian sector, in which case you can imagine four friends leaving REI on a Friday afternoon after work with an artificial lightning generator split between them; no larger than a camp stove, it would eventually be set up with their other weekend caving equipment, used to help navigate through deep, stream-slick caves an hour and a half outside town, beneath tall mountains where GPS can’t always be trusted.

Or, perhaps fifty years from now, salvage teams are sent deep into the flooded cities of the eastern seaboard to look for and retrieve valuable industrial equipment. They install an artificial lightning unit on the salt-bleached roof of a crumbling Brooklyn warehouse before heading off in a small armada of marsh boats, looking for entrances to old maintenance facilities whose basement storage rooms might have survived rapid sea-level rise.

Disappearing down into these lost rooms—like explorers of Egyptian tombs—they are guided by bolts of artificial lightning that spark upward above the ruins, reflected by tides.

[Image: Lightning via NOAA].

Or—why not?—perhaps we’ll send a DARPA-funded lightning unit to one of the moons of Jupiter and let it flash and strobe there for as long as it needs. Called Project Miller-Urey, its aim is to catalyze life from the prebiotic, primordial soup of chemistry swirling around there in the Cthulhoid shadow of eternal ice mountains.

Millions and millions of years hence, proto-intelligent lifeforms emerge, never once guessing that they are, in fact, indirect descendants of artificial lightning technology. Their spark is not divine but military, the electrical equipment that sparked their ancestral line long since fallen into oblivion.

In any case, keep your eyes—and cameras—posted for artificial lightning strikes coming to a future military theater near you…

Joyful Rendezvous Upon Pure Ice and Snow

[Image: Snow-making equipment via Wikipedia].

The 2022 Winter Olympics in Beijing are something of a moonshot moment for artificial snow-making technology: the winter games will be held “in a place with no snow.” That’s right: “the 2022 Olympics will rely entirely on artificial snow.”

As a report released by the International Olympic Committee admits, “The Zhangjiakou and Yanqing Zones have minimal annual snowfall and for the Games would rely completely on artificial snow. There would be no opportunity to haul snow from higher elevations for contingency maintenance to the racecourses so a contingency plan would rely on stockpiled man-made snow.”

This gives new meaning to the word snowbank: a stock-piled reserve of artificial landscape effects, an archive of on-demand, readymade topography.

Beijing’s slogan for their Olympic bid? “Joyful Rendezvous upon Pure Ice and Snow.”

[Image: Snow-making equipment via Wikipedia].

Purely in terms of energy infrastructure and freshwater demand—most of the water will be pumped in from existing reservoirs—the 2022 winter games will seemingly be unparalleled in their sheer unsustainability. Even the IOC sees this; from their report:

The Commission considers Beijing 2022 has underestimated the amount of water that would be needed for snowmaking for the Games but believes adequate water for Games needs could be supplied.

In addition, the Commission is of the opinion that Beijing 2022 has overestimated the ability to recapture water used for snowmaking. These factors should be carefully considered in determining the legacy plans for snow venues.

Knowing all this, then, why not be truly radical—why not host the winter games in Florida’s forthcoming “snowball fight arena,” part of “a $309 million resort near Kissimmee that would include a 14-story ski and snowboard mountain, an indoor/outdoor skateboard park and a snowball fight arena”?

Why not host them in Manaus?

Interestingly, the IOC also raises the question of the Games’ aesthetics, warning that the venues might not really look like winter.

“Due to the lack of natural snow,” we read, “the ‘look’ of the venue may not be aesthetically pleasing either side of the ski run. However, assuming sufficient snow has been made or stockpiled and that the temperature remains cold, this should not impact the sport during the Games.”

Elsewhere: “There could be no snow outside of the racecourse, especially in Yanqing, impacting the visual perception of the snow sports setting.” This basically means that there will be lots of bare ground, rocks, and gravel lining the virginal white strips of these future ski runs.

[Image: Ski jumping in summer at Chicago’s Soldier Field (1954); via Pruned].

Several years ago, Pruned satirically offered Chicago as a venue for the world’s “first wholly urban Winter Olympics.” With admirable detail, he went into many of the specifics of how Chicago might pull it off, but he also pointed out the potential aesthetic disorientation of seeing winter sports in a non-idyllic landscape setting.

“Chicago’s gritty landscape shouldn’t be much of a handicap,” he suggests. Chicago might not “embody a certain sort of nature—rustic mountains, pastoral evergreen forests, a lonely goatherd, etc.,” but the embedded landscape technology of the Winter Games should have left behind that antiquated Romanticism long ago.

As Pruned asks, “have the more traditional Winter Olympic sites not been over the years transformed into high-tech event landscapes, carefully managed and augmented with artificial snow and heavy plows that sculpt the slopes to a pre-programmed set of topographical parameters?”

Seen this way, Beijing’s snowless winter games are just an unsustainable historical trajectory taken to its most obvious next step.

[Image: Making snow for It’s A Wonderful Life, via vintage everyday].

In any case, the 2022 Winter Olympics are shaping up to be something like an Apollo Program for fake snow, an industry that, over the next seven years, seems poised to experience a surge of innovation as the unveiling of this most artificial of Olympic landscapes approaches.

This Is Only A Test

[Image: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].

Photographer Daniel Stier has a new book out, and an accompanying exhibition on display at the kulturreich gallery, called Ways of Knowing.

Stier’s photos depict human subjects immersed in, or even at the mercy of, spatial instrumentation: strange devices conducting experiments that function at the scale of architecture but whose purpose remains unidentified.

[Image: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].

In Stier’s words, the overall series is “a personal project exploring the real world of scientific research. Not the stainless steel surfaces bathed in purple light, but real people in their basements working on selfbuilt contraptions. All shot in state of the art research institutions across Europe and the US, showing experiments with human subjects. Portrayed are the people conducting the experiments—students, doctorands and professors.”

In recent interviews discussing the book, Stier has pointed out what he calls “similarities between artistic and scientific work,” with an emphasis on the craft that goes into designing and executing these devices.

However, there is also a performative or aesthetic aspect to many of these that hints at resonances beyond the world of applied science—a person staring into multicolored constellations painted on the inside of an inverted bowl, for example.

[Image: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].

Ostensibly an ophthalmic device of some kind, it could just as easily be an amateur’s attempt at OpArt.

In a sense, these are not just one-off scientific experiments but spatial prototypes: rigorous attempts at building and establishing a very particular kind of environment—a carefully calibrated and tuned zone of parameters, forces, and influences—then exposing people to those worlds as a means of testing for their effects.

[Image: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].

In any case, here are a few more images to pique your curiosity, but many, many more photos are available in Stier’s book, which just began shipping this month, and, of course, over at Stier’s website.

[Images: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].

(Originally spotted via New Scientist).

Driving on Mars and the Theater of Machines

[Image: Self-portrait on Mars; via NASA].

Science has published a short profile of a woman named Vandi Verma. She is “one of the few people in the world who is qualified to drive a vehicle on Mars.”

Verma has driven a series of remote vehicles on another planet over the years, including, most recently, the Curiosity rover.

[Image: Another self-portrait on Mars; via NASA].

Driving it involves a strange sequence of simulations, projections, and virtual maps that are eventually beamed out from planet to planet, the robot at the other end acting like a kind of wheeled marionette as it then spins forward along its new route. Here is a long description of the process from Science:

Each day, before the rover shuts down for the frigid martian night, it calls home, Verma says. Besides relaying scientific data and images it gathered during the day, it sends its precise coordinates. They are downloaded into simulation software Verma helped write. The software helps drivers plan the rover’s route for the next day, simulating tricky maneuvers. Operators may even perform a dry run with a duplicate rover on a sandy replica of the planet’s surface in JPL’s Mars Yard. Then the full day’s itinerary is beamed to the rover so that it can set off purposefully each dawn.

What’s interesting here is not just the notion of an interplanetary driver’s license—a qualification that allows one to control wheeled machines on other planets—but the fact that there is still such a clear human focus at the center of the control process.

The fact that Science‘s profile of Verma begins with her driving agricultural equipment on her family farm in India, an experience that quite rapidly scaled up to the point of guiding rovers across the surface of another world entirely, only reinforces the sense of surprise here—that farm equipment in India and NASA’s Mars rover program bear technical similarities.

They are, in a sense, interplanetary cousins, simultaneously conjoined and air-gapped across two worlds.

[Image: A glimpse of the dreaming; photo by Alexis Madrigal, courtesy of The Atlantic].

Compare this to the complex process of programming and manufacturing a driverless vehicle. In an interesting piece published last summer, Alexis Madrigal explained that Google’s self-driving cars operate inside a Borgesian 1:1 map of the physical world, a “virtual track” coextensive with the landscape you and I stand upon and inhabit.

“Google has created a virtual world out of the streets their engineers have driven,” Madrigal writes. And, like the Mars rover program we just read about, “They pre-load the data for the route into the car’s memory before it sets off, so that as it drives, the software knows what to expect.”

The software knows what to expect because the vehicle, in a sense, is not really driving on the streets outside Google’s Mountain View campus; it is driving in a seamlessly parallel simulation of those streets, never leaving the world of the map so precisely programmed into its software.

Like Christopher Walken’s character in the 1983 film Brainstorm, Google’s self-driving cars are operating inside a topographical dream state, we might say, seeing only what the headpiece allows them to see.

[Image: Navigating dreams within dreams: (top) from Brainstorm; (bottom) a Google self-driving car, via Google and re:form].

Briefly, recall a recent essay by Karen Levy and Tim Hwang called “Back Stage at the Machine Theater.” That piece looked at the atavistic holdover of old control technologies—such as steering wheels—in vehicles that are actually computer-controlled.

There is no need for a human-manipulated steering wheel, in other words, other than to offer a psychological point of focus for the vehicle’s passengers, to give them the feeling that they can still take over.

This is the “machine theater” that the title of their essay refers to: a dramaturgy made entirely of technical interfaces that deliberately produce a misleading illusion of human control. These interfaces are “placebo buttons,” they write, that transform all but autonomous technical systems into “theaters of volition” that still appear to be under manual guidance.

I mention this essay here because the Science piece with which this post began also explains that NASA’s rover program is being pushed toward a state of greater autonomy.

“One of Verma’s key research goals,” we read, “has been to give rovers greater autonomy to decide on a course of action. She is now working on a software upgrade that will let Curiosity be true to its name. It will allow the rover to autonomously select interesting rocks, stopping in the middle of a long drive to take high-resolution images or analyze a rock with its laser, without any prompting from Earth.”

[Image: Volitional portraiture on Mars; via NASA].

The implication here is that, as the Mars rover program becomes “self-driving,” it will also be transformed into a vast “theater of volition,” in Levy’s and Hwang’s formulation: that Earth-bound “drivers” might soon find themselves reporting to work simply to flip placebo levers and push placebo buttons as these vehicles go about their own business far away.

It will become more ritual than science, more icon than instrument—a strangely passive experience, watching a distant machine navigate simulated terrain models and software packages coextensive with the surface of Mars.

Greek Gods, Moles, and Robot Oceans

[Image: The Very Low Frequency antenna field at Cutler, Maine, a facility for communicating with at-sea submarine crews].

There have been about a million stories over the past few weeks that I’ve been dying to write about, but I’ll just have to clear through a bunch here in one go.

1) First up is a fascinating request for proposals from the Defense Advanced Research Projects Agency, or DARPA, which is looking to build a “Positioning System for Deep Ocean Navigation.” It has the handy acronym of POSYDON.

POSYDON will be “an undersea system that provides omnipresent, robust positioning” in the deep ocean either for crewed submarines or for autonomous seacraft. “DARPA envisions that the POSYDON program will distribute a small number of acoustic sources, analogous to GPS satellites, around an ocean basin,” but I imagine there is some room for creative maneuvering there.

The idea of an acoustic deep-sea positioning system that operates like GPS is pretty interesting to imagine, especially considering the strange transformations sound undergoes as it is transmitted through water. To establish accurately that a U.S. submarine has, in fact, heard an acoustic beacon, and that its apparent distance from that point is not being distorted by intervening water temperature, ocean currents, or even the large-scale presence of marine life, is obviously quite an extraordinary challenge.
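To make that concern concrete: an acoustic “GPS” infers range from travel time multiplied by an assumed sound speed, so any mismatch between the assumed and actual speed scales directly into position error. The numbers below are purely illustrative—seawater sound speed is roughly 1,480–1,540 m/s depending on temperature, salinity, and depth:

```python
# Speed of sound in seawater varies with temperature, salinity, and
# depth -- exactly the distortion the POSYDON program has to contend
# with. These particular numbers are illustrative only.
ASSUMED_SPEED = 1500.0   # m/s, what the receiver assumes
ACTUAL_SPEED  = 1480.0   # m/s, what the water column actually delivers

def apparent_range(true_range_m, assumed=ASSUMED_SPEED, actual=ACTUAL_SPEED):
    """Range the receiver infers from one-way acoustic travel time
    when its assumed sound speed differs from the real one."""
    travel_time = true_range_m / actual
    return travel_time * assumed

true_range = 50_000.0  # 50 km to an acoustic beacon
est = apparent_range(true_range)
print(f"true range {true_range/1000:.1f} km, inferred {est/1000:.2f} km "
      f"(error {est - true_range:.0f} m)")
```

A 20 m/s error in assumed sound speed over a 50 km path already amounts to several hundred meters of range error, which is why characterizing the water column matters as much as hearing the beacon at all.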

As DARPA points out, without such a system in place, “undersea vehicles must regularly surface to receive GPS signals and fix their position, and this presents a risk of detection.” The ultimate goal, then, would be to launch ultra-long-term undersea missions, even to establish permanently submerged robotic networks that have no need to breach the ocean’s surface. Cthulhoid, they will forever roam the deep.

[Image: An unmanned underwater vehicle; U.S. Navy photo by S. L. Standifird].

If you think you’ve got what it takes, click over to DARPA and sign up.

2) A while back, I downloaded a free academic copy of a fascinating book called Space-Time Reference Systems by Michael Soffel and Ralf Langhans.

Their book “presents an introduction to the problem of astronomical–geodetical space–time reference systems,” or radically offworld navigation reference points for when a craft is, in effect, well beyond any known or recognizable landmarks in space. Think of it as a kind of new longitude problem.

The book is filled with atomic clocks, quasars potentially repurposed as deep-space orientation beacons, the long-term shifting of “astronomical reference frames,” and page after page of complex math I make no claim to understand.

However, I mention this here because the POSYDON program is almost the becoming-cosmic of the ocean: that is, the depths of the sea reimagined as a vast and undifferentiated space within which mostly robotic craft will have to orient themselves on long missions. For a robotic submarine, the ocean is its universe.

3) The POSYDON program is just one part of a much larger militarization of the deep seas. Consider the fact that the U.S. Office of Naval Research is hoping to construct permanent “hubs” on the seafloor for recharging robot submarines.

These “hubs” would be “unmanned, underwater pods where robots can recharge undetected—and securely upload the intelligence they’ve gathered to Navy networks.” Hubs will be places where “unmanned underwater vehicles (UUVs) can dock, recharge, upload data and download new orders, and then be on their way.”

“You could keep this continuous swarm of UUVs [Unmanned Underwater Vehicles] wherever you wanted to put them… basically indefinitely, as long as you’re rotating (some) out periodically for mechanical issues,” a Naval war theorist explained to Breaking Defense.

The ultimate vision is a kind of planet-spanning robot constellation: “The era of lone-wolf submarines is giving away [sic] to underwater networks of manned subs, UUVs combined with seafloor infrastructure such as hidden missile launchers—all connected to each other and to the rest of the force on the surface of the water, in the air, in space, and on land.” This would include, for example, the “upward falling payloads” program described on BLDGBLOG a few years back.

Even better, from a military communications perspective, these hubs would also act as underwater relay points for broadcasting information through the water—or what we might call the ocean as telecommunications medium—something that currently relies on ultra-low frequency radio.

There is much more detail on this over at Breaking Defense.

4) Last summer, my wife and I took a quick trip up to Maine where we decided to follow a slight detour after hiking Mount Katahdin to drive by the huge antenna field at Cutler, a Naval communications station found way out on a tiny peninsula nearly on the border with Canada.

[Image: The antenna field at Cutler, Maine].

We talked to the security guard for a while about life out there on this little peninsula, but we were unable to get a tour of the actual facility, sadly. He mostly joked that the locals have a lot of conspiracy theories about what the towers are actually up to, including their potential health effects—which isn’t entirely surprising, to be honest, considering the massive amounts of energy used there and the frankly otherworldly profile these antennas have on the horizon—but you can find a lot of information about the facility online.

So what does this thing do? “The Navy’s very-low-frequency (VLF) station at Cutler, Maine, provides communication to the United States strategic submarine forces,” a January 1998 white paper called “Technical Report 1761” explains. It is basically an east coast version of the so-called Project Sanguine, a U.S. Navy proposal from the 1960s that “would have involved 41 percent of Wisconsin,” turning the Cheese State into a giant military antenna.
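The sheer scale of these installations falls straight out of the physics: an efficient antenna needs to be a sizeable fraction of its wavelength, and at very low frequencies the wavelengths run to tens of kilometers. A quick back-of-the-envelope calculation (the frequencies below are the rough edges of the VLF band, not Cutler's actual operating frequency):

```python
# Why VLF antenna fields sprawl across entire peninsulas: wavelength = c / f.
# Frequencies are illustrative VLF band edges, not Cutler's actual channel.
C = 299_792_458  # speed of light, m/s

for freq_khz in (3, 24, 30):  # VLF spans roughly 3-30 kHz
    wavelength_km = C / (freq_khz * 1_000) / 1_000
    print(f"{freq_khz} kHz -> wavelength of roughly {wavelength_km:,.0f} km")
```

Even a quarter-wave structure at those frequencies would be kilometers tall, which is why stations like Cutler settle for vast horizontal wire arrays strung between towers instead.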

Cutler’s role in communicating with submarines may or may not have come to an end (it may be more of a research facility today), but the idea of isolated radio technicians on a foggy peninsula in Maine broadcasting silent messages into the ocean, messages meant to be heard only by U.S. submarine crews pinging around in the deepest canyons of the Atlantic, is both poetic and eerie.

[Image: A diagram of the antennas, from the aforementioned January 1998 research paper].

The towers themselves are truly massive, and you can easily see them from nearby roads, if you happen to be anywhere near Cutler, Maine.

In any case, I mention all this because behemoth facilities such as these could be made altogether redundant by autonomous underwater communication hubs, such as those described by Breaking Defense.

5) “The robots are winning!” Daniel Mendelsohn wrote in The New York Review of Books earlier this month. The opening paragraphs of his essay are awesome, and I wish I could just republish the whole thing:

We have been dreaming of robots since Homer. In Book 18 of the Iliad, Achilles’ mother, the nymph Thetis, wants to order a new suit of armor for her son, and so she pays a visit to the Olympian atelier of the blacksmith-god Hephaestus, whom she finds hard at work on a series of automata:

…He was crafting twenty tripods
to stand along the walls of his well-built manse,
affixing golden wheels to the bottom of each one
so they might wheel down on their own [automatoi] to the gods’ assembly
and then return to his house anon: an amazing sight to see.

These are not the only animate household objects to appear in the Homeric epics. In Book 5 of the Iliad we hear that the gates of Olympus swivel on their hinges of their own accord, automatai, to let gods in their chariots in or out, thus anticipating by nearly thirty centuries the automatic garage door. In Book 7 of the Odyssey, Odysseus finds himself the guest of a fabulously wealthy king whose palace includes such conveniences as gold and silver watchdogs, ever alert, never aging. To this class of lifelike but intellectually inert household helpers we might ascribe other automata in the classical tradition. In the Argonautica of Apollonius of Rhodes, a third-century-BC epic about Jason and the Argonauts, a bronze giant called Talos runs three times around the island of Crete each day, protecting Zeus’s beloved Europa: a primitive home alarm system.

Mendelsohn goes on to discuss “the fantasy of mindless, self-propelled helpers that relieve their masters of toil,” and it seems incredibly interesting to read it in the context of DARPA’s now even more aptly named POSYDON program and the permanent undersea hubs of the Office of Naval Research. Click over to The New York Review of Books for the whole thing.

6) If the oceanic is the new cosmic, then perhaps the terrestrial is the new oceanic.

The Independent reported last month that magnetically powered underground robot “moles”—effectively subterranean drones—could potentially be used to ferry objects around beneath the city. They are this generation’s pneumatic tubes.

The idea would be to use “a vast underground network of pipes in a bid to bypass the UK’s ever more congested roads.” The company’s name? What else but Mole Solutions, who refer to their own speculative infrastructure as a network of “freight pipelines.”

[Image: Courtesy of Mole Solutions].

Taking a page from the Office of Naval Research and DARPA, though, perhaps these subterranean robot constellations could be given “hubs” and terrestrial beacons with which to orient themselves; combine them with the bizarre “self-burying robot” from 2013, and they could declare endless war on the surface of the world from below.

See more at the Independent.

7) Finally, in terms of this specific flurry of links, Denise Garcia looks at the future of robot warfare and the dangerous “secrecy of emerging weaponry” that can act without human intervention over at Foreign Affairs.

She suggests that “nuclear weapons and future lethal autonomous technologies will imperil humanity if governed poorly. They will doom civilization if they’re not governed at all.” On the other hand, as Daniel Mendelsohn points out, we have, in a sense, been dealing with the threat of a robot apocalypse since someone first came up with the myth of Hephaestus.

Garcia’s short essay covers a lot of ground previously seen in, for example, Peter Singer’s excellent book Wired For War; that’s not a reason to skip one for the other, of course, but to read both. See more at Foreign Affairs.

(Thanks to Peter Smith for suggesting we visit the antennas at Cutler).

Urban CAT Scan

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

The London-based ScanLab Projects, featured here many times before, have completed a new commission, this time from the British Postal Museum & Archive, to document the so-called “Mail Rail,” a network of underground tunnels that opened back in 1927.

As Subterranea Britannica explains, the tunnels were initially conceived as a system of pneumatic package-delivery tubes, an “atmospheric railway,” as it was rather fantastically described at the time, “by which a stationary steam engine would drive a large fan which could suck air out of an air tight tube and draw the vehicle towards it or blow air to push them away.”

That “vehicle” would have been a semi-autonomous wheeled cart bearing parcels for residents of Greater London.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

Alas, and unsurprisingly, this vision of an air-powered subterranean communication system for a vast metropolis of many millions of residents was replaced by a rail-based one, with narrow, heavily packed cars running on a system of tracks beneath the London streets.

Thus the Mail Rail system was born.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

While the story of the system itself is fascinating, it has also been told elsewhere.

The aforementioned Subterranea Britannica is a perfect place to start, but urban explorers have also gained entrance for narrative purposes of their own, including the long write-up over at Placehacking.

That link includes the incredible detail that, “on Halloween night 2010, ravers took over a massive derelict Post Office building in the city and threw an illegal party of epic proportions. When pictures from the party emerged, we were astonished to find that a few of them looked to be of a tiny rail system somehow accessed from the building.”

Surely, this should be the setting for a new novel: some huge and illegal party in an abandoned building at an otherwise undisclosed location in the city results in people breaking into or discovering a forgotten, literally underground network, alcohol-blurred photographs of which are later recognized as having unique urban importance.

Something is down there, the hungover viewers of these photographs quickly realize, something vague and hazily glimpsed in the unlit background of some selfies snapped at a rave.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

This would all be part of the general mysticism of infrastructure that I hinted at in an earlier post, the idea that the peripheral networks through which the city actually functions lie in wait, secretly connecting things from below or wrapping, Ouroboros-like, around us on the edges of things.

These systems are the Matrix, we might say in modern mythological terms, or the room where Zeus moves statues of us all around on chessboards: an invisible realm of tacit control and influence that we’ve come to know unimaginatively as nothing but infrastructure. But infrastructure is now the backstage pass, the esoteric world behind the curtain.

In any case, with this handful of party pictures in hand, a group of London explorers tried to infiltrate the system.

After hours of exploration, we finally found what we thought might be a freshly bricked up wall into the mythical Mail Rail the partygoers had inadvertently found… We went back to the car and discussed the possibility of chiselling the brick out. We decided that, given how soon it was after the party, the place was too hot to do that just now and we walked away, vowing to try again in a couple of months.

It took some time—but, eventually, it worked.

They found the tunnels.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

The complete write-up over at Placehacking is worth the read for the rest of that particular story.

But ScanLab now enter the frame as documentarians of a different sort, offering a laser-assisted glimpse of this underground space in millimetric detail.

Their 3D point clouds afford a whole new form of representation, a kind of volumetric photography that cuts through streets and walls to reveal the full spatial nature of the places on display.
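Part of what makes this form of representation possible is how simple the underlying data is: a registered point cloud is just an enormous array of xyz coordinates, so "cutting through" a street or wall is nothing more than filtering points against a plane. A minimal sketch of that effect, using a synthetic cloud (ScanLab's real data, of course, comes from merging many registered laser scans, not from a random number generator):

```python
# The "cut through walls" effect on a point cloud is just a coordinate
# filter. The cloud here is synthetic, standing in for a registered scan.
import numpy as np

rng = np.random.default_rng(0)
# 100,000 random points in a 100 m x 50 m block, from 10 m underground
# to 5 m above street level (z = 0 is the street surface).
cloud = rng.uniform(low=[0, 0, -10], high=[100, 50, 5], size=(100_000, 3))

def cutaway(points, axis=1, position=25.0):
    """Keep only points on one side of a cutting plane."""
    return points[points[:, axis] < position]

def below_ground(points, ground_z=0.0):
    """Isolate the subterranean portion, e.g. tunnels beneath the street."""
    return points[points[:, 2] < ground_z]

section = cutaway(cloud)         # slice the block open along y = 25 m
tunnels = below_ground(section)  # peer down through the street surface
print(len(cloud), len(section), len(tunnels))
```

Because nothing in the data distinguishes "inside" from "outside," the virtual camera can pass through any surface at will, which is precisely what gives these renderings their X-ray, volumetric-photography quality.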

The incredible teaser video, pieced together from 223 different laser scanning sessions, reveals this with dramatic effect, featuring a virtual camera that smoothly passes beneath the street like a swimmer through the waves of the ocean.



As the British Postal Museum & Archive explains, the goal of getting ScanLab Projects down into their tunnels was “to form a digital model from which any number of future interactive, visual, animated and immersive experiences can be created.”

In other words, it was a museological project: the digital preservation of an urban underworld that few people—Placehacking‘s write-up aside—have actually seen.

For example, the Museum writes, the resulting laser-generated 3D point clouds might “enable a full 3D walkthrough of hidden parts of the network or an app that enables layers to be peeled away to see the original industrial detail beneath.”

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

Unpeeling the urban onion has never been so gorgeous as we leap through walls, peer upward through semi-transparent streets, and see signs hanging in mid-air from both sides simultaneously.

[Image: By ScanLab Projects, with permission from the British Postal Museum & Archive].

Tunnels become weird ropey knots like smoke rings looped beneath the city as the facades of houses take on the appearance of old ghosts, remnants of another era gazing down at the flickering of other dimensions previously lost in the darkness below.

(Thanks again to the British Postal Museum & Archive for permission to post the images).