Building Digital with Timber, Mud, and Ice

[Image: From a project called “Slice” by HANNAH, as featured in FABRICATE 2020.]

The Bartlett School of Architecture recently put out two new books, freely available for download, FABRICATE 2020 and Design Transactions. Check them both out, as each is filled with incredibly interesting and innovative work.

Purely in the interests of time—by all means, download the books and dive in—I’ll focus on three projects rethinking the use of wood, clay, and ice, respectively, alongside new kinds of concrete formwork and 3D printing.

[Image: From “Slice” by HANNAH, as featured in FABRICATE 2020.]

For a project called “Slice,” Sasa Zivkovic and Leslie Lok of design firm HANNAH and Cornell University explore the use of “waste wood” from trees killed by Emerald Ash Borer infestation.

[Image: From “Slice” by HANNAH, as featured in FABRICATE 2020.]

“Mature ash trees with irregular geometries present an enormous untapped material resource. Through high-precision 3D scanning and robotic fabrication on a custom platform, this project aims to demonstrate that such trees constitute a valuable resource and present architectural opportunities,” they explain.

[Images: From “Slice” by HANNAH, as featured in FABRICATE 2020.]

They continue on their website: “No longer bound to the paradigm of industrial standardization, this project revisits bygone wood craft and design based on organic, found and living materials. Robotic bandsaw cutting is paired with high-precision 3D scanning to slice bent logs from ash trees that are infested by the Emerald Ash Borer.”

I’m reminded of a point made by my wife, Nicola Twilley, in an article for The New Yorker last year about fighting wildfires in California. At one point, she describes attempts “to imagine the outlines of a timber industry built around small trees, rather than the big trees that lumber companies love but the forest can’t spare. In Europe, small-diameter wood is commonly compressed into an engineered product called cross-laminated timber, which is strong enough to be used in multistory structures.”

Seeing HANNAH’s work, it seems that perhaps another way to unlock the potential of small-diameter wood is through robotic bandsaw slicing.

[Image: From “Mud Frontiers” by Emerging Objects, as featured in FABRICATE 2020.]

For their project “Mud Frontiers,” Ronald Rael and Virginia San Fratello use 3D printing and “traditional materials (clay, water, and wheat straw), to push the boundaries of sustainable and ecological construction in a two phase project that explores traditional clay craft at the scale of architecture and pottery.”

[Image: From “Mud Frontiers” by Emerging Objects.]

“To do this,” they explain on their website, “we stepped out of the gallery and into the natural environment by constructing a low-cost, and portable robot, designed to be carried into a site where local soils could be harvested and used immediately to 3D print large scale structures.”

[Image: From “Mud Frontiers” by Emerging Objects.]

Finally—and, again, I would recommend just downloading the books and spending time with each, as I am barely scratching the surface here—we have a very cool project looking at “ice formwork” for concrete, developed by Vasily Sitnikov at the KTH Royal Institute of Technology in Stockholm.

[Image: Ice formwork for casting concrete, developed by Vasily Sitnikov, as featured in Design Transactions.]

Sitnikov’s method was initially devised as a way to save energy during the concrete-casting and construction process, but quickly revealed its own aesthetic and structural implications: “The variety of programmable functions for ice formwork is vast,” he writes, “across environmental design, programmable lighting conditions, acoustics, ventilation, insulation and structural-design weight-saving applications.”

[Image: Ice formwork for casting concrete, developed by Vasily Sitnikov.]

He has found, for example, that “spatial patterns… can be imposed on concrete, abandoning any use of petrochemicals in the fabrication process. Breaking away from the ‘solid’ image of conventional concrete, the technique of using ice as the formwork material enables the production of mesoscale spatial structures in concrete which would be impossible to manufacture with existing formwork materials.”

[Image: Ice formwork for casting concrete, developed by Vasily Sitnikov.]

Weaving, carving, cutting, molding: the two new Bartlett books have much, much more, including voluminous detail about each of the projects mentioned briefly above, so click on through and go wild: Design Transactions and FABRICATE 2020.

Synthetic at Every Scale

[Image: Diamond nanowires produced by physicist William Gilpin, used only for the purpose of illustration.]

While putting together some early notes for a workshop I’ll be leading in Moscow later this summer, I thought I’d link back to this 2014 post by Paul Gilster on Centauri Dreams about “SETI at the Particle Level”—that is, the Search for Extraterrestrial Intelligence reimagined on radically different spatial scales than what humans have previously looked for.

“To find the truly advanced civilizations, we would need to look on the level of the very small,” Gilster suggests. We perhaps even need to look at the scale of individual particles.

“If SETI is giving us no evidence of extraterrestrials,” Gilster writes, “maybe it’s because we’re looking on too large a scale.”

What if, in other words, truly advanced intelligence, having long ago taken to non-biological form, finds ways to maximize technology on the level of the very small? Thus [Australian artificial intelligence researcher Hugo de Garis]’s interest in femtotech, a technology at the level of 10⁻¹⁵ meters. The idea is to use the properties of quarks and gluons to compute at this scale, where in terms of sheer processing power the improvement in performance is a factor of a trillion trillion over what we can extrapolate for nanotech.

Material evidence of this speculative, femto-scale computation could perhaps be detected, in other words, if only we knew we should be looking for it. (Instead, of course, we’re stuck looking for evidence of a very particular technology that was big on Earth a few decades ago—radio waves.)

[Image: Electron interferometry, via the University of Cambridge, used only for the purpose of illustration.]

In any case, it’s interesting to put these thoughts in the context of a paper by Matt Edgeworth, published in Archaeologies back in 2010, called “Beyond Human Proportions: Archaeology of the Mega and the Nano.” Edgeworth’s paper was inspired by a deceptively simple insight: that human artifacts, in our era of chemical and material engineering, have departed radically from the spatial scale traditionally associated with archaeology.

We are always making history, we might say, but much of it is too small to see.

Rather than studying architectural ruins or sites the size of villages, what about archaeological artifacts visible only through chemical assays or scanning electron microscopes, whether they be so-called forever chemicals or simply microplastics?

Edgeworth himself refers to nano-scale transistors, graphene sheets, and materials etched using electron beam lithography. What role should these engineered materials—altogether different kinds of remains or cultural “ruins”—play in archaeology?

[Image: An example of electron beam lithography, via Trevor Knapp/Eriksson Research Group/University of Wisconsin, used only for the purpose of illustration.]

“It used to be the case that archaeological features and artifacts were principally on a human scale,” Edgeworth writes. “But that familiar world is changing fast. As archaeology extends its range of focus further forward in time its subject matter is moving beyond human proportions. Developments in macro- and micro-engineering mean that artifacts are no longer limited in size by physical limitations of the body. As scale and impact of material culture extends outwards and inwards in both macroscopic and microscopic directions, the perspectives of contemporary archaeology must change in order to keep track.”

What’s so interesting about both the Centauri Dreams post and Matt Edgeworth’s paper is that signs of artificiality—whether they are human or not—might be discovered at radically different spatial scales, either here on Earth in modern archaeological sites or in the depths of space, where, for example, the alien equivalent of electron beam lithography might already have etched legible patterns into materials now drifting as micrometeoroids through the void.

Of course, the idea of applying for a grant to look for signs of alien lithography on micrometeoroids sounds more like a Saturday Night Live sketch—or perhaps the plot of a Charles Stross novel—but that doesn’t mean we shouldn’t do it (or something similar). After all, even humans themselves now leave micro- and nano-scale material traces behind in the dyes, chemicals, coatings, and etched materials we use every day without thinking of these things as archaeological.

[Image: Nanostructures made by German company Nanoscribe, used only for the purpose of illustration.]

If the fundamental assumption of SETI is that aliens have been communicating with each other through radio transmissions because humans used to heavily rely upon that same technology, then why not also assume that aliens are, say, manufacturing graphene sheets, 3D-printing on the nano-scale, or, for that matter, weaving computational textiles with synthetic-diamond nanowires?

(An unrelated post that is nevertheless interesting to think about in this context: Space Grain.)

The Spatial Politics of Geofencing

[Image: From Code of Conscience.]

Another project I meant to write about ages ago is Code of Conscience, developed by AKQA. It is “an open source software update that restricts the use of heavy-duty vehicles in protected land areas,” or what they call “a cyber shield around natural reserves.”

The basic idea is to install geofencing limits on heavy construction and logging equipment, based on “data from the United Nations’ World Database on Protected Areas, constantly updated by NGOs, governments and local communities. Using vehicle on-board GPS, the code detects when a protected area has been breached. When a machine enters a protected area, the system automatically restricts its use.”
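The core check such a system has to run—is this machine inside a protected boundary?—can be sketched with a classic ray-casting point-in-polygon test. This is only an illustration of the idea, not AKQA’s actual implementation: the function names and the `RESERVE` polygon are hypothetical, and the real project draws its boundaries from the UN’s World Database on Protected Areas rather than hand-coded coordinates.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: cast a ray in the +x direction from (x, y)
    and count how many polygon edges it crosses; an odd count means
    the point is inside the polygon."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Does edge (j -> i) straddle the horizontal line through y?
        if (yi > y) != (yj > y):
            x_cross = (xj - xi) * (y - yi) / (yj - yi) + xi
            if x < x_cross:
                inside = not inside
        j = i
    return inside

def engine_enabled(position, protected_areas):
    """Allow the machine to operate only outside every protected polygon."""
    x, y = position
    return not any(point_in_polygon(x, y, poly) for poly in protected_areas)

# A hypothetical reserve boundary (coordinates are purely illustrative):
RESERVE = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
```

A production system would of course work in geodetic coordinates, buffer the boundaries, and verify the integrity of the boundary data before trusting it—the sketch only shows the geometric kernel of the idea.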

There’s a bit more to read about the project over at AKQA, including the group’s strategy for getting the software out to global construction firms, from John Deere to Caterpillar, but one of the most interesting points of conversation for me here is simply the very idea of geofencing used as a political solution for problems that seem to exceed the capabilities of legislation. And, of course, how geofencing could be used to develop positive new tools for landscape conservation—as we see here—or much darker, nefarious techniques for political domination in the near-future.

You can easily imagine, for example, a dystopian scenario in which geofenced medical prostheses cease to operate when they cross an invisible GPS boundary into an unserviced region—perhaps as a way to protect the host company from the illegal installation of black-market, security-compromised firmware updates, but with immediate and perhaps fatal health effects on the user. Or, say, regions of a metropolis—perhaps near centers of governance or military installations—where civilian vehicles or unregistered photographic equipment of a particular resolution can no longer physically function.

Just as easily, you could imagine something like the spatial opposite of Code of Conscience, where, for example, future GPS-tagged hunting rifles only work when they are located inside permitted wilderness areas. The instant you step outside the field or forest, your gun goes dead.

In any case, you could no doubt write an entire book of short stories that consist only and entirely of such scenarios—the geofenced future of legal probation and house-arrest, for example, or dating apps that only work inside particular rooms or buildings. But one of the most interesting things about Code of Conscience is simply how it attempts to imagine geofencing as a positive political tool, a new technique for landscape and cultural preservation both, and how the project thus joins a larger, ongoing conversation today about political geography seen through a new, technical lens.

(Earlier on BLDGBLOG: The Electromagnetic Fortification of the Suburbs and Geofencing and Investigatory Watersheds.)


[Image: Via Space Saloon].

For the second year in a row, Space Saloon’s Fieldworks program will take place out in the Morongo Valley, in the California desert near both the San Andreas Fault and Joshua Tree National Park.

Fieldworks bills itself as an “experimental design-build festival,” hosted by a “traveling group that investigates perceptions of place.” The program includes guest lectures, hands-on workshops in digital site-documentation, charrettes, and an eventual build-out of a few pavilion-like proposals.

[Image: Via Space Saloon].

You can read more at the Fieldworks website, including this useful FAQ, but it looks like a great opportunity to get your hands dirty in an extraordinary landscape only two hours or so outside Los Angeles.

Click through for the registration page.

Technology, Prehistory, Humanity

[Image: Still from 2001].

For those of you in the Bay Area, the Berkeley Center for New Media is hosting an event on April 3rd that sounds worth checking out. “The Human Computer in the Stone Age: Technology, Prehistory, and the Redefinition of the Human after World War II” is a talk by historian Stefanos Geroulanos. From the event description:

After World War II, new concepts and metaphors of technology helped transform the understanding of human history all the way back to the australopithecines. Using concepts from cybernetics and information theory as much as from ethnology and osteology, scientists and philosophers reorganized the fossil record using a truly global array of fossils, and in the process fundamentally re-conceptualized deep time, nature, and the assemblage that is humanity itself. This paper examines three ways in which technological prehistory, that most distant, speculative, and often just weird field, came to reorganize the ways European and American thinkers and a lay public thought about themselves, their origins, and their future.

This obviously brings to mind the early work of Bernard Stiegler, whose Technics and Time, 1 remains both difficult and worth the read.

In any case, if you happen to attend, let me know how it goes.

(In the unlikely event that you share my taste in electronic music, you might choose to prepare for this lecture by listening to Legowelt’s otherwise unrelated track, “Neolithic Computer.”)

Robot War and the Future of Perceptual Deception

[Image: A diagram of the accident site, via the Florida Highway Patrol].

One of the most remarkable details of last week’s fatal collision, involving a tractor trailer and a Tesla electric car operating in self-driving mode, was the fact that the car apparently mistook the side of the truck for the sky.

As Tesla explained in a public statement following the accidental death, the car’s autopilot was unable to see “the white side of the tractor trailer against a brightly lit sky”—which is to say, it was unable to differentiate the two.

The truck was not seen as a discrete object, in other words, but as something indistinguishable from the larger spatial environment. It was more like an elision.

Examples like this are tragic, to be sure, but they are also technologically interesting, in that they give momentary glimpses of where robotic perception has failed. Hidden within this, then, are lessons not just for how vehicle designers and computer scientists alike could make sure this never happens again, but also precisely the opposite: how we could design spatial environments deliberately to deceive, misdirect, or otherwise baffle these sorts of semi-autonomous machines.

For all the talk of a “robot-readable world,” in other words, it is interesting to consider a world made deliberately illegible to robots, with materials used for throwing off 3D cameras or LiDAR, either through excess reflectivity or unexpected light-absorption.

Last summer, in a piece for New Scientist, I interviewed a robotics researcher named John Rogers, at Georgia Tech. Rogers pointed out that the perceptual needs of robots will have more and more of an effect on how architectural interiors are designed and built in the first place. Quoting that article at length:

In a detail that has implications beyond domestic healthcare, Rogers also discovered that some interiors confound robots altogether. Corridors that are lined with rubber sheeting to protect against damage from wayward robots—such as those in his lab—proved almost impossible to navigate. Why? Rubber absorbs light and prevents laser-based navigational systems from relaying spatial information back to the robot.
Mirrors and other reflective materials also threw off his robots’ ability to navigate. “It actually appeared that there was a virtual world beyond the mirror,” says Rogers. The illusion made his robots act as if there were a labyrinth of new rooms waiting to be entered and explored. When reflections from your kitchen tiles risk disrupting a robot’s navigational system, it might be time to rethink the very purpose of interior design.
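Both failure modes Rogers describes—rubber swallowing the pulse, mirrors implying rooms that don’t exist—can be caricatured in a one-dimensional toy model of a time-of-flight range reading. This is a deliberately crude sketch of the geometry, not how any real lidar or SLAM stack works:

```python
def lidar_reading(surface, wall_dist, behind_dist=0.0):
    """1-D caricature of a time-of-flight range sensor facing a wall.

    matte:  the pulse scatters off the wall -> the true range.
    mirror: the pulse reflects, travels back past the sensor, and
            scatters off a matte object behind_dist behind it -> the
            timed echo implies a phantom surface far beyond the wall.
    rubber: the pulse is absorbed -> no echo, no reading at all.
    """
    if surface == "matte":
        return wall_dist
    if surface == "mirror":
        return 2.0 * wall_dist + behind_dist
    if surface == "rubber":
        return None
    raise ValueError(f"unknown surface: {surface}")
```

The mirror case is the “virtual world beyond the mirror”: a wall three meters away, reflecting a shelf two meters behind the robot, reads as a surface eight meters out—exactly the kind of phantom room a mapping algorithm would dutifully try to explore.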

I mention all this for at least two reasons.

1) It is obvious by now that the American highway system, as well as all of the vehicles that will be permitted to travel on it, will be remade as one of the first pieces of truly robot-legible public infrastructure. It will transition from being a “dumb” system of non-interactive 2D surfaces to become an immersive spatial environment filled with volumetric sign-systems meant for non-human readers. It will be rebuilt for perceptual systems other than our own.

2) Finding ways to throw off self-driving robots will be more than just a harmless prank or even a serious violation of public safety; it will become part of a much larger arsenal for self-defense during war. In other words, consider the points raised by John Rogers, above, but in a new context: you live in a city under attack by a foreign military whose use of semi-autonomous machines requires defensive means other than—or in addition to—kinetic firepower. Wheeled and aerial robots alike have been deployed.

One possible line of defense—among many, of course—would be to redesign your city, even down to the interior of your own home, such that machine vision is constantly confused there. You thus rebuild the world using light-absorbing fabrics and reflective ornament, installing projections and mirrors, screens and smoke. Or “stealth objects” and radar-baffling architectural geometries. A military robot wheeling its way into your home thus simply gets lost there, stuck in a labyrinth of perceptual convolution and reflection-implied rooms that don’t exist.

In any case, I suppose the question is: if, today, a truck can blend in with the Florida sky, and thus fatally disable a self-driving machine, what might we learn from this event in terms of how to deliberately confuse robotic military systems of the future?

We had so-called “dazzle ships” in World War I, for example, and the design of perceptually baffling military camouflage continues to undergo innovation today; but what is anti-robot architectural design, or anti-robot urban planning, and how could it be strategically deployed as a defensive tactic in war?

The Architecture of Delay vs. The Architecture of Prolongation

[Image: A rendering of the “Timeship” cryogenic facility by architect Stephen Valentine, via New Scientist].

The primary setting of Don DeLillo’s new novel, Zero K, is a cryogenic medical facility in the mountainous deserts of Central Asia. There we meet a family that is, in effect, freezing itself, one by one, for reawakening in a speculative second life, in some immortally self-continuous version of the future.

First the mother goes; then the father, far before his time, willfully and preemptively ending things out of loneliness; next would be the son, the book’s ostensible protagonist, if he didn’t arrive with so many reservations about the procedure. Either way, it’s a question of what it means to delay one thing while prolonging another—to preserve one state as a means of preventing another from setting in. One is a refusal to let go of something you already possess; the other is a refusal to accept something you don’t yet have. An addiction to comfort vs. a fear of the new.

Without getting into too many of the book’s admittedly sparse details, it suffices to say that Zero K continues many of DeLillo’s most consistent themes—finance (Cosmopolis), apocalyptic religion (Mao II), the symbolic allure of mathematical analysis (Ratner’s Star).

What makes the book worth a mention here are some of the odder details of this cryogenic compound. It is a monumental space, described with references both to grand scientific and medical facilities—think the Salk Institute, perhaps—as well as to postmodern religious centers, this desert megachurch of the secular afterlife.

Yet its strangest details come from the site’s peripheral ornamentation: there are artificial gardens, for example, filled with resin-based and plastic plant life, and there is a surreal distribution of lifeless mannequins throughout the grounds, standing in penitential silence amongst the fake greenery. Unliving, they cannot die.

These stylized representations of biology, or replicant life forms that come across more like mockery than mimicry, expand the novel’s central conceit of frozen life—life reduced to absolute stillness, placed on pause, in hibernation, in temporal limbo, preserved—out into the landscape itself. It is an obvious symbolism, which is one of the book’s shortcomings; these deathless gardens with their plastic guards remain creepily poetic, nonetheless. These can also be seen as fittingly cynical flourishes for a facility founded on loose talk of singularities, medical resurrection, and quote-unquote human consciousness, as if even the designers themselves were in on the joke.

Briefly, despite my lukewarm feelings about the actual novel, I should say that I really love the title, Zero K. It is, of course, a thermal description—or zero K, zero kelvin, absolute zero, cryogenic perfection. Yet it also refers to an empty digital file—zero k, zero kb—or, perhaps more accurately, a file saved with nothing in it, thus seemingly a quiet authorial nod to the idea that absolutely nothing about these characters is being saved, or preserved, in their quest for immortality. And it is also a nicely cross-literary reference to Franz Kafka’s existential navigator of European political absurdity, Josef K. or just K. From Josef K. to Zero K, his postmodern replacement.

The title, then, is brilliant—and the mannequins and the plastic plant life found at an end-times cryogenic facility in Central Asia make for an amazing set-up—but it’s certainly not one of DeLillo’s strongest books. In fact, I have been joking to people that, if you really want to read a novel this summer written by an aging white male cultural figure known for his avant-garde aesthetics, consider picking up Consumed, David Cronenberg’s strange, possibly too-Ballardian novel about murder, 3D printing, North Korean kidnapping squads, and more, rather than Zero K (or, of course, read both).

In any case, believe it or not, this all came out of the fact that I was about to tweet a link to a long New Scientist article about a cryogenic facility under construction in Texas when I realized that I had more to say than just 140 characters (Twitter, I have found, is actually a competitor to your writing masquerading as an enabler of it—alas, something I consistently re-forget).

There, Helen Thompson takes us to a place called Comfort, Texas.

[Image: Rendering of the “Timeship” facility by architect Stephen Valentine].

“The scene from here is surreal,” Thompson writes. “A lake with a newly restored wooden gazebo sits empty, waiting to be filled. A pregnant zebra strolls across a nearby field. And out in the distance some men in cowboy hats are starting to clear a huge area of shrub land. Soon the first few bricks will be laid here, marking the start of a scientific endeavour like no other.” A “monolithic building” is under construction in Comfort, and it will soon be “the new Mecca of cryogenics.”

Called Timeship, the monolithic building will become the world’s largest structure devoted to cryopreservation, and will be home to thousands of people who are neither dead nor alive, frozen in time in the hope that one day technology will be able to bring them back to life. And last month, building work began.

The resulting facility will include “a building that would house research laboratories, DNA from near-extinct species, the world’s largest human organ biobank, and 50,000 cryogenically frozen bodies.”

The design of the compound is not free of the sort of symbolic details we saw in DeLillo’s novel. Indeed, Thompson explains, “Parts of the project are somewhat theatrical—backup liquid nitrogen storage tanks are covered overhead by a glass-floored plaza on which you can walk surrounded by a fine mist of clouds—others are purely functional, like the three wind turbines that will provide year-round back-up energy.” And then there’s that pregnant zebra.

[Image: An otherwise totally unrelated photo of a circuit, chosen simply for its visual resemblance to the mandala/temple/resurrection facility in Texas; via DARPA].

It’s a long feature, worth reading in full—so click over to New Scientist to check it out—but what captivates me here is the notion that a sufficiently advanced scientific facility could require an architectural design that leans more toward religious symbolism.

What are the criteria, in other words, by which an otherwise rational scientific undertaking—conquering death? achieving resurrection? simulating the birth of the universe?—can shade off into mysticism and poetry, into ritual and symbolism, into what Zero K refers to as “faith-based technology,” and what architectural forms are thus most appropriate for housing it?

In fact, DeLillo presents a political variation on this question in Zero K. At one point, the book’s narrator explains, looking out over the cryogenic facility, “I wondered if I was looking at the controlled future, men and women being subordinated, willingly or not, to some form of centralized command. Mannequined lives. Was this a facile logic? I thought about local matters, the disk on my wristband that tells [the facility’s administrators], in theory, where I am at all times. I thought about my room, small and tight but embodying an odd totalness. Other things here, the halls, the veers, the fabricated garden, the food units, the unidentifiable food, or when does utilitarian become totalitarian.” When does utilitarian become totalitarian.

When do scientific undertakings become religious movements? When does minimalism become a form of political control?

Immersive and Oceanic

By now you’ve no doubt seen Hyper-Reality, the new short film produced by visualization wunderkind Keiichi Matsuda, whose early video experiments, produced while still a student at the Bartlett School of Architecture, I posted about here a long while back.

As you can see in the embedded video, above, Matsuda’s film is a POV exploration of information overload, identity gamification, and the mass burial of public space beneath impenetrable curtains of privately relevant, interactive marketing data, all cranked up to the level of cacophony; when it all shuts off at one point, leaving viewers stranded in a nearly silent, everyday supermarket, the effect is almost therapeutic, an intensely relieving escape back to cognition free from popup ads.

[Image: From Hyper-Reality by Keiichi Matsuda].

I was reminded of Matsuda’s film, however, by the recent news that so-called heads-up displays, or HUDs, are coming to an underwater experience near you: the U.S. Navy has developed an augmented reality helmet for undersea missions.

This unique system enables divers to have real-time visual display of everything from sector sonar (real-time topside view of the diver’s location and dive site), text messages, diagrams, photographs and even augmented reality videos. Having real-time operational data enables them to be more effective and safe in their missions—providing expanded situational awareness and increased accuracy in navigating to a target such as a ship, downed aircraft, or other objects of interest.

Wandering among enemy seamounts, swimming through immersive 3-dimensional visualizations of currents and tides, watching instructional videos for how to infiltrate an adversary’s port defenses, the U.S. Navy attack crews of the near-future will be like characters in an aquatic Hyper-Reality, negotiating drop-down menus and the threat of moray eels simultaneously.

[Image: From Hyper-Reality by Keiichi Matsuda].

This raises the question of how future landscape architects, given undersea terrains as a possible target of design, might use augmented reality on the seabed.

Recall the preservation program underway today in the Baltic Sea, whereby historically valuable shipwrecks are being given interpretive signage to remind people—that is, possible looters—that what they are seeing down there is not mere debris. They are, in effect, swimming amidst an open-water museum, a gallery of the lost and sunken.

So here’s to someone visualizing the augmented reality underwater shipwreck museum of tomorrow, narratives of immersive data gone oceanic.

A Window “Radically Different From All Previous Windows”

[Image: The corridors of LIGO, Louisiana, shaped like a “carpenter’s square”; via Google Earth].

It’s been really interesting for the last few weeks to watch as rumors and speculations about the first confirmed detection of gravitational waves have washed over the internet—primarily, at least from my perspective, because my wife, Nicola Twilley, who writes for The New Yorker, has been the only journalist given early access not just to the results but, more importantly, to the scientists behind the experiment, while writing an article that just went live over at The New Yorker.

It has been incredibly exciting to listen in on partial conversations and snippets of overheard interviews in our home office here, as people like Kip Thorne, Rainer Weiss, and David Reitze, among a dozen others, all explained to her exactly how the gravitational waves were first detected and what it means for our future ability to study and understand the cosmos.

All this gloating as a proud husband aside, however, it’s a truly fascinating story and well worth mentioning here.

LIGO—the Laser Interferometer Gravitational-Wave Observatory—is a virtuoso act of precision construction: a pair of instruments, separated by thousands of miles, used to detect gravitational waves. They are shaped like “carpenter’s squares,” we read, and they stand in surreal, liminal landscapes: surrounded by water-logged swampland in Louisiana and “amid desert sagebrush, tumbleweed, and decommissioned reactors” in Hanford, Washington.

[Image: LIGO, Hanford; via Google Earth].

Each consists of vast, seismically isolated corridors and finely calibrated super-mirrors between which lasers reflect in precise synchrony. These hallways are actually “so long—nearly two and a half miles—that they had to be raised a yard off the ground at each end, to keep them lying flat as Earth curved beneath them.”

To achieve the necessary precision of measurement, [Rainer Weiss, who first proposed the instrument’s construction] suggested using light as a ruler. He imagined putting a laser in the crook of the “L.” It would send a beam down the length of each tube, which a mirror at the other end would reflect back. The speed of light in a vacuum is constant, so as long as the tubes were cleared of air and other particles, the beams would recombine at the crook in synchrony—unless a gravitational wave happened to pass through. In that case, the distance between the mirrors and the laser would change slightly. Since one beam was now covering a shorter distance than its twin, they would no longer be in lockstep by the time they got back. The greater the mismatch, the stronger the wave. Such an instrument would need to be thousands of times more sensitive than any before it, and it would require delicate tuning, in order to extract a signal of vanishing weakness from the planet’s omnipresent din.
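The measurement principle Weiss describes—beams recombining in lockstep unless an arm length changes—can be written down as a toy model. This is only the textbook Michelson relation, not LIGO’s actual signal chain (real detectors add power recycling and Fabry–Pérot arm cavities, and operate near a dark fringe); the 1064 nm default is simply the standard Nd:YAG laser wavelength.

```python
import math

def recombined_power(delta_L, wavelength=1064e-9):
    """Fraction of input power at the output port of an idealized
    Michelson interferometer, for an arm-length difference delta_L
    (meters). Each beam travels its arm twice, so the path difference
    is 2 * delta_L and the phase difference between the recombining
    beams is 4 * pi * delta_L / wavelength."""
    phase = 4.0 * math.pi * delta_L / wavelength
    return math.cos(phase / 2.0) ** 2
```

With equal arms the beams recombine at full brightness; a mismatch of a quarter wavelength puts them perfectly out of step and the port goes dark. The punishing part, which the toy model makes vivid, is the scale: a strain of 10⁻²¹ across a four-kilometer arm shifts delta_L by roughly 10⁻¹⁸ meters, a thousandth of a proton’s width—hence the “omnipresent din” that has to be subtracted away.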

LIGO is the most sensitive instrument ever created by human beings, and its near-magical ability to pick up the tiniest tremor in the fabric of spacetime lends it a fantastical air that began to invade the team’s sleep. As Frederick Raab, director of the Hanford instrument, told Nicola, “When these people wake up in the middle of the night dreaming, they’re dreaming about the detector.”

Because of this hyper-sensitivity, its results need to be corrected against everything from minor earthquakes, windstorms, and passing truck traffic to “fluctuations in the power grid,” “distant lightning storms,” and even the howls of prowling wolves.

When the first positive signal came through, the team was actually worried it might not be a gravitational wave at all but “a very large lightning strike in Africa at about the same time.” (They checked; it wasn’t.)

[Image: “Newton” (1795-c.1805) by William Blake, courtesy of the Tate.]

The big deal amidst all this is that the ability to study gravitational waves is very roughly analogous to the birth of radio astronomy, with the added benefit that gravitational-wave astronomy opens up an entirely new spectrum of observation. Gravitational waves will let us “see” the fabric of spacetime in a way broadly similar to how we can “see” otherwise invisible radio emissions in deep space.

From The New Yorker:

Virtually all that is known about the universe has come to scientists by way of the electromagnetic spectrum. Four hundred years ago, Galileo began exploring the realm of visible light with his telescope. Since then, astronomers have pushed their instruments further. They have learned to see in radio waves and microwaves, in infrared and ultraviolet, in X-rays and gamma rays, revealing the birth of stars in the Carina Nebula and the eruption of geysers on Saturn’s eighth moon, pinpointing the center of the Milky Way and the locations of Earth-like planets around us. But more than ninety-five per cent of the universe remains imperceptible to traditional astronomy… “This is a completely new kind of telescope,” [David] Reitze said. “And that means we have an entirely new kind of astronomy to explore.”

In fact, my “seeing” metaphor, above, is misguided. As it happens, the gravitational waves studied by LIGO in its current state (ever-larger and more powerful versions of the instrument are already being planned) “fall within the range of human hearing.”

If you want to hear spacetime, there is an embedded media player over at The New Yorker with a processed snippet of the “chirp” made by the incoming gravitational wave.

In any case, I’ve already gone on at great length, but the article ends with a truly fantastic quote from Kip Thorne. Thorne, of course, achieved minor celebrity last year when he consulted on the physics for Christopher Nolan’s relativistic time-travel film Interstellar, and he is not lacking for imagination.

Thorne compares LIGO to a window (and my inner H.P. Lovecraft reader shuddered at the ensuing metaphor):

“We are opening up a window on the universe so radically different from all previous windows that we are pretty ignorant about what’s going to come through,” Thorne said. “There are just bound to be big surprises.”

Go read the article in full!

“Building with metals not from Earth”

I missed the story last month that a company called Planetary Resources had successfully 3D-printed a small model using “metals not from Earth”—that is, metal harvested from a meteorite: “Transforming a chunk of space rock into something you can feed into a 3D printer is a pretty odd process. Planetary Resources uses a plasma that essentially turns the meteorite into a cloud that then ‘precipitates’ metallic powder that can be extracted via a vacuum system. ‘It condenses like rain out of a cloud,’ said [a company spokesperson], ‘but instead of raining water, you’re raining titanium pellets out of an iron nickel cloud.’ (…) ‘Everyone has probably seen an iron meteorite in a museum, now we have the tech to take that material and print it in a metal printer using high energy laser. Imagine if we could do that in space.’”

Landscapes of Data Infection

[Image: An otherwise unrelated seed x-ray from the Bulkley Valley Research Centre.]

There’s a fascinating Q&A in a recent issue of New Scientist with doctor and genetic researcher Karin Ljubic Fister.

Fister studies “plant-based data storage,” which relies on a combination of artificially modified genes, bacteria, and “infected” tobacco plants.

Comparing genetic programming with binary code, Fister explains: “First you need a coding system. A computer program is basically a sequence of 0s and 1s, so we transformed this into the four DNA ‘letters’—A, G, C and T—by turning 00 into A, 10 into C, 01 into G and 11 into T. Then we synthesised the resulting DNA sequence. We transferred this artificial DNA into a bacterium and infected the leaf of a tobacco plant with it. The bacterium transfers this artificial DNA into the plant.”
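Fister’s two-bit coding scheme is simple enough to sketch in a few lines of Python (the helper functions and names here are my own illustration of the mapping she describes, not her actual pipeline):

```python
# The two-bit coding scheme Fister describes: 00→A, 10→C, 01→G, 11→T.
BITS_TO_BASE = {"00": "A", "10": "C", "01": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def text_to_dna(text: str) -> str:
    """Encode text as a DNA letter sequence, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(dna: str) -> str:
    """Decode a DNA letter sequence back into text."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

print(text_to_dna("Hi"))  # → GACAGCCG: each character becomes four bases
print(dna_to_text(text_to_dna("Hello World")))  # round-trips cleanly
```

Every ASCII character becomes exactly four bases, so the “Hello World” program really can live, losslessly, in a strand of synthesized DNA.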

Even better, the resulting “infection” is heritable: “We took a cutting of the infected leaf, planted it, and grew a full tobacco plant from it. This is essentially cloning, so all the leaves of this new plant, and its seeds, contained the ‘Hello World’ program encoded in their DNA.” The plants thus constitute an archive of data.

In fact, Fister points out that “all of the archives in the world could be stored in one box of seeds.” Now put that box of seeds in the Svalbard Global Seed Vault, she suggests, and you could store all the world’s information for thousands of years. Seed drives, not hard drives.

It’s worth reading the Q&A in full, but she really goes for it at the end, pointing out at least two things worth highlighting here.

[Image: “Higashiyama III” (1989) by Kozo Miyoshi, courtesy of the University of Arizona Center for Creative Photography; via but does it float.]

One is that specialized botanical equipment could be used as a technical interface to “read” the data stored in plants. The design possibilities here are mind-boggling—and, in fact, are reminiscent of the Landscape Futures exhibition—and they lead directly to Fister’s final, amazing point, which is that this would, of course, have landscape-scale implications.

After all, you could still actually sow these seeds, populating an entire ecosystem with data plants: archives in the form of forests.

“Imagine walking through a park that is actually a library,” she says, “every plant, flower and shrub full of archived information. You sit down on a bench, touch your handheld DNA reader to a leaf and listen to the Rolling Stones directly from it, or choose a novel or watch a documentary amid the greenery.” Information ecosystems, hiding in plain sight.