Ground TV

[Image: An otherwise unrelated temple complex in Indonesia].

“Hardened lava from Indonesia’s Mount Merapi covers ancient temples in the historic city of Yogyakarta,” Archaeology News reports. As if fishing in the ground for lost architecture, “Scientists are using remote sensing equipment to locate them.”

The Jakarta Post elaborates, pointing out that “objects recently found underneath cold lava,” thus “requiring archeologists to use remote sensing equipment to find them,” remain physically ambiguous when they cannot be directly excavated. Indeed, “the equipment cannot determine precisely whether rock is part of a temple construction or not.” In some cases, then, it’s a question of forensic interpretation.

Nonetheless, five entire temples have been discovered so far, locked down there in old lava: the Morangan, Gampingan, Kadisoko, Sambisari and Kimpulan temples, “buried between 2 and 9 meters deep.” That’s nearly thirty feet of rock—a once-liquid landscape covering blurred remnants of an otherwise overwritten past, architectural history by way of subterranean remote-sensing.

I should point out, meanwhile, that Archaeology News also links to a quick story taking place out here in greater Los Angeles: a parking lot in Ventura, at the intersection of Palm and Main streets, is under archaeological investigation. “Researchers this week are crisscrossing the parking lot using ground-penetrating radar,” the Ventura County Star explains, “in search of anomalies below the asphalt that could be artifacts or building foundations from years past. Archaeologists will return to excavate by hand those areas believed to contain artifacts.”

I love the idea that the surface of a parking lot could become something like a new screen technology—a depth-cinema of lost evidence from earlier phases of human history, shining from within with archaeological remains as researchers walk back and forth above.

Imagine the archaeological cinema of the future—some massive open parking lot in Istanbul, say, where crowds arrive, milling about, tickets in hand, and then, like the giant LED screen from the Beijing Olympics, the city’s archaeological past is revealed in 3D: hologram-like structures shivering there inside the surface of the earth, below everyone’s feet in real-time, the planet become an immersive TV screen on which we can view the debris of history.

Weather Warfare

[Image: From Elements of War by Kalypso Media].

A forthcoming game called Elements of War takes weaponized weather-control as its central theme, “where armies manipulate the forces of nature to rain down destruction on their foes or gain a tactical advantage by transforming the battlefield with hurricanes, tornadoes and earthquakes.”

It is set in the United States in a period “after a secret military weather control experiment sets in motion a near-complete global climate collapse,” featuring “unconventional units” fighting “for control of fearsome weather-based weapons, granting them the power to use tornadoes, hurricanes, earthquakes, torrential rains and other forces of nature as weapons of war.”

[Image: From Elements of War by Kalypso Media].

The game comes out in February 2011, so I haven’t played it and am basing this solely on a recent press release; I thus can’t vouch for its actual execution or gameplay.

Nonetheless, I’m intrigued to see how the game’s “six weather-based weapons,” allowing players to “dominate and transform the battlefield with tornadoes, earthquakes, hurricanes and other elements of war, impacting supply lines, slowing troop movements and devastating the enemy,” work out.

[Image: From Elements of War by Kalypso Media].

The game promises “realistic destruction physics,” and I would hope that the weapons themselves—there is apparently an internal game list of what the designers call “‘What If’ Weaponry”—are actually interesting, and not just repurposed tanks, their cannons firing storms, or rifles shooting electrically active lightning rounds or something similar.

In fact, the possibilities for genuinely reinvented tools of weather-warfare become pretty delirious after a point, whether it’s something as basic as shoulder-fired devices packed with microtornadic winds or whole fields sown with air-pressure bombs that generate inland hurricanes upon timed detonation.

Long-term seismic-resonance grenades; liquefaction earth-storms; Instant Glacier™ humidity-solidification traps; stationary magnetosphere-deflection architecture.

[Image: A “rain-making machine” via Modern Mechanix].

In an article I often cite here, originally published in The Wilson Quarterly, weather historian James Fleming explains that, as early as World War II, “some in the military had already recognized the potential uses of weather modification, and the subject has remained on military minds ever since. In the 1940s, General George C. Kenney, commander of the Strategic Air Command, declared, ‘The nation which first learns to plot the paths of air masses accurately and learns to control the time and place of precipitation will dominate the globe.’”

Fleming continues:

Howard T. Orville, President Dwight D. Eisenhower’s weather adviser, published an influential 1954 article in Collier’s that included a variety of scenarios for using weather as a weapon of warfare. Planes would drop hundreds of balloons containing seeding crystals into the jet stream. Downstream, when the fuses on the balloons exploded, the crystals would fall into the clouds, initiating rain and miring enemy operations. The Army Ordnance Corps was investigating another technique: loading silver iodide and carbon dioxide into 50-caliber tracer bullets that pilots could fire into clouds. A more insidious technique would strike at an adversary’s food supply by seeding clouds to rob them of moisture before they reached enemy agricultural areas. Speculative and wildly optimistic ideas such as these from official sources, together with threats that the Soviets were aggressively pursuing weather control, triggered what Newsweek called “a weather race with the Russians,” and helped fuel the rapid expansion of meteorological research in all areas, including the creation of the National Center for Atmospheric Research, which was established in 1960.

Many of these climatological strategies ultimately came together in the form of Operation Popeye, during the Vietnam War. As Fleming explains, “Operating out of Udorn Air Base, Thailand, without the knowledge of the Thai government or almost anyone else, but with the full and enthusiastic support of presidents Lyndon B. Johnson and Richard M. Nixon, the Air Weather Service flew more than 2,600 cloud seeding sorties and expended 47,000 silver iodide flares over a period of approximately five years at an annual cost of some $3.6 million.”

In any case, I could go on and on about weaponized climatology; for now, it seems no surprise that weather-weapons would be making their way as offensive tools into new computer games.

(Via Jim Rossignol; earlier on BLDGBLOG: Tactical Landscaping and Terrain Deformation).

Spatial Gameplay in Full-Court 3D

Japan is distinguishing its bid to host the 2022 World Cup with a plan to broadcast the entire thing as a life-size hologram.

[Image: Courtesy of the Japan Football Association/CNN].

“Japanese organizers say each game will be filmed by 200 high definition cameras, which will use ‘freeviewpoint’ technology to allow fans to see the action unfold from a player’s eye view—the kind of images until now only seen in video games,” CNN reports.

[Image: Courtesy of the Japan Football Association/CNN].

British football theorist Jonathan Wilson puts an interestingly spatial spin on the idea: “Speaking as a tactics geek,” he said to CNN, “the problem watching games on television is it’s very hard to see the shape of the teams, so if you’re trying to assess the way the game’s going, if you’re trying to assess the space, how a team’s shape’s doing and their defense and organization, then this will clearly be beneficial.”
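
Wilson’s “shape of the teams” is, at root, something computable: given player coordinates from a free-viewpoint feed, even a toy script can summarize a team’s geometry. The metrics and coordinates below are my own invention, purely for illustration:

```python
# Toy "tactics geek" metrics: summarize a team's shape as its centroid
# (average position) and spread (mean distance of players from that
# centroid). Coordinates are invented for illustration.

def team_shape(positions: list[tuple[float, float]]):
    """Return ((cx, cy), spread) for a list of player (x, y) positions."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    spread = sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                 for x, y in positions) / n
    return (cx, cy), spread

# A hypothetical back four, strung across the pitch in a flat line:
back_four = [(20.0, 10.0), (20.0, 30.0), (20.0, 50.0), (20.0, 70.0)]
centroid, spread = team_shape(back_four)
print(centroid, round(spread, 1))  # (20.0, 40.0) 20.0
```

A compressing defense would show up as a shrinking spread; a team losing its shape, as a growing one.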

Watching a sport becomes a new form of spatial immersion into strategic game geometries.

[Image: Courtesy of the Japan Football Association/CNN].

Of course, there’s open disbelief that Japan can actually deliver on this promise—it is proposing something based on technology that does not quite exist yet, on the optimistic assumption that all technical problems will be worked out in 12 years’ time.

But the idea of real-time, life-size event-holograms being beamed around the world as a spatial replacement for TV imagery is stunning.

(Thanks to Judson Hornfeck for the tip!)

Architecturally Armed

[Image: Photo by Vincent Fournier, courtesy of Wired UK].

This morning’s post about a robot-city on the slopes of Mount Fuji reminded me of this thing called the CyberMotion Simulator, operated by the Max Planck Institute for Biological Cybernetics in Germany (and featured in this month’s issue of Wired UK).

The Simulator, Wired writes, is “a RoboCoaster industrial robotic arm adapted and programmed to simulate an F1 Ferrari F2007.”

Testers are strapped into a cabin two metres above ground, and use a steering wheel, accelerator and brake to control CyberMotion. The simulator can provide accelerations of 2G and its display shows a 3D view of the circuit at Monza. The arm’s six axes allow for the replication of twists and turns on the track and can even turn the subjects upside down.

But I’m curious what everyday architectural uses such a robo-arm might have. An office full of moving cubicles held aloft by black robotic arms that lift, turn, and rotate each desk based on who the worker wants to talk to; mobile bedroom furniture for a depressed ex-astronaut; avant-garde set design for a new play in East London; a vertigo-treatment facility designed by Aristide Antonas; surveillance towers for traffic police in outer Tokyo; a hawk-watching platform in Fort Washington State Park.

You show up for your first day of high school somewhere in a Chinese colonial city in central Africa and find that everyone—in room after room, holding hundreds of people—is sitting ten feet off the ground in these weird and wormy chairs, dipping and weaving and reading Shakespeare.

The Robot A-Z

[Image: The yellow chipboards of the Fanuc global headquarters; courtesy of Fanuc].

On the flight back to Los Angeles yesterday I read about the corporate campus of Fanuc, “a secretive maker of robots and industrial automation gear,” according to Bloomberg Businessweek.

“Some 60 percent of the world’s precision machine tools use Fanuc’s controls,” the article explains, “which give lathes, grinders, and milling machines the agility to turn metal into just about any manufactured product.” As if suggesting a future art installation by Jeff Koons—sponsored by Boeing—we read about a man who uses “a milling machine with Fanuc controls to sculpt 747 parts.” (The company’s robot A-Z shows off its other goods.)

[Image: Assembly robots by Fanuc].

But it’s the description of the firm’s actual facilities that caught my eye. “Fanuc’s headquarters, a sprawling complex in a forest on the slopes of Mount Fuji, looks like something out of a sci-fi flick”:

Workers in yellow jumpsuits with badges on their shoulders trot among yellow buildings as yellow cars hum along pine-lined roads. Fanuc lore holds that the founder, Seiuemon Inaba, believed yellow “promotes clear thinking.” Inside the compound’s windowless factories, an army of (yes, yellow) robots works 24/7. “On a factory floor as big as a football field you might see four people. It’s basically just robots reproducing themselves.”

Thing is, if you want to see more—to see this strange origin-site for contemporary intelligent machines—you can’t. “Outsiders are rarely allowed inside the facility, and workers not engaged in research are barred from labs,” Businessweek adds. “‘I can’t even get in,’ quips a board member who asks that his name not be used.”

In a way, I’m reminded of South Korea’s plans for its own “Robot Land,” an “industrial city built specifically for the robotics industry” that will have “all sorts of facilities for the research, development, and production of robots, as well as things like exhibition halls and even a stadium for robot-on-robot competitions.”

Here, though, alone amidst other versions of themselves in the pines of Mt. Fuji, “the world’s most reliable robots” take shape in secret, shelled in yellow, reproducing themselves, forming a robot city of their own.

Liquid Radio

Could temporary jets of seawater be used as functioning radio antennas? Apparently so: as PopSci reports, “communications are vital” for vessels at sea, but deck space for “all the large antennas necessary for long-range (and often encrypted) communications” can be hard to come by. “So U.S. Navy R&D lab SPAWAR Systems Center Pacific (SSC Pacific) engineered a clever scheme to turn the ocean’s most abundant resource into communications equipment, making antennas out of geysers of seawater.”

Using arcing vaultworks of oceanwater, like domesticated waves, to beam and receive encrypted telecommunications not only reduces the metal-load of ships—thus also reducing the radar profile of military vessels—but also offers a way to construct “a quick, temporary antenna that could just as easily be dismantled.”

What they [SPAWAR] came up with is little more than an electromagnetic ring and a water pump. The ring, called a current probe, creates a magnetic field through which the pump shoots a stream of seawater (the salt is a key ingredient, as the tech relies on the magnetic induction properties of sodium chloride). By controlling the height and width of the [stream], the operator can manipulate the frequency at which the antenna transmits and receives. An 80-foot-high stream can transmit and receive anywhere from 2 to 400 MHz, though much smaller streams can be used for varying other frequencies, ranging from HF through VHF to UHF.
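
SPAWAR hasn’t published its exact tuning formula, but the basic physics is easy to sketch: if you treat the jet as a simple quarter-wave monopole (an assumption of mine, not the Navy’s), a back-of-the-envelope calculation lands near the bottom of that reported 2–400 MHz band:

```python
# Back-of-the-envelope sketch: treat the seawater jet as a quarter-wave
# monopole, whose resonant frequency is f = c / (4 * height).
# Assumption: real tuning also depends on stream width and salinity,
# which this toy model ignores entirely.

C = 299_792_458.0     # speed of light, m/s
FEET_PER_METER = 3.28084

def monopole_frequency_mhz(height_feet: float) -> float:
    """Approximate resonant frequency (MHz) of a quarter-wave monopole."""
    height_m = height_feet / FEET_PER_METER
    return C / (4 * height_m) / 1e6

# An 80-foot stream resonates at roughly 3 MHz, consistent with the
# low end of the range PopSci reports:
print(round(monopole_frequency_mhz(80), 1))  # 3.1
```

Shorter streams resonate higher, which is presumably why “much smaller streams” cover the VHF and UHF bands.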

Turning seawater into a temporary broadcast architecture is absolutely fascinating to me and has some extraordinary design implications for the future. Pirate radio stations made entirely from spiraling pinwheels of saltwater; cell-phone masts disguised as everyday displays spurting seasonally in public parks, from Moscow to Manhattan; TV towers replaced with Busby Berkeley-like aquatic extravaganzas, camouflaging the electromagnetic infrastructure of the city as a gigantic water garden.

[Image: A mountainous display of women closely choreographed with water by Busby Berkeley, via Alexander Trevi’s Pruned].

Given some salt, for instance, the Trevi Fountain could begin retransmitting mobile phone calls throughout the heat-rippling summer landscape of greater Rome. Ultra-refined specialty saltwaters offer dependable signal clarity in audio HD. La Machine de Marly becomes a buried industrial art project, beaming death metal salt hydrologies to garden visitors: a continuous fountain of thundering music on FM, headbanging to seawater hifi. Espionage conspiracies involving elaborate, deep-cover radio links hidden inside public fountains.

So how could this be further explored in the contexts of tidal river waters—Thames Radio!—rogue waves, and even tsunamis? The artistic, architectural, musical, and infrastructural misuse of this technology is something I very much look forward to hearing in the future.

The Museum of Speculative Archaeological Devices

Perhaps a short list of speculative mechanisms for future archaeological research would be interesting to produce.

[Image: A toy antique oscilloscope by Andrew Smith, courtesy of Gadget Master and otherwise unrelated to this post].

Ground-scanners, Transparent-Earth (PDF) eyeglasses, metal detectors, 4D earth-modeling environments used to visualize abandoned settlements, and giant magnets that pull buried cities from the earth.

Autonomous LIDAR drones over the jungles of South America. Fast, cheap, and out of control portable muon arrays. Driverless ground-penetrating radar trucks roving through the British landscape.

Or we could install upside-down periscopes on the sidewalks of NYC so pedestrians can peer into subterranean infrastructure, exploring subways, cellars, and buried streams. Franchise this to London, Istanbul, and Jerusalem, scanning back and forth through ruined foundations.

Holograph-bombs—ArchaeoGrenades™—that spark into life when you throw them, World of Warcraft-style, out into the landscape, and the blue-flickering ancient walls of missing buildings come to life like an old TV channel, hazy and distorted above the ground. Mechanisms of ancient light unfold to reveal lost architecture in the earth.

[Image: An LED cube by Pic Projects, otherwise unrelated to this post].

Or there could be football-field-sized milling machines that re-cut and sculpt muddy landscapes into the cities and towns that once stood above them. A peat-bog miller. Leave it operating for several years and it reconstructs whole Iron Age villages in situ.

Simultaneous milling/scanning devices that bring into being the very structures they claim to study. Ancient fortifications 3D-printed in realtime as you scan unreachable sites beneath your city’s streets.

Deep-earth projection equipment that impregnates the earth’s crust with holograms of missing cities, outlining three-dimensional sites a mile below ground; dazed miners stumble upon the shining walls of imaginary buildings like a laser show in the rocks around them.

Or a distributed iPhone app for registering and recording previously undiscovered archaeological sites (through gravitational anomalies, perhaps, or minor compass swerves caused by old iron nails, lost swords, and medieval dining tools embedded in the ground). Like SETI, but archaeological and directed back into the earth. As Steven Glaser writes in the PDF linked above, “We can image deep space and the formation of stars, but at present we have great difficulty imaging even tens of meters into the earth. We want to develop the Hubble into, not away from, the earth.”
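
As a hedged sketch of how such an app might flag a “compass swerve,” here is a toy anomaly detector: it compares each heading against a trailing baseline and flags large deviations. The window size, threshold, and readings are all invented for illustration; a real magnetometer pipeline would be far noisier.

```python
# Toy crowd-sensing sketch: flag compass headings that swerve from a
# rolling baseline, as buried iron might cause. All numbers invented.

from statistics import mean

def anomalies(headings: list[float], window: int = 5,
              threshold: float = 10.0) -> list[int]:
    """Indices where a heading deviates from the trailing mean
    by more than `threshold` degrees."""
    flagged = []
    for i in range(window, len(headings)):
        baseline = mean(headings[i - window:i])
        if abs(headings[i] - baseline) > threshold:
            flagged.append(i)
    return flagged

# A walk heading steadily east, with one sharp swerve mid-route:
walk = [90.0, 91.0, 89.5, 90.2, 90.1, 118.0, 90.3]
print(anomalies(walk))  # [5]
```

Aggregate enough flagged points from enough pedestrians and the swerves themselves become a map, a crowd-drawn outline of whatever lies below.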

Artificially geomagnetized flocks of migratory birds, like “GPS pigeons,” used as distributed earth-anomaly detectors in the name of experimental archaeology.

[Image: “GPS pigeons” by Beatriz da Costa, courtesy of Pruned].

So perhaps there could be two simultaneous goals here: to produce a list of such devices—impossible tools of future excavation—but also to design a museum for housing them.

What might a museum of speculative archaeological devices look like? A Mercer Museum for experimental excavation?

(Thanks to Rob Holmes and Alex Trevi for engaging with some of these ideas over email).

Trap Rooms

While finalizing my slides for tonight’s lecture at SCI-Arc, I was reading again about one of my favorite topics: trap streets, or deliberate cartographic errors introduced into a map so as to catch acts of copyright infringement by rival firms.

[Images: A “trap street” on Google Maps, documented by Luistxo eta Marije].

In other words, if a competitor’s map includes your “trap street”—a fictitious geographic feature that you’ve invented outright—then you (and your lawyers) will know that they’ve simply nicked your data, giving it a quick redesign and trying to pass it off as their own.
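
In code, the whole scheme is almost embarrassingly simple, as this toy sketch suggests. The feature names here are my own inventions (one of them nods to Argleton, a famously nonexistent Lancashire town that once appeared on Google Maps):

```python
# Toy illustration of the trap-street scheme: seed your own dataset
# with invented features, then test a rival's map for them. If any
# appear, the rival copied your data. All names invented.

TRAP_FEATURES = {"Argleton Lane", "Lye Close", "Moat Pond"}  # fictitious

def copied_traps(rival_features: set[str]) -> set[str]:
    """Return any of our invented features found in a rival's map."""
    return TRAP_FEATURES & rival_features

rival_map = {"Main Street", "Palm Street", "Argleton Lane"}
print(copied_traps(rival_map))  # {'Argleton Lane'}
```

The hard part, of course, is not the set intersection but the cartographic sleight of hand: inventing features innocuous enough that no one ever tries to drive down them.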

But this strategy of willful cartographic deception is not always limited to streets: there can be trap parks, trap ponds, trap buildings.

And trap rooms.

Earlier this week, I was reading about the rise of indoor navigation apps for mobile phones, apps that will help you find your way through otherwise bewildering interior environments. Large shopping malls, for instance, or unfamiliar subway stations.

From the New York Times:

A number of start-up companies are charting the interiors of shopping malls, convention centers and airports to keep mobile phone users from getting lost as they walk from the food court to the restroom. Some of their maps might even be able to locate cans of sardines in a sprawling grocery store.

Whichever company can upload the most floorplans before everyone else will, presumably, have quite an economic advantage. So how could you protect your proprietary map sets? What if you’re the only company in the world with access to maps of a certain convention center or sports stadium or new airport terminal—how could you keep a rival firm from simply jacking your cartography?

[Image: Photo by Laura Pedrick for The New York Times].

Introduce false information, perhaps: trap halls, trap stairs, trap attics, trap rooms. Nothing sinister—you don’t want people fleeing toward an emergency stairway that doesn’t exist in the event of a real-life fire—but why not an innocent janitorial closet somewhere or a freight elevator that no one could ever access in the first place? Why not a mysterious door to nowhere, or a small room that somehow appears to be within the very room you’re standing in?

It seems to be a mapping error—but it’s actually there for copyright protection. It’s a trap room.

On one level, I’m reminded of a minor detail from Joe Flood’s recent book The Fires, where we read that John O’Hagan, New York City’s Fire Commissioner, used to drive around town with blueprints of local buildings stored in the trunk of his car. If there was ever a fire in one of those structures, and his men would have to find their way through smoke-filled, confusing hallways, O’Hagan would have the maps. But is there a kind of Fire Department iPhone app? Could this be downloaded by everyday citizens and used in the event of emergency? What about a Seismic App for earthquake-prone cities like Los Angeles? Going into any building becomes a considerably safer thing to do, as your phone automagically downloads the relevant floorplans. Perhaps buildings known to be fire hazards, or known to be earthquake-unsafe, are somehow red-flagged as a warning before you step inside. (In such a context, the first person to become Mayor on foursquare of every earthquake-unsafe building in Los Angeles wins cult status amongst certain social groups).

But I’m also curious about less practical things, such as what cultural, even psychological, effects the presence of trap rooms might actually have. Games could be launched, the purpose of which is to find and occupy as many trap rooms as possible. New paranoias emerge, that the room featured above your apartment on the new app you just downloaded is not really there at all; it’s a trap room. You can’t sleep at night, worried that you actually have no neighbors, that you’re the last person on earth and every building around you is a dream. There are panic attacks and feelings of unreality, that no map can be trusted, that you’ve been living in a trap building all along. An Atlas of Trap Rooms is then released, with a foreword by Kevin Slavin.

These and other subtle geographies—trap architectures—awaiting detection all around us.

Of networked buildings and architectural neurology

[Image: A glimpse of Honda’s brain-interface technology].

I thought I’d jump into the ongoing conversation swirling around Tim Maly’s Cyborg Month—of which you can read more here—with some loose thoughts about what an architectural cyborg might be.

There have already been some significant stabs made in this direction over the past few weeks, including a brief look at “architecture machines”—that is, “evolving systems that worked in ‘symbiosis’ with designer and resident,” promising to “turn the design process into a dialogue that would alter the traditional human-machine dynamic” and thus opening up the possibility of cyborg architecture.

But my interests here are both more speculative and more neurological—specifically, looking at the wiring together of buildings and nervous systems, and the strange possibilities that might result. As such, I’ll be revisiting/rewriting some older posts here, tailoring them specifically for the context of Maly’s Cyborg Month.

[Image: Earthly extensions crawl on Mars; courtesy of NASA/JPL-Caltech].

1) A few years ago, two unrelated bits of news accidentally merged for me, their headlines crossing to surreal effect. First, we learned that monkeys were able to move a robotic arm “merely by thinking.” The arm, which included “working shoulder and elbow joints and a clawlike ‘hand’,” was controllable after “probes the width of a human hair were inserted into the neuronal pathways of the monkeys’ motor cortex.” This field of research is referred to as “mind-controlled robotic prosthetics”—but the mind in control here is not human.

Second, the New York Times reported that “NASA’s Phoenix Mars lander has successfully lifted its robotic arm” up there on the surface of another planet. “Testing the arm will take a few days,” we read, “and the first scoops of Martian soil are to be dug up next week.”

And while I know that these stories are in no way connected, putting them together is like something from the pages of Mike Mignola: monkeys locked in a room somewhere, controlling the arms of machines on other planets.

As if we might discover, at the end of the day, that NASA wasn’t a human organization at all—it was a bunch of rhesus monkeys locked in a lab somewhere, enthroned amidst wires and brain-caps, like some new sign of the Tarot, lost in private visions of machines on alien worlds. An experiment gone awry.

Their “dreams” at night are actually video feeds from probes moving through outer darkness.

[Image: A “Demon” unmanned aerial drone by BAE Systems, courtesy of Popular Science].

2) Among many other things in P.W. Singer’s highly recommended book Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century is a brief comment about military research into the treatment of paralysis.

In a subsection called “All Jacked Up,” Singer refers to “a young man from South Weymouth, Massachusetts,” who was “paralyzed from the neck down in 2001.” After nearly giving up hope for recovery, “a computer chip was implanted into his head.”

The goal was to isolate the signals leaving [his] brain whenever he thought about moving his arms or legs, even if the pathways to those limbs were now broken. The hope was that [his] intent to move could be enough; his brain’s signals could be captured and translated into a computer’s software code.

The man’s doctors thus hooked him up to a computer mouse and then to a TV remote control, and the wounded man was soon able not only to surf the web but to watch HBO.

What I can’t stop thinking about, however, is where this research “opens up some wild new possibilities for war,” as Singer writes.

In other words, the military has asked, why hook this guy up to a TV remote control when you could hook him up to an armed drone aircraft flying somewhere above Afghanistan? The soldier could simply pilot the plane with his thoughts.

This vision—of paralyzed soldiers thinking unmanned planes through distant theaters of war—is both terrible and stunning.

Singer goes on to describe DARPA’s “Brain-Interface Project,” which hoped to teach paralyzed patients how to control machines via thought—and to do so in the service of the U.S. military.

Later in the book, Singer describes research into advanced, often robotic prostheses; “these devices are also being wired directly into the patient’s nerves,” he writes.

This allows the soldier to control their artificial limbs via thought as well as have signals wired back into their peripheral nervous system. Their limbs might be robotic, but they can “feel” a temperature change or vibration.

When this is put into the context of the rest of Singer’s book—where we read, for instance, that “at least 45 percent of [the U.S. Air Force’s] future large bomber fleet [will be] able to operate without humans aboard,” with other “long-endurance” military drones soon “able to stay aloft for as long as five years,” and if you consider that, as Singer writes, the Los Angeles Police Department “is already planning to use drones that would circle over certain high-crime neighborhoods, recording all that happens”—you get into some very heady terrain, indeed. After all, the idea that those drone aircraft circling over Los Angeles in the year 2015 are actually someone else’s literal daydream both terrifies and blows me away.

On the other hand, if you can directly link the brain of a paralyzed soldier to a computer mouse—and then onward to a drone aircraft, and perhaps onward again to an entire fleet of armed drones circling over enemy territory—then surely you could also hook that brain up to, say, lawnmowers, remote-controlled tunneling machines, lunar landing modules, Mars rovers, strip-mining equipment, woodworking tools, and even 3D printers.

[Image: 3D printing, via Thinglab].

The idea of brain-controlled wireless digging machines, in particular, just astonishes me; at night you dream of tunnels—because you are actually in control of tunneling equipment as you sleep, operating somewhere beneath the surface of the planet.

A South African platinum mine begins to diverge wildly from known sites of mineral wealth, its excavations more and more abstract as time goes on—carving M.C. Escher-like knots and strange excursive whorls through ancient reefwork below ground—and it’s because the mining engineer, paralyzed in a car accident ten years ago and in control of the digging machines ever since, has become addicted to morphine.

Or perhaps this could even be used as a new and extremely avant-garde form of psychotherapy. For instance, a billionaire in Los Angeles hooks his depressed teenage son up to Herrenknecht tunneling equipment which has been shipped, at fantastic expense, to Antarctica. An unmappably complex labyrinth of subterranean voids is soon created; the boy literally acts out through tunnels. If rock is his paint, he is its Basquiat.

Instead of performing more traditional forms of Freudian analysis by interviewing the boy in person, a team of highly specialized dream researchers is instead sent down into those artificial caverns, wearing North Face jackets and thick gloves, where they deduce human psychology from moments of curvature and angle of descent.

My dreams were a series of tunnels through Antarctica, the boy’s future headstone reads.

[Image: The hieroglyphic end of a Canadian potash mine; courtesy of AP/The Australian].

Returning to Singer, briefly, he writes that “Many robots are actually just vehicles that have been converted into unmanned systems”—so if we can robotize aircraft, digging machines, riding lawnmowers, and even heavy construction equipment, and if we can also directly interface the human brain to the controls of these now wireless robotic mechanisms, then the design possibilities seem limitless, surreal, and well worth exploring (albeit with great moral caution) in real life.

3) What, then, in this context, might an architectural cyborg be? While it’s tempting to outline a number of scenarios in which a human brain could be directly wired into, say, the elevator control room of a downtown high-rise, or into the traffic lights of a Chinese metropolis, this scenario could also be disturbingly reversed.

In other words, why have a building somehow controlled by a human brain, when a human brain could instead be controlled by a building?

Like something out of Coma—Michael Crichton’s film of the Robin Cook novel—or even the film Hannibal (NSFW and highly disturbing!)—future elevator banks in New York’s replacement World Trade Center cause wireless twitching in an otherwise bed-bound patient. That is, the patient moves because of the elevators, and not the other way around.

Imagine a zombie horror film, complete with stumbling hordes guided not by demonic hunger but by the malfunctioning HVAC system of a building outside town…

[Image: A circuit diagram].

At this point, though, I’d rather step back from these morally uncomfortable images and suggest instead that buildings connected to other buildings might form their own ersatz neurology: like the hacked brain of a military paralytic, one building’s elevators would actually control the elevators in another building.

Networked examples of this are easy enough to invent: the computer system of one building is cross-wired into the circuitous guts of another structure, be it a skyscraper, an airplane, a geostationary satellite, a moving truck, or an interstellar probe built by NASA (and why stop at buildings—why not networked plants?). The changing speeds of a building’s escalators become more like graphs: responding to—and thus diagramming—signals from a rover on Mars.

They are pieces of equipment, we might say, neurologically interfering with one another.

In many ways, this just takes us back to the cybernetic designs mentioned earlier, but it also leads to a general question: are two buildings hooked up to each other, in the most intimate ways, their HVACs purring in perfect co-harmony, responding to and controlling one another, each incomplete without its cross-wired partner, actually cyborgs?

For more posts in Tim Maly’s ongoing series, check out 50 Posts About Cyborgs.

Augmented Metropolis

Keiichi Matsuda, a recent graduate from the Bartlett School of Architecture, whose film Domestic Robocop was featured on BLDGBLOG several months ago, has a new project out: Augmented City. And it’s in 3D.

The film “focuses on the deprogramming of architecture and the spontaneous creation of customised, aggregated spaces,” Matsuda writes. We see its central protagonist surrounded by pop-up menus and projected touchscreens, able to switch urban backgrounds—graffiti to gardens—in an instant. From the project description:

The architecture of the contemporary city is no longer simply about the physical space of buildings and landscape, more and more it is about the synthetic spaces created by the digital information that we collect, consume and organise; an immersive interface may become as much part of the world we inhabit as the buildings around us.
Augmented Reality (AR) is an emerging technology defined by its ability to overlay physical space with information. It is part of a paradigm shift that succeeds Virtual Reality; instead of disembodied occupation of virtual worlds, the physical and virtual are seen together as a contiguous, layered and dynamic whole. It may lead to a world where media is indistinguishable from ‘reality’. The spatial organisation of data has important implications for architecture, as we re-evaluate the city as an immersive human-computer interface.

The film is even better, Matsuda points out, with 3D glasses. Watch it here, over at Vimeo, or on YouTube.

(Related: Transcendent City).

Optical Spelunking

[Image: The CAVE at the Desert Research Institute in Reno, now called the CAVCaM].

I mentioned a week or two ago that I had been out to Reno, Nevada, visiting, among other things, the Desert Research Institute, where Nicola Twilley of Edible Geography, Mark Smout of Smout Allen, and I began a road trip down to Los Angeles, through San Francisco—less a city than a peninsular amphitheater of conflicting microclimates—by way of the Virtual Reality CAVE that you see pictured here.

[Image: Daniel Coming, Principal Investigator of the CAVCaM, manipulates geometries that don’t exist, and we photograph him as he does so].

The facility is no longer called the CAVE, I should add; it’s now the CAVCaM, or Center for Advanced Visualization, Computation and Modeling. CAVCaM “strives to maintain a state-of-the-art visualization system, improve data collections, simulations, and analyses of scientific information from the environment.”

Advancements will create new capabilities for multidisciplinary research, produce top tier visualization environments for use by the broader scientific community, and offer opportunities to improve management decisions including prediction, planning, mitigation, and public education throughout Nevada and the world.

It also blows the minds of landscape theorists and practitioners in the process.

[Image: Touring virtual light].

In most of the photos here you see Matthew Coolidge from the Center for Land Use Interpretation, Bill Gilbert from the Land Arts of the American West program, and activist landscape historian and theorist Lucy Lippard all trying their hands at setting virtual forest fires, chasing digital terrains off cliffs, and navigating a world of overlapping proximities that sewed together around us like high-end neurological garmentry—a perfectly tailored world of pharaonic nonexistence, standing in tombs of imagery and light—to become almost seamlessly 3D. Glimpsing, in advance, possible afterlives of the optic nerve.

[Image: Cthulhoid satellites appear in space before you, rotating three-dimensionally in silence].

Of course, these photos also show the intrepid Dr. Daniel Coming, “Principal Investigator” of the CAVCaM—a fantastic job title, implying that this strange machinic environment that the DRI has built isn’t so much put to use, in a dry, straightforward, functional way, as investigated, researched, explored. Daniel showed us all how to use the hand controls, putting on a display of virtual light and shadows. Objects that were never built, reflecting light that isn’t real.

We were all there on an invitation from the staff of the Nevada Museum of Art—who don’t appear in these photographs, but were absolutely key in making this tour happen.

[Images: Photos by BLDGBLOG and Nicola Twilley].

For whatever reason, meanwhile, that last photograph, above, featuring Matthew Coolidge, Bill Gilbert, and Lucy Lippard seemingly entranced—as we all were—by this new altarpiece of virtual surfaces, reminds me of the final lines from R.S. Thomas’s old poem “Once”:

Confederates of the natural day,
We went forth to meet the Machine.

Or perhaps it was the Machine that had come to meet us.

[Image: The CAVCaM reboots after a universe of simulation].