Hard Drives, Not Telescopes

[Image: Via @CrookedCosmos].

More or less following on from the previous post, @CrookedCosmos is a Twitter bot programmed by Zach Whalen, based on an idea by Adam Ferriss, that digitally manipulates astronomical photography.

It describes itself as “pixel sorting the cosmos”: skipping image by image through the heavens and leaving behind its own idiosyncratic scratches, context-aware blurs, stutters, and displacements.
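For what it's worth, the core of pixel sorting is simple enough to sketch. Assuming a grayscale image stored as a NumPy array (the function name and brightness threshold below are my own inventions; the bot's actual code surely differs), one common variant sorts only the bright runs in each row, leaving the dark sky untouched:

```python
import numpy as np

def pixel_sort_rows(image, threshold=100):
    """Sort pixels by brightness within each contiguous bright run
    of every row, leaving darker pixels (the night sky) in place."""
    out = image.copy()
    for row in out:  # each row is a view into `out`
        start = None
        for i, value in enumerate(row):
            if value > threshold and start is None:
                start = i  # a bright run begins
            elif value <= threshold and start is not None:
                row[start:i] = np.sort(row[start:i])  # run ends: sort it
                start = None
        if start is not None:  # a run extends to the row's edge
            row[start:] = np.sort(row[start:])
    return out
```

Run over a star field, this smears each bright region into an ordered gradient while the background stays put, which is roughly the scratched, stuttering look the bot produces.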

[Image: Via @CrookedCosmos].

While the results are frequently quite gorgeous, suggesting some sort of strange, machine-filtered view of the cosmos, the irony is that, in many ways, @CrookedCosmos is simply returning to an earlier state in the data.

After all, so-called “images” of exotic celestial phenomena often come to Earth not in the form of polished, full-color imagery, ready for framing, but as low-res numerical sets that require often quite drastic cosmetic manipulation. Only then, after extensive processing, do they become legible—or, we might say, art-historically recognizable as “photography.”

Consider, for example, what the data really look like when astronomers discover an exoplanet: an almost Cubist level of abstraction, constructed from rough areas of light and shadow, has to be dramatically cleaned up to yield any evidence that a “planet” might really be depicted. Prior to that act of visual interpretation, these alien worlds “only show up in data as tiny blips.”
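Those “tiny blips” are, at their simplest, small dips in a star's measured brightness as a planet transits in front of it. A toy sketch of what “peering into the data” can mean (the function name and threshold are illustrative only; real pipelines use far more careful statistics, such as Box Least Squares fits):

```python
import numpy as np

def find_transit_blips(flux, n_sigma=3.0):
    """Return indices where brightness drops well below the star's
    baseline -- candidate transit 'blips' in a photometric series."""
    baseline = np.median(flux)  # the star's typical brightness
    scatter = np.std(flux)      # noise level of the measurements
    return np.where(flux < baseline - n_sigma * scatter)[0]
```

A real search would then fold the light curve on candidate orbital periods to check that the dips repeat, which is exactly the kind of work done at a keyboard, not an eyepiece.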

In fact, it seems somewhat justifiable to say that exoplanets are not discovered by astronomers at all; they are discovered by computer scientists peering deep into data, not into space.

[Image: Via @CrookedCosmos].

Deliberately or not, then, @CrookedCosmos seems to take us back one step, to when the data are still incompletely sorted. In producing artistically manipulated images, it implies a more accurate glimpse of how machines truly see.

(Spotted via Martin Isaac. Earlier on BLDGBLOG: “We don’t have an algorithm for this.”)

Alien Geology, Dreamed By Machines

[Image: Synthetic volcanoes modeled by Jeff Clune, from “Plug & Play Generative Networks,” via Nature].

Various teams of astronomers have been using “deep-learning neural networks” to generate realistic images of hypothetical stars and galaxies—but their work also implies that these same tools could work to model the surfaces of unknown planets. Alien geology as dreamed by machines.

The Square Kilometer Array in South Africa, for example, “will produce such vast amounts of data that its images will need to be compressed into low-noise but patchy data.” Turning those compressed, patchy data back into readable imagery is where artificial intelligence comes in: “Generative AI models will help to reconstruct and fill in blank parts of those data, producing the images of the sky that astronomers will examine.”
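As a crude illustration of that “fill in blank parts” step, here is a toy stand-in (not a generative model, just iterative neighbor averaging; the function name and parameters are my own) that reconstructs missing pixels from the surrounding data:

```python
import numpy as np

def fill_blanks(image, mask, iterations=50):
    """Fill missing pixels (mask == True) by repeatedly averaging
    their four neighbors -- a toy stand-in for the generative models
    that will reconstruct patchy sky data."""
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()  # start missing pixels at the global mean
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]  # overwrite only the missing pixels
    return out
```

The point of the sketch is the asymmetry it makes visible: the known pixels are measurements, while the filled ones are inferences, plausible values the algorithm believes should be there.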

The results, in other words, are not photographs; they are computer-generated models nonetheless considered scientifically valid for their potential insights into how regions of space are structured.

What interests me about this, though, is the fact that one of the scientists involved, Jeff Clune, uses these same algorithmic processes to generate believable imagery of terrestrial landscape features, such as volcanoes. These could then be used to model the topography of other planets, producing informed visual guesstimates of mountain ranges, ancient ocean basins, vast plains, valleys, even landscape features we might not yet have words to describe.

The notion that we would thus be seeing what AI thinks other worlds should look like—that, to view this in terms of art history, we are looking at the projective landscape paintings of machine intelligence—is a haunting one, as if discovering images of alien worlds in the daydreams of desktop computers.

(Spotted via Sean Lally; vaguely related, “We don’t have an algorithm for this”).

The Totality That Remains Invisible

[Image: Alice Aycock, “Project for Elevation with Obstructed Sight Lines” (1972)].

A few years ago, my wife and I went out to hike Breakneck Ridge when there was still a bunch of snow on the ground. It’s not, in and of itself, a hugely challenging hike, but, ill-prepared as we were for the slippery terrain, including a short opening scramble up snow-covered rocks, we found ourselves looking forward to the final vertical stretch before we could loop back down to the road.

What was interesting, however, was that, from our point of view, each hill appeared to be the final one—until we got to the top of it and saw another one waiting there. Then it happened all over again: what appeared to be the final hill was actually just obstructing our view of the next one, and the next one, and the next one, and, next thing we knew, there were something like seven or eight different individual upward hikes, each hidden from view by the one leading up to it.

In 1972, earthworks artist Alice Aycock proposed a new, never-built work called “Project for Elevation with Obstructed Sight Lines.” It was part of a larger group, Aycock’s Six Semi-Architectural Projects, exhibited in 1973.

“Elevation with Obstructed Sight Lines” was meant to be a sculpted mound of earth, shaped for its optical effects.

[Image: Alice Aycock, “Project for Elevation with Obstructed Sight Lines” (1972), courtesy White Columns].

“Only one side of the resulting structure can be climbed,” Aycock wrote in her brief instructions for realizing the conceptual project. “All other side slopes are steep enough to deter climbing. The elevation of each successive climbing slope is determined by the sight lines of a 6 ft. observer so that only as the observer completes the ascent of a given slope does the next slope become visible.”

The piece obviously lends itself quite well to Kafkaesque metaphors—this structure that deliberately hides itself from view, never once perceptible in its totality but, instead, always revealing more of itself the further you go.

However, it also interestingly weds conceptual land art with hiking—that is, with embodied outdoor athleticism, rather than detached aesthetic contemplation—implying that, perhaps, trail design is an under-appreciated venue for potential conceptual art projects, where a terrain’s symbolic power only becomes clear to those engaged with hiking it.

(Aycock’s project spotted via Ends of the Earth: Land Art to 1974).

The Architecture of the Overlap

[Image: Screen grab from Sir John Soane’s Museum].

One of my favorite museums, Sir John Soane’s Museum in London, has teamed up with ScanLAB Projects for a new, 3D introduction to the Soane’s collections.

[Image: Screen grab from Sir John Soane’s Museum].

“We are using the latest in 3D technology,” the Museum explains, “to scan and digitize a wide selection of Museum rooms and objects—including Soane’s Model Room, and the ancient Egyptian sarcophagus of King Seti I.”

The opening animations alone—pulling viewers straight into the facade of the building, like a submarine passing impossibly through a luminous reef—are well worth the click.

[Image: Screen grab from Sir John Soane’s Museum].

The museum’s interior walls become translucent screens through which the rest of Soane’s home is visible. Rooms shimmer beneath other rooms, with even deeper chambers visible behind them, golden, hive-like, lit from within. Like a camera built to capture only where things overlap.

In fact, I could watch entire feature-length films shot this way: cutting through walls, dissecting cities, forming a great narrative clockwork of action ticking away in shining blocks of space. As if the future of cinema is already here; it’s just hidden—for now—in the guise of avant-garde architectural representation.

[Image: Screen grab from Sir John Soane’s Museum].

ScanLAB’s work—such as in Rome, beneath the streets of London, or in strange new forms of portraiture—continues to have the remarkable effect of revealing every architectural space as actually existing in a state more like a cobweb.

Hallways become bridges crossing the black vacuum of space; individual rooms and galleries become unreal fogs of ornament and detail, hanging in a context of nothing.

It thus seems a perfect fit for a place as bewildering and over-stuffed as the Soane Museum, that coiling maze of archaeological artifacts and art historical cross-references, connected to itself through narrow stairways and convex mirrors.

Of course, this also raises the question of how architecture itself could be redesigned to maximize the effects of this particular mode of visualization. What materials, what sequences, what placements of doors and walls would lend themselves particularly well to three-dimensional laser scanning?

The new site also includes high-res, downloadable images of the artifacts themselves—

[Images: The sarcophagus of King Seti I; courtesy Sir John Soane’s Museum].

—including Seti I’s sarcophagus, as seen above.

Click through to the Soane Museum for more.

(Elsewhere: The Dream Life of Driverless Cars).

Perspectival Objects

[Image: A perspectival representation of the “ideal city,” artist unknown].

There’s an interesting throwaway line in The Verge‘s write-up of yesterday’s Amazon phone launch, where blogger David Pierce remarks that the much-hyped public unveiling of Amazon’s so-called Fire Phone was “oddly focused on art history and perspective.”

As another post at the site points out, “Amazon CEO Jeff Bezos likened it to the move from flat artwork to artwork with geometric perspective which began in the 14th century.”

These are passing comments, sure, and, from Amazon’s side, it’s more marketing hype than anything like rigorous phenomenological theorizing. Yet there’s something strangely compelling in the idea that a seemingly gratuitous new consumer product—just another smartphone—might actually owe its allegiance to a different technical lineage, one less connected to the telecommunications industry and more to the world of architectural representation.

[Image: Jeff Bezos as perspectival historian. Courtesy of The Verge].

It would be a smartphone that takes us back to, say, Albrecht Dürer and his gridded drawing machines, making the Fire Phone a kind of perspectival object that deserves a place, however weird, in architectural history. Erwin Panofsky, we might say, would have used a Fire Phone—or at least he would have written a blog post about it.

In this context, the amazing image of billionaire Jeff Bezos standing on stage, giving a kind of off-the-cuff history of perspectival rendering, surely belongs in future works of architectural history. Smiling and schoolteacher-like, Bezos gestures in front of an infinite grid ghosted in over this seminal work of urban scenography, in one moment aiming to fit his product within a very particular, highly Western tradition of representing the built environment.

[Image: Courtesy of The Verge].

The launch of the Fire Phone did indeed give perspectival representation its due, showing how a three-dimensionally or relationally accurate perception of geometric space can change quite dramatically with only a small move of the viewer’s own head.

The phone’s “dynamic perspective,” engineered to correct for this, seems a little rickety at best, but it is meant as a way to account for otherwise inconsequential movements of the viewer through the landscape, whether it’s a crowded city street or the vast interiors of a hotel. To do so requires an almost comical amount of technical hand-waving. From The Verge:

The key to making dynamic perspective work is knowing exactly where the user’s head is at all times, in real time, many times per second, Bezos said. It’s something that the company has been working on for four years, and [the] best way to do it is with computer vision, he went on to note. The single, standard front-facing camera wasn’t sufficient because its field of view was too narrow—so Amazon included four additional cameras with a much wider field of view to continuously capture a user’s head. At the end of the day, it features four specialized front-facing cameras in addition to the standard front-facing camera found near the earpiece, two of which can be used in case the other cameras were covered; it uses the best two at any given time. Lastly, Amazon included infrared lights in each camera to allow the phone to work in the dark.
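Stripped of the hand-waving, the selection logic Bezos describes—“it uses the best two at any given time”—reduces to something like the following sketch. Everything here (the names, the data shapes, the simple averaging used as fusion) is my own invention; Amazon’s actual tracker triangulates properly from calibrated camera geometry:

```python
from dataclasses import dataclass

@dataclass
class CameraReading:
    position: tuple    # hypothetical (x, y, z) head estimate, phone frame
    confidence: float  # 0.0 means the camera is covered or sees no face

def fuse_head_position(readings):
    """Pick the two most confident cameras and average their head
    estimates; with fewer than two usable views, give up."""
    usable = [r for r in readings if r.confidence > 0]
    if len(usable) < 2:
        return None
    best = sorted(usable, key=lambda r: r.confidence, reverse=True)[:2]
    return tuple(sum(r.position[i] for r in best) / 2 for i in range(3))
```

Even this toy version makes the design constraint legible: the redundancy of four cameras exists precisely so that a thumb over one lens does not break the perspectival illusion.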

Five hundred years ago, we’d instead be reading about some fabulous new system of mirrors, lenses, prisms, and strings, all tied back to or operated by way of complexly engineered works of geared furniture: unfolding tables and adjustable chairs, with operable flaps and windows.

[Image: One of several perspectival objects—contraptions for producing spatially accurate drawings—by Albrecht Dürer].

These precursors of the Fire Phone, after seemingly endless acts of fine-tuning, would then, and only then, allow their users to see the scene before them with three-dimensional accuracy.

Now, replace those prisms and mirrors with multiple forward-facing cameras and infrared sensors, and market the resulting object to billions of potential users in front of gridded scenes of Western urbanism, and you’ve got the strange moment that happened yesterday, where a smartphone aimed to collapse all of Western art history into a single technical artifact, a perspectival object many of us will soon be carrying in our bags and pockets.

[Image: Another “ideal city,” artist unknown].

More interestingly, though, with its odd focus “on art history and perspective,” Amazon’s event raises the question of how electronic mediation of the built environment might be affecting how our cities are designed in the first place—how we see buildings, streets, and cities through the dynamic lens of automatic perspective correction and other visual algorithms.

Put another way, is there a type of architecture—Classical, Romanesque—particularly well-suited for perspectival objects like the Fire Phone, and, conversely, are there types of built space that throw these devices off altogether? Further, could artificial environments that exceed the rendering capacity of smartphones and other digital cameras be deliberately designed—and, if so, what would they “look like” to those sensors and objects?

Recall that, at one point in his demonstration, Bezos explained how Amazon’s new interface “uses different layers to hide and show information on the map like Yelp reviews,” effectively tagging works of architecture with digital metadata in a kind of Augmented Reality Lite.

But what this suggests, together with Bezos’s use of “ideal city” imagery, is that smartphone urbanism will have its own peculiar stylistic needs. Perhaps, if visually defined, that will mean that phones will require cities to be gridded and legible, with clear spatial differentiation between buildings and objects in order to function most accurately—in order to line up with the clouds of virtual tags we will soon be placing all over the structures around us. Perhaps, if more GPS-defined, that will mean overlapping buildings and spaces are just fine, but they nonetheless must allow unblocked access to satellite signals above so that things don’t get confused down at street level—a kind of celestial perspectivism where, from the phone’s point of view, the roof is the new facade, the actual “front” of the building through which vital navigational signals must travel.

Either way, the possibility that there is a particular type of space, or a particular type of urbanism, most suited to the perspectival needs of new smartphones is totally fascinating. Perhaps in retrospect, this photograph of Jeff Bezos, grinning at the world in front of a gigantic image of Western perspective, will become a canonical architectural image of where digital objects and urban design intersect.