[Image: The infrastructure of bullet time].
A digital image-processing system under development since 2007 will allow photographers “to artificially create photos taken from a perspective where there was no photographer.” It uses “a computer-vision technique called view synthesis to combine two or more photographs to create another very realistic-looking one that looks like it was taken from an arbitrary viewpoint,” as New Scientist explains.
One expert quoted refers to this as “anonymizing the photographer.”
The images can come from more than one source: what matters is that they are taken at roughly the same time, of a reasonably static scene, from different viewing angles. Software then examines the pictures and generates a 3D “depth map” of the scene. Next, the user chooses an arbitrary viewing angle for a photo they want to post online.
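To make the depth-mapping step a little more concrete, here is a minimal sketch of how two overlapping photographs can yield a depth map, using OpenCV’s stereo matcher. This is only an illustration of the general technique, not the researchers’ actual system; the file names, the rectification assumption, and the matcher parameters are all placeholders.

```python
# Estimate per-pixel depth from two photos of the same (mostly static)
# scene taken from slightly different angles. Assumes the pair has been
# rectified so that corresponding points lie on the same image row.
import cv2

left = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Semi-global matching finds how far each pixel shifts between views;
# a larger shift (disparity) means the point is closer to the cameras.
stereo = cv2.StereoSGBM_create(minDisparity=0,
                               numDisparities=128,  # multiple of 16
                               blockSize=5)
disparity = stereo.compute(left, right).astype("float32") / 16.0

# Depth is inversely proportional to disparity
# (depth = focal_length * baseline / disparity, given calibration).
depth_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("depth_map.png", depth_vis.astype("uint8"))
```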
The photo then goes through a “dewarping” stage, in which straight edges such as walls and kerbs are corrected for the new point of view, and a “hole filling” stage, in which nearby pixels are copied in to cover the gaps left where elements of the original scene were obscured.
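The same caveat applies to a sketch of those last two stages: below, a 3×3 homography stands in for the “dewarping” reprojection (it keeps straight edges straight under the new angle), and OpenCV’s inpainting stands in for “hole filling.” The homography values and file names are invented for illustration.

```python
# "Dewarping": reproject the image toward a new viewpoint with a
# homography, so straight edges (walls, kerbs) stay straight.
# "Hole filling": inpaint the gaps the warp leaves behind.
import cv2
import numpy as np

src = cv2.imread("source_view.jpg")
h, w = src.shape[:2]

# An illustrative homography with a small perspective term.
H = np.array([[1.0, 0.15, -40.0],
              [0.0, 1.00,   0.0],
              [0.0, 2e-4,   1.0]])
dewarped = cv2.warpPerspective(src, H, (w, h))

# Pixels no source image could supply come out black after the warp;
# inpainting propagates nearby pixels into those gaps.
mask = np.all(dewarped == 0, axis=2).astype(np.uint8) * 255
filled = cv2.inpaint(dewarped, mask, inpaintRadius=3,
                     flags=cv2.INPAINT_TELEA)
cv2.imwrite("synthetic_view.png", filled)
```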
While the article rightly emphasizes the political implications of this—writing that the technology “could help protestors in repressive regimes escape arrest—and give journalists ‘plausible deniability’ over the provenance of leaked photos”—there are, of course, other possibilities inherent in the technique that seem worth exploring. These include virtualizing photographs taken of a landscape, building, person, or city, producing views, angles, and perspectives never actually seen by human beings. This would be like something out of the work of Piranesi—specifically as interpreted by Manfredo Tafuri in The Sphere and the Labyrinth—in which impossible scenes overlap to produce a single, far from comprehensible spatial reality.
Perhaps some editor somewhere could send Iwan Baan and Fernando Guerra out to shoot a new building together, then “hole fill” their images to create a virtual, third photographer. Every image published in the resulting article would thus document a viewpoint that neither photographer ever experienced or saw. It is the building as seen by no one, virtually extruded from otherwise real-world photographs.
To throw another gratuitous theory reference out there, it’s like Foucault’s analysis of “Las Meninas” in The Order of Things, where we read that the painter may or may not have depicted the very vantage point from which his painting was supposedly painted. To translate Foucault’s hypothesis into New Scientist’s terms, this would be “location privacy,” that is, “a way of disguising the photographer’s viewpoint.”
[Image: “Las Meninas” by Diego Velázquez].
Or imagine, for instance, an entire film assembled from “dewarped” images—intermediary, falsified frames precipitated out from between the cameras—creating an uncanny motion picture of interstitial imagery. Virtual films between films; films recombined to create a third cinema of gaps; virtual still images taken from virtual films, overlaid and dewarped to form fourth and fifth and sixth films generationally removed from the original, in an infinite splintering of derivative film stills. We won’t document the world as everyone sees it; we’ll document it from places where no one’s ever been.
(Thanks to Luke Fidler for the tip).
I’ve not read Foucault’s text on Las Meninas so I can’t say how relevant the following link is, but Román Cortés has made an interesting experimental version of the painting here: CSS 3D Meninas.
Add Lytro cameras for another level of possibilities:
http://lomokev.com/blog/lytro-make-focusing-a-thing-of-the-past/