Speaking of animals being actively incorporated into urban infrastructure, Dutch police are training eagles to hunt drones. “What I find fascinating is that birds can hit the drone in such a way that they don’t get injured by the rotors,” explains a spokesperson for the National Audubon Society. “They seem to be whacking the drone right in the center so they don’t get hit; they have incredible visual acuity and they can probably actually see the rotors.”
Wearable Furniture, Portable Rooms
[Image: Archelis via the Tech Times].
“Japanese researchers have developed a wearable chair called Archelis that can help surgeons when they are performing long surgeries,” the Tech Times explains.
At first glance, Archelis does not look like a chair at all; the wearable chair looks more like a leg brace. The wearer of Archelis will not get the full comfort of sitting in a chair, but the gadget actually wraps around the wearer’s buttocks and legs, providing support that effectively allows them to sit down wherever and whenever needed.
The developers of Archelis suggest that even though the chair is targeted for surgeons performing long surgeries, it can be used by anyone in fields that require a lot of standing. Moreover, the chair may also assist people who have to sit briefly after walking for a while.
Your leg braces, in other words, convert into furniture, as seen in the video below.
While this is already interesting, of course, the artistic and even architectural implications are pretty fascinating, with clear applications outside the realm of surgery. Crowds as coordinated super-furniture. A choreography of linked braces forming structural chains and portable rooms.
Give it a few years—and then why design and build certain types of furniture at all, when people can simply wear them? What would this do to how architects frame space?
Until that day, read more at the Tech Times.
(Spotted via @curiousoctopus).
In the Garden of 3D Printers
[Image: Unrelated image of incredible floral shapes 3D-printed by Jessica Rosenkrantz and Jesse Louis-Rosenberg (via)].
A story published earlier this year explained how pollinating insects could be studied by way of 3D-printed flowers.
The actual target of the study was the hawkmoth, and four types of flowers were designed and produced to help understand the geometry of moth/flower interactions, including how “the hawkmoth responded to each of the flower shapes” and “how the flower shape affected the ability of the moth to use its proboscis (the long tube it uses as a mouth).”
Of course, a very similar experiment could have been done using handmade model flowers—not 3D printers—and thus could also have been performed with little fanfare generations ago.
But the idea that a surrogate landscape can now be so accurately designed and manufactured by printheads that it can be put into service specifically for the purpose of cross-species dissimulation—that is, tricking species other than humans into thinking that these flowers are part of a natural ecosystem—is extraordinary.
[Image: An also unrelated project called “Blossom,” by Richard Clarkson].
Many, many years ago, I was sitting in a park in Providence, Rhode Island, one afternoon reading a copy of Germinal Life by Keith Ansell Pearson. The book had a large printed flower on its front cover, wrapping over onto the book’s spine.
Incredibly, at one point in the afternoon a small bee seemed to become confused by the image, as the bee kept returning over and over again to land on the spine and crawl around there—which, of course, might have had absolutely nothing to do with the image of a printed flower, but, considering the subject matter of Ansell Pearson’s book, this was not without significant irony.
It was as if the book itself had become a participant in, or even the mediator of, a temporary human/bee ecosystem, an indirect assemblage created by this image, this surrogate flower.
In any case, the image of little gardens or entire, wild landscapes of 3D-printed flowers so detailed they appear to be organic brought me to look a little further into the work of Jessica Rosenkrantz and Jesse Louis-Rosenberg, a few of whose pieces you can see in the opening image at the top of this post.
Their 3D-printed floral and coral forms are astonishing.
[Image: “hyphae 3D 1” by Jessica Rosenkrantz and Jesse Louis-Rosenberg].
Rosenkrantz’s Flickr page gives as clear an indication as anything of what their formal interests and influences are: photos of coral, lichen, moss, mushrooms, and wildflowers pop up around shots of 3D-printed models.
They sometimes blend in so well, they appear to be living specimens.
[Image: Spot the model; from Jessica Rosenkrantz’s Flickr page].
There is an attention to accuracy and detail in each piece that is obvious at first glance, but that is also made even more clear when you see the sorts of growth-studies they perform to understand how these sorts of systems branch and expand through space.
[Image: “Floraform—Splitting Point Growth” by Jessica Rosenkrantz and Jesse Louis-Rosenberg].
The organism as space-filling device.
And the detail itself is jaw-dropping. The following shot shows how crazy-ornate these things can get.
[Image: “Hyphae spiral” by Jessica Rosenkrantz and Jesse Louis-Rosenberg].
Anyway, while this work is not, of course, related to the hawkmoth study with which this post began, it’s nonetheless pretty easy to get excited about the scientific and aesthetic possibilities opened up by some entirely speculative future collaboration between these sorts of 3D-printed models and laboratory-based ecological research.
One day, you receive a mysterious invitation to visit a small glass atrium constructed atop an old warehouse somewhere on the outskirts of New York City. You arrive, baffled as to what it is you’re meant to see, when you notice, even from a great distance, that the room is alive with small colorful shapes, flickering around what appears to be a field of delicate flowers. As you approach the atrium, someone opens a door for you and you step inside, silent, slightly stunned, noticing that there is life everywhere: there are lichens, orchids, creeping vines, and wildflowers, even cacti and what appears to be a coral reef somehow inexplicably growing on dry land.
But the room does not smell like a garden; the air instead is charged with a light perfume of adhesives.
[Image: “Hyphae crispata #1 (detail)” by Jessica Rosenkrantz and Jesse Louis-Rosenberg].
Everything you see has been 3D-printed, which comes as a shock as you begin to see tiny insects flittering from flowerhead to flowerhead, buzzing through laceworks of creeping vines and moss—until you look even more carefully and realize that they, too, have been 3D-printed, that everything in this beautiful, technicolor room is artificial, and that the person standing quietly at the other end amidst a tangle of replicant vegetation is not a gardener at all but a geometrician, watching for your reaction to this most recent work.
Cereal Bags of the Stratosphere
[Image: One of Google’s Loon balloons; screen grab from this video].
“The lab is 250 feet wide, 200 feet deep, and 70 feet tall. It’s a massive space where Google’s scientists can simulate the negative-60 degrees Celsius temperature of the stratosphere.” Alexis Madrigal on Google’s Project Loon balloons.
The future of the internet is cereal bag technology in the sky.
Electronic Plantlife
[Image: A rose-circuit, courtesy Linköping University].
In a newly published paper called “Electronic plants,” researchers from Linköping University in Sweden describe the process by which they were able to “manufacture” what they call “analog and digital organic electronic circuits and devices” inside living plants.
The plants not only conducted electrical signals, but, as Science News points out, the team also “induced roses leaves to light up and change color.”
Indeed, in their way of thinking, plants have been electronic gadgets all along: “The roots, stems, leaves, and vascular circuitry of higher plants are responsible for conveying the chemical signals that regulate growth and functions. From a certain perspective, these features are analogous to the contacts, interconnections, devices, and wires of discrete and integrated electronic circuits.”
[Image: Bioluminescent foxfire mushrooms (used purely for illustrative effect), via Wikipedia].
Here’s the process in a nutshell:
The idea of putting electronics directly into trees for the paper industry originated in the 1990s while the LOE team at Linköping University was researching printed electronics on paper. Early efforts to introduce electronics in plants were attempted by Assistant Professor Daniel Simon, leader of the LOE’s bioelectronics team, and Professor Xavier Crispin, leader of the LOE’s solid-state device team, but a lack of funding from skeptical investors halted these projects.
Thanks to independent research money from the Knut and Alice Wallenberg Foundation in 2012, Professor Berggren was able to assemble a team of researchers to reboot the project. The team tried many attempts of introducing conductive polymers through rose stems. Only one polymer, called PEDOT-S, synthesized by Dr. Roger Gabrielsson, successfully assembled itself inside the xylem channels as conducting wires, while still allowing the transport of water and nutrients. Dr. Eleni Stavrinidou used the material to create long (10 cm) wires in the xylem channels of the rose. By combining the wires with the electrolyte that surrounds these channels she was able to create an electrochemical transistor, a transistor that converts ionic signals to electronic output. Using the xylem transistors she also demonstrated digital logic gate function.
Headily enough, using plantlife as a logic gate also implies a future computational use of vegetation: living supercomputers producing their own circuits inside dual-use stems.
Previously, we have looked at the use of electricity to stimulate plants into producing certain chemicals, how the action of plant roots growing through soil could be tapped as a future source of power, and how soil bacteria could be wired up into huge, living battery fields—in fact, we also looked at a tongue-in-cheek design project for “growing electrical circuitry inside the trunks of living trees“—but this actually turns vegetation into a form of living circuitry.
While Archigram’s “Logplug” project is an obvious reference point here within the world of architectural design, it seems more interesting to consider instead the future landscape design implications of technological advances such as this—how “electronic plants” might affect everything from forestry to home gardening, energy production and distribution infrastructure to a city’s lighting grid.
[Image: The “Logplug” by Archigram, from Archigram].
We looked at this latter possibility several years ago, in fact, in a post from 2009 called “The Bioluminescent Metropolis,” where the first comment now seems both prescient and somewhat sad given later developments.
But the possibilities here go beyond mere bioluminescence, into someday fully functioning electronic vegetation.
Plants could be used as interactive displays—recall the roses “induced… to light up and change color”—as well as given larger conductive roles in a region’s electrical grid. Imagine storing excess electricity from a solar power plant inside shining rose gardens, or the ability to bypass fallen power lines after a thunderstorm by re-routing a town’s electrical supply through the landscape itself, living corridors wired from within by self-assembling circuits and transistors.
And, of course, that’s all in addition to the possibility of cultivating plants specifically for their use as manufacturing systems for organic electronics—for example, cracking them open not to reveal nuts, seeds, or other consumable protein, but the flexible circuits of living computer networks. BioRAM.
There are obvious reasons to hesitate before realizing such a vision—that is, before charging headlong into a future world where forests are treated merely as back-up lighting plans for overcrowded cities and plants of every kind are seen as nothing but wildlife-disrupting sources of light cultivated for the throwaway value of human aesthetic pleasure.
Nonetheless, thinking through the design possibilities in addition to the ethical risks not only now seems very necessary, but might also lead someplace truly extraordinary—or someplace otherworldly, we might say with no need for justification.
For now, check out the original research paper over at Science Advances.
Extract
[Image: By Spiros Hadjidjanos, via Contemporary Art Daily].
Artist Spiros Hadjidjanos has been using an interesting technique in his recent work, where he scans old photographs, turns their color or shading intensity into depth information, and then 3D-prints objects extracted from this. The effect is like pulling objects out of wormholes.
[Image: By Spiros Hadjidjanos, via Contemporary Art Daily].
His experiments appear to have begun with a project focused specifically on Karl Blossfeldt’s classic book Urformen der Kunst; there, Blossfeldt published beautifully realized botanical photographs that fell somewhere between scientific taxonomy and human portraiture.
[Image: By Spiros Hadjidjanos, via Stylemag].
As Hi-Fructose explained earlier this summer, Hadjidjanos’s approach was to scan Blossfeldt’s images, then, “using complex information algorithms to add depth, [they] were printed as objects composed of hundreds of sharp needle-like aluminum-nylon points. Despite their space-age methods, the plants appear fossilized. Each node and vein is perfectly preserved for posterity.”
[Image: Via Spiros Hadjidjanos’s Instagram feed].
The results are pretty awesome—but I was especially drawn to this when I saw, on Hadjidjanos’s Instagram feed, that he had started to apply this to architectural motifs.
2D architectural images—scanned and translated into operable depth information—can then be realized as blurred and imperfect 3D objects, spectral secondary reproductions that are almost like digitally compressed, 3D versions of the original photograph.
[Image: Via Spiros Hadjidjanos’s Instagram feed].
It’s a deliberately lo-fi, representationally imperfect way of bringing architectural fragments back to life, as if unpeeling partial buildings from the crumbling pages of a library, a digital wizardry of extracting space from surface.
[Image: Via Spiros Hadjidjanos’s Instagram feed].
There are many, many interesting things to discuss here—including three-dimensional data loss, object translations, and emerging aesthetics unique to scanning technology—but what particularly stands out to me is the implication that this is, in effect, photography pursued by other means.
In other words, through a combination of digital scanning and 3D-printing, these strange metallized nylon hybrids, depicting plinths, entablatures, finials, and other spatial details, are just a kind of depth photography, object-photographs that, when hung on a wall, become functionally indistinct from architecture.
Ghosting
[Image: From Pierre Huyghe, “Les grandes ensembles” (2001)].
A short news item in New Scientist this week describes the work of University of Michigan engineers who have developed a way to, in effect, synchronize architectural structures at a distance. They refer to this as “ghosting”:
When someone turns the lights on in one kitchen, they automatically switch on in the connected house. Sounds are picked up and relayed, too. Engineers at the University of Michigan successfully linked an apartment in Michigan with one in Maryland. The work was presented at the IoT-App conference in Seoul, South Korea, last week.
I haven’t found any more details about the project—including why, exactly, one would want to do this, other than perhaps to create some strange new electrical variation on “The Picture of Dorian Gray,” where a secret reference-apartment is kept burning away somewhere in the American night—but no doubt more info will come to light soon.*
*Update: Such as right now: here is the original paper. There, we read the following:
Ghosting synchronizes audio and lighting between two homes on a room-by-room basis. Microphones in each room transmit audio to the corresponding room in the other home, unifying the ambient sound domains of the two homes. For example, a user cooking in their kitchen transmits sounds out of speakers in the other user’s own kitchen. The lighting context in corresponding rooms is also synchronized. A light toggled in one house toggles the lights in the other house in real time. We claim that this system allows for casual interactions that feel natural and intimate because they share context and require less social effort than a teleconference or phone call.
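The paired-room idea the paper describes is simple enough to sketch in a few lines. The `Home` class below is a hypothetical toy model, not the researchers’ actual system: toggling a light in one home mirrors the state into the corresponding room of its linked twin.

```python
# A minimal, hypothetical sketch of Ghosting-style room pairing.
# Toggling a light in one home mirrors it in the matching room of the other.

class Home:
    def __init__(self, name, rooms):
        self.name = name
        self.rooms = {room: False for room in rooms}  # room -> light on/off
        self.peer = None  # the linked home, once paired

    def link(self, other):
        self.peer = other
        other.peer = self

    def toggle(self, room):
        """Toggle a light locally, then mirror the new state to the peer."""
        self.rooms[room] = not self.rooms[room]
        if self.peer is not None:
            # Copy state directly rather than re-toggling, so the two
            # homes cannot drift out of sync.
            self.peer.rooms[room] = self.rooms[room]

michigan = Home("Michigan", ["kitchen", "bedroom"])
maryland = Home("Maryland", ["kitchen", "bedroom"])
michigan.link(maryland)

michigan.toggle("kitchen")
print(maryland.rooms["kitchen"])  # True: the paired kitchen lit up too
```

The real system would of course push these state changes over the network, with microphones and speakers doing the same for ambient sound.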
Thanks to Nick Arvin, both for finding the paper and for highlighting that particular quotation.
Liquid Quarries and Reefs On Demand
[Image: Micromotors at work, via UCSD/ScienceDaily].
Tiny machines that can extract carbon dioxide from water might someday help deacidify the oceans, according to a press release put out last week by UCSD.
Described as “micromotors,” the devices “are essentially six-micrometer-long tubes that help rapidly convert carbon dioxide into calcium carbonate, a solid mineral found in eggshells, the shells of various marine organisms, calcium supplements and cement.”
While these are still just prototypes, and are far from ready for actual use in the wild, they appear to have proven remarkably effective in the lab:
In their experiments, nanoengineers demonstrated that the micromotors rapidly decarbonated water solutions that were saturated with carbon dioxide. Within five minutes, the micromotors removed 90 percent of the carbon dioxide from a solution of deionized water. The micromotors were just as effective in a sea water solution and removed 88 percent of the carbon dioxide in the same timeframe.
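Those figures imply a striking removal rate. Assuming simple first-order decay—an assumption of mine, not a kinetic model from the paper—the reported percentages translate into a rate constant like so:

```python
import math

# Back-of-envelope: if removal behaves like first-order decay,
# C(t) = C0 * exp(-k * t), then the reported figures (90% removed
# in 5 minutes) pin down the rate constant k. This is an illustrative
# assumption, not the paper's kinetic model.

def rate_constant(fraction_removed, minutes):
    """Solve C/C0 = exp(-k * t) for k, given the fraction removed after t."""
    return -math.log(1.0 - fraction_removed) / minutes

k_deionized = rate_constant(0.90, 5)  # deionized water
k_seawater = rate_constant(0.88, 5)   # seawater

print(f"{k_deionized:.3f} per minute")  # ~0.461
print(f"{k_seawater:.3f} per minute")   # ~0.424
```

In other words, the micromotors clear nearly half the remaining dissolved CO2 every minute, in fresh and salt water alike.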
The implications of this for marine life are obviously pretty huge—after all, overly acidic waters mean that shells are difficult, if not impossible, to form—but the same devices could also be hugely useful in the creation of marine limestone.
As UCSD scientists explain, the micromotors would “rapidly zoom around in water, remove carbon dioxide and convert it into a usable solid form.” A cloud of these machines could thus essentially precipitate the basic ingredients of future rocks from open water.
[Image: A Maltese limestone quarry, via Wikipedia].
At least two possibilities seem worth mentioning.
One is the creation of a kind of liquid quarry out of which solid rock could be extracted—a square mile or two of seawater where a slurry of calcium carbonate would snow down continuously, 24 hours a day, from the endless churning of invisible machines. Screen off a region of the coast somewhere, so that no fish can be harmed, then trawl those hazy waters for the raw materials of future rock, later to be cut, stacked, and sold for dry-land construction.
The other would be the possibility of, in effect, the large-scale depositional printing of new artificial reefs. Set loose these micromotors in what would appear to be a large, building-sized teabag that you slowly drag through the ocean waters, and new underwater landforms slowly accrete in its wake. Give it weeks, months, years, and you’ve effectively 3D-printed a series of new reefs, perfect for coastal protection, a new marine sanctuary, or even just a tourist site.
In any case, read more about the actual process over at UCSD or ScienceDaily.
Subterranean Lightning Brigade
[Image: “Riggers install a lightning rod” atop the Empire State Building “in preparation for an investigation into lightning by scientists of the General Electric Company” (1947), via the Library of Congress].
This is hardly news, but I wanted to post about the use of artificial lightning as a navigational aid for subterranean military operations.
This was reported at the time as a project whose goal was “to let troops navigate about inside huge underground enemy tunnel complexes by measuring energy pulses given off by lightning bolts,” where those lightning bolts could potentially be generated on-demand by aboveground tactical strike teams.
Such a system would replace the use of GPS—whose signals cannot penetrate into deep subterranean spaces—and it would operate by way of sferics, or radio atmospheric signals generated by electrical activity in the sky.
The proposed underground navigational system—known as “Sferics-Based Underground Geolocation” or S-BUG—would be capable of picking up these signals even from “hundreds of miles away. Receiving signals from lighting strikes in multiple directions, along with minimal information from a surface base station also at a distance, could allow operators to accurately pinpoint their position.” They could thus maneuver underground, even hundreds or thousands of feet below the earth’s surface, in enemy caves or bunkers.
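The geometry behind this is classic trilateration. As a toy 2D illustration—the actual S-BUG signal processing is not public, and the strike data here is invented—if a base station relays each strike’s time and position, a receiver can convert arrival times into ranges and solve for its own location:

```python
import math

# Toy 2D sketch of sferics-style geolocation: each lightning strike's
# position and emission time are known (relayed by a base station); the
# receiver measures arrival times, converts them to ranges, and
# trilaterates. Strike data below is invented for illustration.

C = 300.0  # propagation speed, km per millisecond (speed of light)

def trilaterate(strikes, arrival_times):
    """Solve for (x, y) from three strikes (x_i, y_i, t_emit) and arrivals."""
    # Convert each arrival time into a range from that strike.
    circles = [
        (x, y, C * (t_arr - t_emit))
        for (x, y, t_emit), t_arr in zip(strikes, arrival_times)
    ]
    # Subtracting the first circle's equation from the others yields a
    # linear system in (x, y).
    x1, y1, r1 = circles[0]
    a, b = [], []
    for xi, yi, ri in circles[1:]:
        a.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # Solve the 2x2 system by Cramer's rule.
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    x = (b[0] * a22 - a12 * b[1]) / det
    y = (a11 * b[1] - b[0] * a21) / det
    return x, y

strikes = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (0.0, 100.0, 0.0)]
receiver = (30.0, 40.0)
arrivals = [
    math.hypot(receiver[0] - x, receiver[1] - y) / C + t
    for (x, y, t) in strikes
]
print(trilaterate(strikes, arrivals))  # recovers ~(30.0, 40.0)
```

The real problem is harder—clock offsets, signal propagation through rock, unknown strike positions—but the underlying math is this simple.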
Hundreds of miles is a very wide range, of course—but what if there is no natural lightning in the area?
Enter artificial military storm generators, or the charge of the lightning brigade.
Back in 2009, DARPA also put out a request for proposals as part of something called Project Nimbus. NIMBUS is “a fundamental science program focused on obtaining a comprehensive understanding of the lightning process.” However, it included a specific interest in developing machines for “triggering lightning”:
Experimental Set-up for Triggering Lightning: Bidders should fully describe how they would attempt to trigger lightning and list all potential pieces of equipment necessary to trigger lightning, as well as the equipment necessary to measure and characterize the processes governing lightning initiation, propagation, and attachment.
While it’s easy enough to wax conspiratorial here about future lightning weapons or militarized storm cells—after all, DARPA themselves write that they want to understand “how [lightning] ties into the global charging circuit,” as if “the global charging circuit” is something that could be instrumentalized or controlled—I actually find it more interesting to speculate that generating lightning would be not for offensive purposes at all, but for guiding underground navigation.
[Image: Lightning storm over Boston; via Wikimedia/NOAA].
Something akin to a strobe light begins pulsing atop a small camp of unmarked military vehicles parked far outside a desert city known for its insurgent activities. These flashes gradually lengthen, both temporally and physically, lasting longer and stretching upward into the sky; the clouds above are beginning to thicken, grumbling with quiet rolls of thunder.
Then the lightning strikes begin—but they’re unlike any natural lightning you’ve ever seen. They’re more like pops of static electricity—a pulsing halo or toroidal crown of light centered on the caravan of trucks below—and they seem carefully timed.
To defensive spotters watching them through binoculars in the city, it’s obvious what this means: there must be a team of soldiers underground somewhere, using artificial sferics to navigate. They must be pushing forward relentlessly through the sewers and smuggling tunnels, crawling around the roots of buildings and maneuvering through the mazework of infrastructure that constitutes the city’s underside, locating themselves by way of these rhythmic flashes of false lightning.
Of course, this equipment would eventually be de-militarized and handed down to the civilian sector, in which case you can imagine four friends leaving REI on a Friday afternoon after work with an artificial lightning generator split between them; no larger than a camp stove, it would eventually be set up with their other weekend caving equipment, used to help navigate through deep, stream-slick caves an hour and a half outside town, beneath tall mountains where GPS can’t always be trusted.
Or, perhaps fifty years from now, salvage teams are sent deep into the flooded cities of the eastern seaboard to look for and retrieve valuable industrial equipment. They install an artificial lightning unit on the salt-bleached roof of a crumbling Brooklyn warehouse before heading off in a small armada of marsh boats, looking for entrances to old maintenance facilities whose basement storage rooms might have survived rapid sea-level rise.
Disappearing down into these lost rooms—like explorers of Egyptian tombs—they are guided by bolts of artificial lightning that spark upward above the ruins, reflected by tides.
[Image: Lightning via NOAA].
Or—why not?—perhaps we’ll send a DARPA-funded lightning unit to one of the moons of Jupiter and let it flash and strobe there for as long as it needs. Called Project Miller-Urey, its aim is to catalyze life from the prebiotic, primordial soup of chemistry swirling around there in the Cthulhoid shadow of eternal ice mountains.
Millions and millions of years hence, proto-intelligent lifeforms emerge, never once guessing that they are, in fact, indirect descendants of artificial lightning technology. Their spark is not divine but military, the electrical equipment that sparked their ancestral line long since fallen into oblivion.
In any case, keep your eyes—and cameras—posted for artificial lightning strikes coming to a future military theater near you…
Joyful Rendezvous Upon Pure Ice and Snow
[Image: Snow-making equipment via Wikipedia].
The 2022 Winter Olympics in Beijing are something of a moonshot moment for artificial snow-making technology: the winter games will be held “in a place with no snow.” That’s right: “the 2022 Olympics will rely entirely on artificial snow.”
As a report released by the International Olympic Committee admits, “The Zhangjiakou and Yanqing Zones have minimal annual snowfall and for the Games would rely completely on artificial snow. There would be no opportunity to haul snow from higher elevations for contingency maintenance to the racecourses so a contingency plan would rely on stockpiled man-made snow.”
This gives new meaning to the word snowbank: a stock-piled reserve of artificial landscape effects, an archive of on-demand, readymade topography.
Beijing’s slogan for their Olympic bid? “Joyful Rendezvous upon Pure Ice and Snow.”
[Image: Snow-making equipment via Wikipedia].
Purely in terms of energy infrastructure and freshwater demand—most of the water will be pumped in from existing reservoirs—the 2022 winter games will seemingly be unparalleled in terms of their sheer unsustainability. Even the IOC sees this; from their report:
The Commission considers Beijing 2022 has underestimated the amount of water that would be needed for snowmaking for the Games but believes adequate water for Games needs could be supplied.
In addition, the Commission is of the opinion that Beijing 2022 has overestimated the ability to recapture water used for snowmaking. These factors should be carefully considered in determining the legacy plans for snow venues.
Knowing all this, then, why not be truly radical—why not host the winter games in Florida’s forthcoming “snowball fight arena,” part of “a $309 million resort near Kissimmee that would include 14-story ski and snowboard mountain, an indoor/outdoor skateboard park and a snowball fight arena”?
Why not host them in Manaus?
Interestingly, the IOC also raises the question of the Games’ aesthetics, warning that the venues might not really look like winter.
“Due to the lack of natural snow,” we read, “the ‘look’ of the venue may not be aesthetically pleasing either side of the ski run. However, assuming sufficient snow has been made or stockpiled and that the temperature remains cold, this should not impact the sport during the Games.”
Elsewhere: “There could be no snow outside of the racecourse, especially in Yanqing, impacting the visual perception of the snow sports setting.” This basically means that there will be lots of bare ground, rocks, and gravel lining the virginal white strips of these future ski runs.
[Image: Ski jumping in summer at Chicago’s Soldier Field (1954); via Pruned].
Several years ago, Pruned satirically offered Chicago as a venue for the world’s “first wholly urban Winter Olympics.” With admirable detail, he went into many of the specifics for how Chicago might pull it off, but he also pointed out the potential aesthetic disorientation presented by seeing winter sports in a non-idyllic landscape setting.
“Chicago’s gritty landscape shouldn’t be much of a handicap,” he suggests. Chicago might not “embody a certain sort of nature—rustic mountains, pastoral evergreen forests, a lonely goatherd, etc.,” but the embedded landscape technology of the Winter Games should have left behind that antiquated Romanticism long ago.
As Pruned asks, “have the more traditional Winter Olympic sites not been over the years transformed into high-tech event landscapes, carefully managed and augmented with artificial snow and heavy plows that sculpt the slopes to a pre-programmed set of topographical parameters?”
Seen this way, Beijing’s snowless winter games are just an unsustainable historical trajectory taken to its most obvious next step.
[Image: Making snow for It’s A Wonderful Life, via vintage everyday].
In any case, the 2022 Winter Olympics are shaping up to be something like an Apollo Program for fake snow, an industry that, over the next seven years, seems poised to experience a surge of innovation as the unveiling of this most artificial of Olympic landscapes approaches.
This Is Only A Test
[Image: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].
Photographer Daniel Stier has a new book out, and an accompanying exhibition on display at the kulturreich gallery, called Ways of Knowing.
Stier’s photos depict human subjects immersed in, or even at the mercy of, spatial instrumentation: strange devices conducting experiments that function at the scale of architecture but whose purpose remains unidentified.
[Image: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].
In Stier’s words, the overall series is “a personal project exploring the real world of scientific research. Not the stainless steel surfaces bathed in purple light, but real people in their basements working on selfbuilt contraptions. All shot in state of the art research institutions across Europe and the US, showing experiments with human subjects. Portrayed are the people conducting the experiments—students, doctorands and professors.”
In recent interviews discussing the book, Stier has pointed out what he calls “similarities between artistic and scientific work,” with an emphasis on the craft that goes into designing and executing these devices.
However, there is also a performative or aesthetic aspect to many of these that hints at resonances beyond the world of applied science—a person staring into multicolored constellations painted on the inside of an inverted bowl, for example.
[Image: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].
Ostensibly an ophthalmic device of some kind, it could just as easily be an amateur’s attempt at OpArt.
In a sense, these are not just one-off scientific experiments but spatial prototypes: rigorous attempts at building and establishing a very particular kind of environment—a carefully calibrated and tuned zone of parameters, forces, and influences—then exposing people to those worlds as a means of testing for their effects.
[Image: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].
In any case, here are a few more images to pique your curiosity, but many, many more photos are available in Stier’s book, which just began shipping this month, and, of course, over at Stier’s website.
[Images: From Ways of Knowing by Daniel Stier, on display at the kulturreich gallery].
(Originally spotted via New Scientist).