Computational Ornament

[Image: From “Harnessing Vision For Computation” by Mark Changizi].

A few billion years ago, back in July 2008, Alexis Madrigal blogged about the design of “visual circuitry” for Wired. “A cognitive scientist wants to employ M.C. Escher’s bag of optical tricks to get your eyes to solve logic problems,” Madrigal wrote at the time, referring to the work of Mark Changizi.

Changizi’s idea, as Madrigal explained, was that “human beings can use their brain’s visual-processing abilities to solve LSAT-style logic puzzles, simply by staring at images designed to get their eyes to compute. Because this form of visual processing feels so effortless, such problems might be much easier to solve than their written counterparts.”

[Image: From “Harnessing Vision For Computation” by Mark Changizi].

These visually processed logic puzzles rely on a new form of writing, in effect, one that uses not traditional letters or typography but geometric shapes specifically angled and shaded to create optical illusions; each version of the illusion, so to speak, carries a different meaning. A whole visual grammar can thus be created, Changizi suggests.

You can read Wired—or, of course, Changizi’s own paper, “Harnessing Vision For Computation”—to understand how the system really works, but what interests me here is the possibility that designers could take a visual/computational language such as this and extrapolate a new style of architectural ornament from it.

[Image: From Geometrical Objects: Architecture and the Mathematical Sciences, 1400-1800, edited by Anthony Gerbino].

In other words, you could transform Changizi’s visual circuitry into a system of 3-dimensional architectural details that could be designed to sharpen and stimulate human cognitive abilities. Instead of playing sudoku, you and your elderly relatives could just look at the fronts of buildings and watch as waning daylight changes the shapes and angles of shadows, working out the logical implications.

At 10am, your building’s facade says one thing; at 6pm, because the shadows have shifted—that is, the Changizian circuits are now closing differently—it says something else entirely.

[Image: From Geometrical Objects: Architecture and the Mathematical Sciences, 1400-1800, edited by Anthony Gerbino].

Architecture becomes a passive cognitive environment, a logical stimulant, an object-based grammar meant to keep its inhabitants’ brains more supple.

[Image: From “Harnessing Vision For Computation” by Mark Changizi].

Whether or not this is possible or just hand-wavey bullshit, I’m totally fascinated by the idea that you could use cognitive science to design a new class of architectural ornament—not just geometry for the sake of geometry, or statuary for the sake of historical narratives, but a spur toward cognitive health in the people who gaze upon it.

The Ghost of Cognition Past, or Thinking Like An Algorithm

[Image: Wiring the ENIAC; via Wired]

One of many things I love about writing—that is, engaging in writing as an activity—is how it facilitates a discovery of connections between otherwise unrelated things. Writing reveals and even relies upon analogies, metaphors, and unexpected similarities: there is resonance between a story in the news and a medieval European folktale, say, or between a photo taken in a war-wrecked city and an 18th-century landscape painting. These sorts of relations might remain dormant or unnoticed until writing brings them to the foreground: previously unconnected topics and themes begin to interact, developing meanings not present in those original subjects on their own.

Wildfires burning in the Arctic might bring to mind infernal images from Paradise Lost or even intimations of an unwritten J.G. Ballard novel, pushing a simple tale of natural disaster to new symbolic heights, something mythic and larger than the story at hand. Learning that U.S. Naval researchers on the Gulf Coast have used the marine slime of a “300-million-year-old creature” to develop 21st-century body armor might conjure images from classical mythology or even from H.P. Lovecraft: Neptunian biotech wed with Cthulhoid military terror.

In other words, writing means that one thing can be crosswired or brought into contrast with another for the specific purpose of fueling further imaginative connections, new themes to be pulled apart and lengthened, teased out to form plots, characters, and scenes.

In addition, a writer of fiction might stage an otherwise straightforward storyline in an unexpected setting, in order to reveal something new about both. It’s a hard-boiled detective thriller—set on an international space station. It’s a heist film—set at the bottom of the sea. It’s a procedural missing-person mystery—set on a remote military base in Afghanistan.

Thinking like a writer would mean asking why things have happened in this way and not another—in this place and not another—and seeing what happens when you begin to switch things around. It’s about strategic recombination.

I mention all this after reading a new essay by artist and critic James Bridle about algorithmic content generation as seen in children’s videos on YouTube. The piece is worth reading for yourself, but I wanted to highlight a few things here.

[Image: Wiring the ENIAC; via Wired]

In brief, the essay suggests that an increasingly odd, even nonsensical subcategory of children’s video is emerging on YouTube. The content of these videos, Bridle writes, comes from what he calls “keyword/hashtag association.” That is, popular keyword searches have become a stimulus for producing new videos whose content is reverse-engineered from those searches.

To use an entirely fictional example of what this means, let’s imagine that, following a popular Saturday Night Live sketch, millions of people begin Googling “Pokémon Go Ewan McGregor.” In the emerging YouTube media ecology that Bridle documents, someone with an entrepreneurial spirit would immediately make a Pokémon Go video featuring Ewan McGregor both to satisfy this peculiar cultural urge and to profit from the anticipated traffic.

Content-generation through keyword mixing is “a whole dark art unto itself,” Bridle suggests. As a particular keyword or hashtag begins to trend, “content producers pile onto it, creating thousands and thousands more of these videos in every possible iteration.” Imagine Ewan McGregor playing Pokémon Go, forever.

What’s unusual here, however, and what Bridle specifically highlights in his essay, is that this creative process is becoming automated: machine-learning algorithms are taking note of trending keyword searches or popular hashtag combinations, then recommending the production of content to match those otherwise arbitrary sets. For Bridle, the results verge on the incomprehensible—less Big Data, say, than Big Dada.

This is by no means new. Recall the origin of House of Cards on Netflix. Netflix learned from its massive trove of consumer data that its customers liked, among other things, David Fincher films, political thrillers, and the actor Kevin Spacey. As David Carr explained for the New York Times back in 2013, this suggested the outline of a possible series: “With those three circles of interest, Netflix was able to find a Venn diagram intersection that suggested that buying the series would be a very good bet on original programming.”

In other words, House of Cards was produced because it matched a data set, an example of “keyword/hashtag association” becoming video.
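To make that “Venn diagram intersection” concrete, here is a minimal sketch in Python (the viewer IDs and category sets are entirely invented for illustration) of the set-overlap reasoning Carr describes:

```python
# A toy model of greenlighting-by-Venn-diagram; all viewer IDs invented.
# Each set holds the users who watched a lot of one proven category.
fincher_fans = {"u01", "u02", "u03", "u05", "u08"}
thriller_fans = {"u02", "u03", "u04", "u05", "u09"}
spacey_fans = {"u02", "u03", "u05", "u07"}

# The intersection is the audience already predisposed to like a show
# combining all three ingredients.
core_audience = fincher_fans & thriller_fans & spacey_fans

print(sorted(core_audience))  # ['u02', 'u03', 'u05']
# If this overlap is large enough, the combination looks like a safe bet.
```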

The question here would be: what if, instead of a human producer, a machine-learning algorithm had been tasked with analyzing Netflix consumer data and generating an idea for a new TV show? What if that recommendation algorithm didn’t quite understand which combinations would be good or worth watching? It’s not hard to imagine an unwatchably surreal, even uncanny television show resulting from this, something that seems to make more sense as a data-collection exercise than as a coherent plot—yet Bridle suggests that this is exactly what’s happening in the world of children’s videos online.

[Image: From Metropolis].

In some of these videos, Bridle explains, keyword-based programming might mean something as basic as altering a few words in a script, then having actors playfully act out those new scenarios. Actors might incorporate new toys, new types of candy, or even a particular child’s name: “Matt” on a “donkey” at “the zoo” becomes “Matt” on a “horse” at “the zoo” becomes “Carla” on a “horse” at “home.” Each variant keyword combination then results in its own short video, and each of these videos can be monetized. The possible recombinations are, in effect, infinite.
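For a rough sense of how cheap this recombination is, here is a minimal sketch in Python (the keyword lists are invented, echoing the example above) of the slot-filling logic Bridle describes:

```python
from itertools import product

# Hypothetical keyword slots of the kind Bridle describes.
names = ["Matt", "Carla"]
animals = ["donkey", "horse"]
places = ["the zoo", "home"]

# Every slot-filling of the template yields a distinct, individually
# monetizable video title; the count grows multiplicatively with each
# new keyword added to any slot (2 * 2 * 2 = 8 titles here).
for name, animal, place in product(names, animals, places):
    print(f"{name} rides a {animal} at {place}")
```

Swap in a few dozen names, toys, and nursery rhymes and the same loop yields tens of thousands of “new” videos.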

In an age of easily produced digital animations, Bridle adds, these sorts of keyword micro-variants can be produced both extremely quickly and very nearly automatically. Some YouTube producers have even eliminated “human actors” altogether, he writes, “to create infinite reconfigurable versions of the same videos over and over again. What is occurring here is clearly automated. Stock animations, audio tracks, and lists of keywords being assembled in their thousands to produce an endless stream of videos.”

Bridle notes with worry that it is nearly impossible here “to parse out the gap between human and machine.”

Going further, he suggests that the automated production of new videos based on popular search terms has resulted in scenes so troubling that children should not be exposed to them—but, interestingly, Bridle’s reaction here seems to be based on those videos’ content. That is, the videos feature animated characters appearing without heads, or kids being buried alive in sandboxes, or even the painful sounds of babies crying.

What I find unsettling here is slightly different. The content, in my opinion, is simply strange: a kind of low-rent surrealism for kids, David Lynch-lite for toddlers. For thousands of years, Western folktales have featured cannibals, incest, haunted houses, even John Carpenter-like biological transformations, from woman to tree, or from man to pig and back again. Children burn to death on chariots in the sky or sons fall from atmospheric heights into the sea. These myths seem more nightmarish—on the level of content—than some of Bridle’s chosen YouTube videos.

Instead, I would argue, what’s disturbing here is what the content suggests about how things should be connected. The real risk would seem to be that children exposed to recommendation algorithms at an early age might begin to emulate them cognitively, learning how to think, reason, and associate based on inhuman leaps of machine logic.

Bridle’s inability “to parse out the gap between human and machine” might soon apply not just to these sorts of YouTube videos but to the children who grew up watching them.

[Image: Replicants in Blade Runner].

One of my favorite scenes in Umberto Eco’s novel Foucault’s Pendulum is when a character named Jacopo Belbo describes different types of people. Everyone in the world, Belbo suggests, is one of only four types: there are “cretins, fools, morons, and lunatics.”

In the context of the present discussion, it is interesting to note that these categories are defined by modes of reasoning. For example, “Fools don’t claim that cats bark,” Belbo explains, “but they talk about cats when everyone else is talking about dogs.” They get their references wrong.

It is Eco’s “lunatic,” however, who offers a particularly interesting character type for us to consider: the lunatic, we read, is “a moron who doesn’t know the ropes. The moron proves his [own] thesis; he has a logic, however twisted it may be. The lunatic, on the other hand, doesn’t concern himself at all with logic; he works by short circuits. For him, everything proves everything else. The lunatic is all idée fixe, and whatever he comes across confirms his lunacy. You can tell him by the liberties he takes with common sense, by his flashes of inspiration…”

It might soon be time to suggest a fifth category, something beyond the lunatic, where thinking like an algorithm becomes its own strange form of reasoning, an alien logic gradually accepted as human over two or three generations to come.

Assuming I have read Bridle’s essay correctly—and it is entirely possible I have not—he seems disturbed by the content of these videos. I think the more troubling aspect, however, is in how they suggest kids should think. They replace narrative reason with algorithmic recommendation, connecting events and objects in weird, illogical bursts lacking any semblance of internal coherence, where the sudden appearance of something completely irrelevant can nonetheless be explained because of its keyword-search frequency. Having a conversation with someone who thinks like this—who “thinks” like this—would be utterly alien, if not logically impossible.

So, to return to this post’s beginning, one of the thrills of thinking like a writer, so to speak, is precisely in how it encourages one to bring together things that might not otherwise belong on the same page, and to work toward understanding why these apparently unrelated subjects might secretly be connected.

But what is thinking like an algorithm?

It will be interesting to see if algorithmically assembled material can still offer the sort of interpretive challenge posed by narrative writing, or if the only appropriate response to the kinds of content Bridle describes will be passive resignation, indifference, knowing that a data set somewhere produced a series of keywords and that the story before you goes no deeper than that. So you simply watch the next video. And the next. And the next.

Pleased to meet you. Hope you guess my name.

There was an interesting sequence of otherwise unrelated articles published over the last few days.

Over at Aeon, Murray Shanahan, a professor of “cognitive robotics,” asked: “Beyond humans, what other kinds of minds might be out there? From algorithms to aliens, could humans ever understand minds that are radically unlike our own?” He goes on to discuss, and even graph out, “the space of possible minds.” Briefly, I’m reminded of one of my favorite quotations of all time, from author William S. Burroughs, who, in his book The Ticket That Exploded, described “a vast mineral consciousness near absolute zero thinking in slow formations of crystal,” hidden somewhere inside the surface of the Earth. Try understanding—and conversing with—that.

As an aside, I generally find these sorts of discussions—including, most of all, the Turing Test—to be oddly fixated not on consciousness at all, but specifically on the social mores and recognizable etiquette of a well-educated, middle-class Western consciousness capable of rational conversation, something that is by no means synonymous even with human self-awareness, let alone with sentience itself. Engaging in conversation with your own coworkers can already be unnervingly impossible, let alone recognizing the potential intelligence of a sea urchin, a virus, a geomagnetic field, or a pulsar. Or, for that matter, a “time crystal.”

In any case, while some of us are contemplating the existence of other types of minds, those other types of minds might simply be trying to rip us off—or so the New York Times suggested in an article called “As Artificial Intelligence Evolves, So Does Its Criminal Potential.”

In a scenario that sounds like something from Rivka Galchen’s recent book, Atmospheric Disturbances, we’re told to “imagine receiving a phone call from your aging mother seeking your help because she has forgotten her banking password. Except it’s not your mother. The voice on the other end of the phone call just sounds deceptively like her. It is actually a computer-synthesized voice, a tour-de-force of artificial intelligence technology that has been crafted to make it possible for someone to masquerade via the telephone.”

You can read the rest of the article, but there’s something oddly hilarious in the fear that we might finally encounter another form of radically inhuman intelligence—only for it to prank call us, spam us, and con us out of our life savings.

And then it gets worse. Quartz reports that researchers at MIT are using artificial intelligence “to create pure horror.” “A series of algorithms dubbed the Nightmare Machine is an effort to find the root of horror by generating ghoulish faces, and then relying on user feedback to see which approach makes the freakiest images,” we read.

To be completely honest, the resulting images are disappointing and stupid—a Target Halloween costume aisle is more frightening—but the notion, not that we will encounter an alien intelligence intent on terrifying us, but that we will deliberately create one specifically for this purpose, is excellent evidence for anyone wondering how humans have made it this far.

The Neurological Side-Effects of 3D

[Image: Auguste Choisy].

France is considering a ban on stereoscopic viewing equipment—i.e. 3D films and game environments—for children, due to “the possible [negative] effect of 3D viewing on the developing visual system.”

As a new paper suggests, the use of these representational technologies is “not recommended for children under the age of six” and advised only “in moderation for those under the age of 13.”

There is very little evidence to back up the ban, however. As Martin Banks, a professor of vision science at UC Berkeley, points out in a short piece for New Scientist, “there is no published research, new or old, showing evidence of adverse effects from watching 3D content other than the short-term discomfort that can be experienced by children and adults alike. Despite several years of people viewing 3D content, there are no reports of long-term adverse effects at any age. On that basis alone, it seems rash to recommend these age-related bans and restrictions.”

Nonetheless, he adds, there is a slight possibility that 3D technologies could have undesirable neuro-physical effects on infants:

The human visual system changes significantly during infancy, particularly the brain circuits that are intimately involved in perceiving the enhanced depth associated with 3D viewing technology. Development of this system slows during early childhood, but it is still changing in subtle ways into adolescence. What’s more, the visual experience an infant or young child receives affects the development of binocular circuits. These observations mean that there should be careful monitoring of how the new technology affects young children.

But not necessarily an outright ban.

In other words, overly early—or quantitatively excessive—exposure to artificially 3-dimensional objects and environments could be limiting the development of retinal strength and neural circuitry in infants. But no one is actually sure.

What’s interesting about this for me—and what simultaneously inspires a skeptical reaction to the supposed risks involved—is that we are already surrounded by immersive and complexly 3-dimensional spatial environments, built landscapes often complicated by radically diverse and confusing focal lengths. We just call it architecture.

Should the experience of disorienting works of architecture be limited for children under a certain age?

[Image: Another great image by Auguste Choisy].

It’s not hard to imagine taking this proposed ban to its logical conclusion, claiming that certain 3-dimensionally challenging works of architectural space should not be experienced by children younger than a certain age.

Taking a cue from roller coasters and other amusement park rides considered unsuitable for people with heart conditions, buildings might come with warning signs: Children under the age of six are not neurologically equipped to experience the following sequence of rooms. Parents are advised to prevent their entry.

It’s fascinating to think that, due to the potential neurological effects of the built environment, whole styles of architecture might have to be reserved for older visitors, like an X-rated film. You’re not old enough yet, the guard says patronizingly, worried that certain aspects of the building will literally blow your mind.

Think of it as a Schedule I controlled space.

[Image: From the Circle of Francesco Galli Bibiena, “A Capriccio of an Elaborately Decorated Palace Interior with Figures Banqueting, The Cornices Showing Scenes from Mythology,” courtesy of Sotheby’s].

Or maybe this means that architecture could be turned into something like a new training regimen, as if you must graduate up a level before you are able to handle specific architectural combinations, like conflicting lines of perspective, unreal implications of depth, disorienting shadowplay, delayed echoes, anamorphic reflections, and other psychologically destabilizing spatial experiences.

Like some weird coming-of-age ceremony developed by a Baroque secret society overly influenced by science fiction, interested mentors watch every second as you and other trainees react to a specific sequence of architectural spaces, waiting to see which room—which hallway, which courtyard, which architectural detail—makes you crack.

Gifted with a finely honed sense of balance, however, you progress through them all—only to learn at the end that there are four further buildings, structures designed and assembled in complete secrecy, that only fifteen people on earth have ever experienced. Of those fifteen, three suffered attacks of amnesia within a year.

Those buildings’ locations are never divulged and you are never told what to prepare for inside of them—what it is about their rooms that makes them so neurologically complex—but you are advised to study nothing but optical illusions for the next six months.

[Image: One more by Auguste Choisy].

Of course, you’re told, if it ever becomes too much, you can simply look away, forcing yourself to focus on only one detail at a time before opening yourself back up to the surrounding spatial confusion.

After all, as Banks writes in New Scientist, the discomfort caused by one’s first exposure to 3D-viewing technology simply “dissipates when you stop viewing 3D content. Interestingly, the discomfort is known to be greater in adolescents and young adults than in middle-aged and elderly adults.”

So what do you think—could (or should?) certain works of architecture ever be banned for neurologically damaging children under a certain age? Is there any evidence that spatially disorienting children’s rooms or cribs have the same effect as 3D glasses?