[Translation] What happens when AI can ask the brain what it wants to see?

These images, obtained using an artificial intelligence algorithm called XDREAM, can stimulate certain neurons much better than any natural picture.

In April 2018, in a Harvard laboratory, a monkey named Ringo was shown strange images created by an artificial intelligence algorithm called XDREAM (a generative deep neural network paired with a genetic algorithm). The algorithm gradually tuned the images to stimulate one particular neuron in Ringo's brain, in an area thought to specialize in face recognition.

A genetic algorithm searched for stimuli that maximized the neural response, producing synthetic images of objects with complex combinations of shapes, colors, and textures. Some images resembled animals or people; in other cases, novel patterns emerged that did not fit into any clear semantic category.

XDREAM's images look like paintings by Kandinsky. You probably wouldn't want to hang them on the wall. But each of them is close to the ideal stimulus for a particular neuron, and together they tell us something interesting about how our brain perceives the world, and how much of this process we still do not understand.

Carlos Ponce (Associate Professor, Department of Neuroscience, Washington University, one of the authors of the project):
“At first the images were gray and shapeless. But over time, out of this haze, something began to look back at us. If the cells could dream, this is what they would dream about. This reveals the visual vocabulary of the brain.”

The first hints of this vocabulary appeared in 1962, when Torsten Wiesel and David Hubel showed that certain neurons in the visual centers of the brain are tuned to specific stimuli: spots of light, movement in particular directions, or lines at particular orientations. Since then, other neuroscientists have identified neurons that respond to colors, curves, faces, hands, and outdoor scenes. But there is a catch: these scientists always chose which forms to test, and their intuitions may not reflect the stimuli the neurons are actually tuned to.

Margaret Livingstone (professor of neuroscience at Harvard University, one of the authors of the project):
“The fact that a cell responds to a certain category of images does not mean that you really understand what it wants.”

So why not ask the neurons what they want to see?

This was the idea behind the XDREAM project, an algorithm developed by a Harvard student named Will Xiao. Sets of gray, shapeless starting images were shown to monkeys, and the algorithm recombined and mutated the ones that elicited the strongest responses in the selected neurons to create a new generation of images. Xiao trained XDREAM on 1.4 million real photos so that it would generate synthetic images with natural properties. Over the course of 250 generations, the synthetic images became more and more effective, until they excited the target neurons far more strongly than any natural image.
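The evolutionary loop described above can be sketched in miniature. This is a toy illustration, not the real XDREAM: the actual experiment scored images by the spike rate of a live neuron and generated candidates with a trained deep generative network, whereas here the "neuron" is a hypothetical stand-in function that prefers a bright center on a dark surround, and the images are tiny pixel arrays evolved directly.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_SHAPE = (16, 16)   # tiny grayscale "images" for this sketch
POP_SIZE = 20
N_GENERATIONS = 50

def neuron_response(img):
    # Stand-in for a recorded neuron's firing rate: this fake neuron
    # "prefers" a bright center on a dark surround. In the real
    # experiment, the score came from spikes of a live neuron.
    center = img[6:10, 6:10].mean()
    return center - img.mean()

def mutate(img, scale=0.05):
    # Small random perturbation; pixels kept in [0, 1].
    return np.clip(img + rng.normal(0.0, scale, img.shape), 0.0, 1.0)

def crossover(a, b):
    # Mix two parent images with a random per-pixel mask.
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

# Start, as in the article, from uniform gray images.
population = [np.full(IMG_SHAPE, 0.5) for _ in range(POP_SIZE)]

for generation in range(N_GENERATIONS):
    scores = [neuron_response(img) for img in population]
    order = np.argsort(scores)[::-1]
    # Keep the images that excited the "neuron" most...
    parents = [population[i] for i in order[:POP_SIZE // 2]]
    # ...and breed the next generation from them.
    children = []
    while len(parents) + len(children) < POP_SIZE:
        i, j = rng.choice(len(parents), size=2, replace=False)
        children.append(mutate(crossover(parents[i], parents[j])))
    population = parents + children

best = max(population, key=neuron_response)
print(f"best response after evolution: {neuron_response(best):.3f}")
```

Starting from featureless gray, selection pressure alone pulls the population toward whatever the scoring neuron happens to prefer, with no human deciding in advance what the "right" stimulus looks like.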

Carlos Ponce :
“It was interesting to finally let the cell tell us what it encodes, instead of guessing.”

However, there was a risk that XDREAM could become akin to a Rorschach test, in which people see what they want to see. To rule this out, the team used a different algorithm to confirm that the synthetic images they perceived as faces really did resemble real faces more than other natural objects. They also showed that the neurons that pushed XDREAM toward face-like images responded best to photos of real faces.

Carlos Ponce :
“These images are so good at stimulating the monkeys' visual neurons that they also tickle our brains in a way that makes us uncomfortable. If someone ran XDREAM on human neurons, would they find similar images or different ones, and what would we make of them? No one has done this yet. But it makes me think.”

Margaret Livingstone :
“The unsettling XDREAM images may hint at why so many mythical creatures are exaggerated versions of familiar things. Visual neurons seem to be prone to exaggeration. I think that gargoyles and gnomes, these archetypes that people imagine... there is a basis for them in our brain.”
( In previous studies, her team showed that face-selective cells respond more strongly to caricatures than to real faces. )

Beyond the strangeness of these images, the most surprising thing about them is that they are mostly unrecognizable. The team studied 46 neurons in six monkeys, and most of the resulting images were mixes of color, texture, and shape that did not fit into obvious categories.

Leila Isik (a neuroscientist at Johns Hopkins University):
“It is striking that cells thought to encode simple objects or object parts can, in fact, encode much more complex visual stimuli. Some may find it unsatisfying that the generated images cannot easily be described in terms of semantic categories. But this ‘limitation’ may simply reflect the complex nature of the primate visual cortex.”

Through these experiments, researchers are learning more not only about the brain itself, but also about how to imitate it. Many neuroscientists are building artificial neural networks that analyze images and recognize objects, supposedly doing something close to what the brain's real visual centers do. But how close?

To find out, Pouya Bashivan (of the Massachusetts Institute of Technology) used a neural network to create images that, in theory, should stimulate a specific area of the brain in a prescribed way. His colleagues Kohitij Kar and James DiCarlo then showed these synthetic images to monkeys to see whether they worked as predicted.
The results were mixed. The neural network managed to create images that stimulated certain neurons more strongly than natural photos did. But it fared worse on another task: exciting one neuron while suppressing all of its neighbors. This suggests that the network does not yet capture everything there is to know about the visual system.
Bashivan's team focused on a brain region thought to respond to simple curves. The images his network created were full of gratings and swirls. Like the “hallucinogenic” XDREAM images, these complex pictures suggest that our understanding of how the brain sees the world is too simplistic.
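The "excite one neuron, suppress its neighbors" goal can be framed as optimizing an image against a model of the neural population. The sketch below is a deliberately simplified stand-in: where Bashivan's team differentiated through a trained deep network of visual cortex, here the model "neurons" are just random linear filters, and the objective rewards the target neuron's response minus the average response of the others.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PIXELS = 64    # flattened toy "image"
N_NEURONS = 8    # small population of model neurons

# Hypothetical stand-in for a trained model of visual cortex:
# each model neuron responds linearly to the image.
W = rng.normal(0.0, 1.0, (N_NEURONS, N_PIXELS))

def responses(img):
    return W @ img

def stretch_objective(img, target=0):
    # Drive one neuron up while holding its neighbors down --
    # the "one hot" goal the article describes.
    r = responses(img)
    return r[target] - np.delete(r, target).mean()

# For this linear model, the gradient of the objective is constant;
# a deep network would require backpropagation here instead.
TARGET = 0
grad = W[TARGET] - np.delete(W, TARGET, axis=0).mean(axis=0)

img = np.zeros(N_PIXELS)
for _ in range(200):
    img = np.clip(img + 0.01 * grad, -1.0, 1.0)  # gradient ascent step

print(f"target neuron response: {responses(img)[TARGET]:.2f}")
print(f"mean of other neurons:  {np.delete(responses(img), TARGET).mean():.2f}")
```

In this linear toy, the objective is easy to satisfy; the interesting finding in the article is precisely that the real experiment succeeded at raising single-neuron responses but struggled with suppression, hinting that real neural populations are more entangled than current models assume.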

Pouya Bashivan :
“If we follow only human intuition, we can make mistakes. The best way is to create intelligent systems that contain all the knowledge in this area.”

Carlos Ponce :
“As biologists, many of us are still skeptical that modern neural networks are similar enough to the brain to reliably simulate it. But this is the way forward, and such studies will help improve them. Both approaches are about understanding the same black box: the brain. Both methods are needed.”
