Sunday, March 12, 2023

AI model helps re-create what people see by reading brain scans

As neuroscientists struggle to demystify how the human brain converts what our eyes see into mental images, artificial intelligence (AI) has been getting better at mimicking that feat. A recent study, scheduled to be presented at an upcoming computer vision conference, demonstrates that AI can read brain scans and re-create largely realistic versions of images a person has seen. As this technology develops, researchers say, it could have numerous applications, from exploring how various animal species perceive the world to perhaps one day recording human dreams and aiding communication in people with paralysis.

Unlike previous efforts that used AI algorithms to decipher brain scans, which had to be trained on large data sets, the new system, built on the Stable Diffusion image generator, got more out of less training data for each participant by incorporating photo captions into the algorithm. It’s a novel approach that combines textual and visual information to “decipher the brain,” says Ariel Goldstein, a cognitive neuroscientist at Princeton University who was not involved with the work.
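One way to see why caption information can stretch a small training set: a decoder that predicts a compact caption embedding has far fewer weights to fit than one that predicts raw pixels directly. A back-of-the-envelope sketch with made-up dimensions (the voxel count and embedding size below are illustrative assumptions, not figures from the study):

```python
# Rough parameter-count comparison for a linear decoder.
# All numbers are hypothetical, chosen only for illustration.
voxels = 5000                  # assumed number of fMRI voxels used
image_pixels = 512 * 512 * 3   # target if predicting pixels directly
caption_embed = 768            # target if predicting a caption embedding

params_pixels = voxels * image_pixels   # weights: voxels -> pixels
params_embed = voxels * caption_embed   # weights: voxels -> embedding

print(params_pixels // params_embed)    # → 1024
```

With roughly a thousandfold fewer weights to estimate, far fewer brain scans per participant are needed to fit the mapping; the generative model then fills in the pixel-level detail from the compact embedding.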

The AI algorithm makes use of information gathered from different regions of the brain involved in image perception, such as the occipital and temporal lobes, according to Yu Takagi, a systems neuroscientist at Osaka University who worked on the experiment. The system interpreted information from functional magnetic resonance imaging (fMRI) brain scans, which detect changes in blood flow to active regions of the brain. When people look at a photo, the temporal lobes predominantly register information about the contents of the image (people, objects, or scenery), whereas the occipital lobe predominantly registers information about layout and perspective, such as the scale and position of the contents. All of this information is recorded by the fMRI as it captures peaks in brain activity, and these patterns can then be reconverted into an imitation image using AI.
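The division of labor described above, with the temporal lobes carrying semantic content and the occipital lobe carrying layout, suggests a simple decoding scheme: fit one linear map per brain region, each predicting a different conditioning signal for the image generator. The sketch below uses synthetic data and ridge regression; all dimensions, variable names, and the linear-decoder choice are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (made up for illustration).
n_scans = 200            # training scans: images each participant viewed
occipital_voxels = 500   # layout/perspective signal
temporal_voxels = 800    # semantic-content signal
latent_dim = 64          # stands in for a diffusion image latent
text_dim = 32            # stands in for a caption embedding

# Synthetic stand-ins for preprocessed fMRI responses and targets.
X_occ = rng.normal(size=(n_scans, occipital_voxels))
X_tmp = rng.normal(size=(n_scans, temporal_voxels))
Z_layout = rng.normal(size=(n_scans, latent_dim))
C_text = rng.normal(size=(n_scans, text_dim))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + aI)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# One decoder per region, each trained toward a different target.
W_layout = ridge_fit(X_occ, Z_layout)   # occipital -> layout latent
W_text = ridge_fit(X_tmp, C_text)       # temporal -> semantic embedding

# Decoding a new scan yields the two conditioning signals a
# generative model would use to reconstruct an imitation image.
new_occ = rng.normal(size=(1, occipital_voxels))
new_tmp = rng.normal(size=(1, temporal_voxels))
z_hat = new_occ @ W_layout   # predicted layout latent, shape (1, 64)
c_hat = new_tmp @ W_text     # predicted caption embedding, shape (1, 32)
print(z_hat.shape, c_hat.shape)
```

In this framing, the generative model never sees voxels at all; it only receives the two decoded vectors, which is what makes the approach practical with modest per-participant training data.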