A 3D printed Android mascot Bugdroid is seen in front of a Google logo in this illustration taken July 9, 2017. Reuters

Researchers at Purdue University in the United States have developed an artificial intelligence technology that could visualize what people see, think, and imagine.

The newly developed technology uses fMRI scans to record the brain activity of three women volunteers as they watched videos, then uses artificial intelligence to reconstruct computer visuals of those videos. The research relied on a convolutional neural network, the same type of algorithm used for face detection and object recognition in smartphones and computers.
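For context, object recognition with a convolutional neural network can be sketched in a few lines of Python. The following is an illustrative example using a pretrained torchvision model, not the Purdue team's code; the file name frame.jpg stands in for a hypothetical video frame.

    # Illustrative sketch: recognizing the object in one image frame
    # with a pretrained convolutional neural network (PyTorch/torchvision).
    import torch
    from torchvision import models
    from PIL import Image

    # The specific architecture (ResNet-18) is an assumption for this sketch.
    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights)
    model.eval()

    preprocess = weights.transforms()   # resize, crop, normalize as the model expects

    image = Image.open("frame.jpg")     # hypothetical video frame
    batch = preprocess(image).unsqueeze(0)

    with torch.no_grad():
        logits = model(batch)

    # Map the highest-scoring output back to a human-readable category.
    label = weights.meta["categories"][int(logits.argmax())]
    print(label)                        # e.g. "sea turtle"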

Zhongming Liu, an assistant professor in Purdue University's Weldon School of Biomedical Engineering and School of Electrical and Computer Engineering, said in a press release, "That type of network has made an enormous impact in the field of computer vision in recent years. Our technique uses the neural network to understand what you are seeing."

The mind-reading technology is based on convolutional neural networks, a class of deep-learning algorithms. Such networks had previously been used to study how the brain processes static images and other visual stimuli; with the new work, the researchers can visualize how the brain processes movies and natural scenes. Haiguang Wen, a doctoral student, said this is a step toward decoding the brain while people try to make sense of complex, dynamic visual surroundings.

The research paper, which appeared online in the journal Cerebral Cortex on Oct. 20, states that fMRI data recorded from the three women, each of whom watched 975 video clips over a span of 11.5 hours, was used to train models that predict activity in the brain's visual cortex. Those models were then used to reconstruct videos containing images the women had never watched before.
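The pipeline described here can be pictured as an encoding step, in which a model learns to predict voxel responses in the visual cortex from CNN features of the video, which is then inverted to decode new scans. A minimal sketch under assumed array shapes, with random stand-in data rather than the study's actual features or scans:

    # Illustrative sketch of the encoding step: a regularized linear map
    # from CNN features of the video to visual-cortex voxel responses.
    # All names and shapes are assumptions, not taken from the paper.
    import numpy as np
    from sklearn.linear_model import Ridge

    n_timepoints, n_features, n_voxels = 1000, 512, 5000
    cnn_features = np.random.randn(n_timepoints, n_features)    # one feature vector per fMRI volume
    voxel_responses = np.random.randn(n_timepoints, n_voxels)   # measured fMRI responses

    # Fit one ridge regression predicting all voxels from the features.
    encoder = Ridge(alpha=1.0)
    encoder.fit(cnn_features, voxel_responses)

    # Predict cortex activity for a held-out clip; decoding inverts this mapping.
    held_out_features = np.random.randn(100, n_features)
    predicted_responses = encoder.predict(held_out_features)
    print(predicted_responses.shape)    # (100, 5000)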

The computer-generated output was placed side by side with the actual video frames for comparison. The model was found to have accurately decoded the fMRI data into specific categories of images.

Wen said the generated visuals included objects such as a water animal, the moon, a turtle, a person, and a bird in flight. "I think what is a unique aspect of this work is that we are doing the decoding nearly in real time, as the subjects are watching the video. We scan the brain every two seconds, and the model rebuilds the visual experience as it occurs," he said.
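The near-real-time loop Wen describes, one brain scan every two seconds mapped back to an image category, can be sketched as follows. The decoder weights and category list here are placeholders standing in for a trained model, not the study's actual decoder.

    # Illustrative sketch of near-real-time decoding: one fMRI volume
    # arrives every two seconds and is projected onto category scores.
    import numpy as np

    TR_SECONDS = 2.0    # repetition time: one brain scan every 2 seconds
    categories = ["water animal", "moon", "turtle", "person", "bird in flight"]

    n_voxels = 5000
    # Placeholder weights; in practice these would come from training.
    weight_matrix = np.random.randn(n_voxels, len(categories))

    def decode_volume(volume):
        """Map one scan's voxel activity to the best-scoring image category."""
        scores = volume @ weight_matrix     # (n_voxels,) @ (n_voxels, n_categories)
        return categories[int(np.argmax(scores))]

    for t in range(5):                      # simulate a short scanning session
        volume = np.random.randn(n_voxels)  # one scan's voxel activity
        print(f"t={t * TR_SECONDS:.0f}s ->", decode_volume(volume))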