The researchers used Stable Diffusion, a deep learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects who were shown up to 10,000 images while inside an MRI machine.
“I went into the bathroom and looked at myself in the mirror and saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy,’” Takagi said.

“This is not mind-reading,” Takagi said. “Unfortunately there are many misunderstandings with our research.”

Takagi and Nishimoto’s research generated much buzz in the tech community, which has been electrified by breakneck advancements in AI, including the release of ChatGPT, which produces human-like speech in response to a user’s prompts.
In Takagi and Nishimoto’s research, subjects had to sit in an fMRI scanner for up to 40 hours, which was costly as well as time-consuming. Takagi and Nishimoto’s framework could, however, be used with brain-scanning devices other than fMRI, such as EEG, or with hyper-invasive technologies like the brain-computer implants being developed by Elon Musk’s Neuralink.

The method also has limitations. For a start, it cannot yet be transferred to novel subjects: because the shape of the brain differs between individuals, a model created for one person cannot be applied directly to another.
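The per-subject limitation can be illustrated with a toy sketch. The ridge-regression decoder, the dimensions, and all numbers below are illustrative assumptions for demonstration, not the authors' actual pipeline: each subject's scans yield a different number of usable voxels, so a decoder fitted for one subject cannot even be applied to another's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: learn W mapping voxel activity X to image latents Y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

# Hypothetical subject A: 50 scans, 200 usable voxels, 16-dim image-latent targets.
X_a = rng.standard_normal((50, 200))
Y_a = rng.standard_normal((50, 16))
W_a = fit_ridge(X_a, Y_a)            # per-subject decoder, shape (200, 16)

# Hypothetical subject B: a different brain yields a different voxel grid (240 voxels).
X_b = rng.standard_normal((50, 240))
# X_b @ W_a fails outright: shapes (50, 240) and (200, 16) do not align,
# which is why a new decoder must be fitted from scratch for each subject.
```

In practice the mismatch runs deeper than array shapes: even with equal voxel counts, anatomical and functional differences mean the learned weights do not carry meaning across brains.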