As technology continues to grow, personal privacy continues to shrink—including your very thoughts. A new “mind reading” device that has been developed by Japanese researchers is capable of deciphering words from brain waves before they are spoken.
Scientists have discovered that electrical activity in the brain is the same whether or not someone speaks their thoughts. By analyzing various wave forms produced by the brain, the researchers were able to decipher words like “goo,” “scissors” and “par” before they were articulated by Japanese participants. (1)
The researchers who developed the technology claim they can identify brain waves attached to syllables or characters of the Japanese syllabary. In other words (forgive the pun), it is possible to decode entire words and sentences without articulating those words and sentences.
To “hear” an unspoken word, the team used electroencephalography, or EEG, which records the brain’s electrical activity through an array of electrodes placed on the scalp.
The researchers focused on a region of the brain known as Broca’s area, which is responsible for processing language and speech.(1)
The lead author of the study, Professor Yamazaki Toshimasa, an expert in brain–computer interfaces at the Kyushu Institute of Technology in Japan’s Fukuoka Prefecture, asked 12 men, women and children to recite a string of words. While the participants spoke, the researchers measured their brain waves.(1)
The scientists discovered that each syllable corresponded to a distinct brain wave pattern that appeared before the word was spoken. The researchers knew what the participants were going to say up to two seconds before they spoke.
This was accomplished with a database of various sounds. The team found that they could match specific brainwave patterns to words, regardless of whether the words were spoken aloud.
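The study itself does not publish its algorithm, but the matching idea described above can be illustrated with a minimal, entirely hypothetical sketch: compare a recorded EEG epoch against a small database of per-syllable templates and report the closest match. All names, signal lengths and data here are invented for illustration, not taken from the study.

```python
import numpy as np

def classify_epoch(epoch, templates):
    """Return the syllable whose stored template best correlates with the epoch."""
    best_label, best_score = None, -np.inf
    for label, template in templates.items():
        # Pearson correlation between the recorded epoch and the template
        score = np.corrcoef(epoch, template)[0, 1]
        if score > best_score:
            best_label, best_score = label, score
    return best_label

rng = np.random.default_rng(0)
# Fake "database" of syllable templates (e.g., averaged training epochs)
templates = {syll: rng.standard_normal(256) for syll in ("ha", "ru", "na", "tsu")}
# A noisy epoch generated from the "ru" template
epoch = templates["ru"] + 0.3 * rng.standard_normal(256)
print(classify_epoch(epoch, templates))  # → ru
```

Real EEG decoding involves far more (filtering, artifact rejection, trained classifiers), but the core step of scoring a signal against a pattern database is the same shape as this sketch.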
According to a paper presented at a conference sponsored by the Institute of Electronics, Information and Communication Engineers, the researchers’ algorithm correctly pinpointed the Japanese words haru (spring) and natsu (summer) approximately 25 to 47 percent of the time.(1)
Furthermore, the researchers discovered that they could predict single characters approximately 88 percent of the time. Professor Yamazaki hopes the technology could help people who cannot speak, or who are paralyzed, communicate. He said the team has trained the system to identify seven Japanese words but hopes to expand its vocabulary in the near future.(1)
Of course, there are limits in trying to measure thoughts, which are irreducibly first-person experiences, with objective, third-person criteria.
For instance, scientists have very strong objective measures for detecting states like anxiety and fear at a particular moment. Examples include a heightened amygdala response, sweaty palms and a spike in blood cortisol.
These are all objective, third-person ways to measure fear. Nevertheless, if half of the patients in a study walked into a lab claiming to experience fear but showed none of these signs, their first-person experience would trump the researchers’ third-person data.
In fact, this is precisely what occurred in the most recent study, which failed to decipher the thoughts of more than half the people in the experiments. Nevertheless, Nishinippon, a Japanese paper that reported on these recent developments, claims this technology will eventually be able to identify brain waves linked to Japanese words like will, one, turning and do with 80 to 90 percent accuracy.(1)
At best, the study illustrates that you can think before you speak but you can’t speak before you think.