A team of neuroscientists reported that a noninvasive technology enabled a conscious thought from the brain of one human volunteer to be directly communicated to another person miles away.
What if a person could directly communicate a thought or concept to someone else without saying a word — or even seeing them? That is the possibility raised by a new paper from a team of neuroscientists in Barcelona, Strasbourg, and Boston, who reported that a conscious thought from the brain of one human volunteer was directly communicated to another person miles away using noninvasive technology.
Although it was transmitted via the Internet, the communication traveled not so much at the speed of light as the speed of drying paint. It took more than an hour for a single four-letter word to be transmitted, and the communication needed to be deciphered by researchers on the receiving end. Nonetheless, the proof-of-principle demonstration marked the first published study of the sort of mind-reading technology once considered the exclusive domain of psychics and lovers.
The study, published Aug. 19 in the open-access journal PLoS One, was co-authored by Giulio Ruffini, PhD, a physicist and chief executive officer of Starlab, a Spanish company that develops space and neuroscience technologies, including the electroencephalogram (EEG) and transcranial magnetic stimulation (TMS) devices used in the study.
“The goal was to demonstrate that it is possible to establish brain-to-brain communication using noninvasive electromagnetic means,” Dr. Ruffini told Neurology Today in an email. “Such technologies could revolutionize human communication. Who knows, we may transmit emotion or complex concepts some day.”
Because the study required the subjects to consciously send and detect encoded information, Dr. Ruffini said the experiment should be considered not just brain-to-brain communication but mind-to-mind communication as well. As described in the paper, however, the communication was more Morse code than Shakespearean sonnet.
In March and April, a volunteer in the city of Thiruvananthapuram, India, was fitted with an EEG. He was then asked to look at a computer monitor that displayed a target either at the top or bottom of the screen. If the target was at the top of the screen, the volunteer was to imagine moving his hands; if at the bottom, he was to imagine moving his feet. A filter applied to three of the EEG electrodes detected signals characteristic of these two imagined movements.
The two EEG signals, characteristic of either hand or foot movements, were translated into a sort of binary code. The hand movements were given a value of 1, while the foot movements were assigned a value of 0. To use this binary code as the basis of an alphabet, the researchers employed an ancient cipher code consisting of a series of five digits, or “bits,” each one associated with either hand or foot movements — that is, 1 or 0. The letter A was encoded as 00000, the letter B became 00001, the letter C became 00010, and so on. In this manner, any word could be transmitted just by having the emitter imagine moving his hands or feet.
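The five-bit letter code described above can be sketched in a few lines. This is a minimal illustration, assuming the cipher simply indexes letters in alphabetical order (A as 00000, B as 00001, and so on, as the article states); the study's exact code table may differ.

```python
def encode_word(word):
    """Map each letter to a 5-bit string: A -> 00000, B -> 00001, C -> 00010, ..."""
    return "".join(format(ord(c.upper()) - ord("A"), "05b") for c in word)

# 'hola': h=7, o=14, l=11, a=0 -> 20 bits in all
print(encode_word("hola"))  # 00111011100101100000
```

With five bits per letter, a four-letter word such as "hola" or "ciao" comes to exactly 20 bits, matching the figure given in the study.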
The experimenters sought to transmit one of two words: “ciao” or “hola.” Four letters, with five bits of either 1 or 0 per letter, required 20 bits per word. But to overcome possible errors in sending or receiving, they encoded each word seven times. To avoid having the receiver simply guess the correct word, all 140 bits were then randomly encoded in a second cipher system known only to the investigators.
In other words, the sender was asked to do nothing more than view a series of 140 targets on a screen, up or down, and to imagine moving his hands or feet in response.
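The construction of the full 140-bit stream — seven repetitions of the 20-bit word, then a reordering known only to the investigators — can be sketched as follows. The article does not describe the second cipher, so a key-driven permutation is assumed here purely for illustration.

```python
import random

def scramble(bits, key):
    """Reorder bit positions with a secret key (stands in for the study's
    second cipher, whose details the article does not give)."""
    rng = random.Random(key)
    order = list(range(len(bits)))
    rng.shuffle(order)
    return "".join(bits[i] for i in order)

def unscramble(bits, key):
    """Invert the permutation on the receiving end, using the same key."""
    rng = random.Random(key)
    order = list(range(len(bits)))
    rng.shuffle(order)
    out = [""] * len(bits)
    for dst, src in enumerate(order):
        out[src] = bits[dst]
    return "".join(out)

word = "00111011100101100000"        # 'hola' in the 5-bit cipher (20 bits)
stream = scramble(word * 7, 42)      # 7 copies -> 140 bits, then scrambled
print(len(stream))                   # 140
print(unscramble(stream, 42) == word * 7)  # True
```

Without the key, the receiver sees only an apparently random sequence of ups and downs, which is what prevents guessing the word.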
The results of these EEG recordings were transmitted via the Internet to Strasbourg, France, where three volunteers, each wearing a TMS device, were stimulated with biphasic TMS pulses at a subject-specific occipital cortex site, with the direction of the pulses determined by the bit value (1 or 0). One orientation of the TMS-induced electric field produced phosphenes, or flashes of light visible in the periphery of their closed eyes, representing the bit value of 1, and the other direction produced no phosphenes, representing 0.
The orientation-encoding scheme was designed to prevent the subjects from becoming aware of the transmission content through sensory cues. In addition, the researchers controlled the device robotically and had subjects wear earplugs and an eye mask.
Despite a combined error rate of up to 11 percent in transmitting each of the 140 bits, Dr. Ruffini’s team reported that the seven-fold redundancy resulted in perfect deciphering of the encoded words. Transmitting those 140 bits, however, took about 70 minutes in all.
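How seven-fold redundancy overcomes an 11 percent bit error rate can be shown with a bitwise majority vote, which is the natural way to decode repeated copies (assumed here; the article does not spell out the team's decoding step). Each received copy is corrupted in a couple of positions, but as long as no position is wrong in a majority of the seven copies, the vote recovers every bit.

```python
def majority_decode(copies):
    """Bitwise majority vote over repeated copies of the same bit string."""
    return "".join(
        "1" if sum(c[i] == "1" for c in copies) * 2 > len(copies) else "0"
        for i in range(len(copies[0]))
    )

def flip(bits, positions):
    """Flip the bits at the given positions (simulated transmission errors)."""
    out = list(bits)
    for p in positions:
        out[p] = "1" if out[p] == "0" else "0"
    return "".join(out)

word = "00111011100101100000"  # 'hola' in the 5-bit cipher

# Seven received copies, each wrong in 2 of 20 bits (10 percent error),
# with no bit position corrupted in more than one copy.
copies = [
    flip(word, [0, 5]), flip(word, [1, 6]), flip(word, [2, 7]),
    flip(word, [3, 8]), flip(word, [4, 9]), flip(word, [10, 15]),
    flip(word, [11, 16]),
]
print(majority_decode(copies) == word)  # True: the vote repairs every error
```

This is the same trade-off the researchers made: seven times the transmission time bought reliable decoding despite a noisy channel.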
“I believe we may be able to leverage further on cortical communication in the future, by targeting brain networks rather than local cortical spots,” Dr. Ruffini said. “This will widen the bandwidth considerably.”
Seung-Schik Yoo, PhD, an associate professor of radiology at Harvard Medical School, who has carried out similar studies, called the demonstration “pretty cool.” “I’m glad they did it, but there’s a lot of progress that needs to be made,” he said. “It will have to get much faster if it’s ever going to be practical.”
Last year, Dr. Yoo published a study in which human volunteers controlled the movement of a rat’s tail through a noninvasive brain-to-brain interface. In his study, published in PLoS One, the human volunteers had to look directly into a strobe light on a computer screen, either in the center or at the corner of the screen.
“If you intentionally look at the strobe-light signal, your brain syncs to it,” Dr. Yoo said. “If you start thinking of something else, the synchronization goes away.”
When the EEG detected the synchronization, a signal was sent to a transcranial focused ultrasound device on an anesthetized Sprague-Dawley rat that excited the animal’s motor area, eliciting a tail movement.
Rajesh Rao, PhD, an associate professor of computer science and engineering and director of the Neural Systems Laboratory at the University of Washington, and Andrea Stocco, PhD, a research assistant professor in psychology at UW, led a similar study, which has been accepted for publication in PLoS One.
In an email from India, where he is on a Fulbright scholarship, Dr. Rao said that the paper published by Dr. Ruffini’s group used the same basic approach he had developed, namely, combining EEG in the sender with TMS in the receiver. However, in Dr. Rao’s experiment, the TMS stimulated the receiver’s motor cortex rather than the visual cortex, thereby initiating a finger movement in the receiver as the two participants cooperated in playing a computer game.
“Unlike our system, which is a synchronous brain-to-brain interface where each person is interacting live with the other, their system is asynchronous,” Dr. Rao said. “It uses email to send EEG signals. Also, the sender does not receive any feedback from the receiver, whereas in our system, the two brains are actively cooperating with one another to solve a task (a computer game).”
However, Dr. Ruffini pointed out that in the University of Washington study, the effect of TMS stimulation became known to the receiver only when it caused a movement of his or her finger. “In our experiment, no sensory cues such as touch or sight were used to communicate the message. Transmission was purely brain-to-brain,” he said.
Both researchers agreed that advances in brain-to-brain interfaces could permit radically new forms of nonverbal communication between humans, including perhaps the transfer of abstract knowledge and skills directly.
“Eventually those who are paralyzed could use the technology to communicate their thoughts, emotions, and feelings directly to loved ones,” Dr. Rao said. “It could lead to better sensory prostheses for the blind and deaf. Further into the future, people may adopt brain-to-brain interfaces as a new way to collaborate and solve challenges facing humanity that a single individual may be unable to solve.”