October 10, 2014

Welcome to the Brain-Computer Interface Laboratory at East Tennessee State University, directed by Dr. Eric Sellers. The laboratory is located within the ETSU College of Arts and Sciences, Department of Psychology.

Amyotrophic lateral sclerosis, commonly known as ALS or Lou Gehrig’s Disease, is a progressive neurodegenerative illness that weakens the connections between the brain and the body. In the late stages of the disease, affected individuals lose the ability to move and speak, though for the most part they retain normal cognitive function. This condition is referred to as “locked-in syndrome” (LIS). Our lab studies how people can use electrical activity recorded from the scalp, known as the electroencephalogram (EEG), to control computers for the purposes of communicating with others and controlling their environments. By recording EEG signals from the scalp and detecting specific features of the EEG activity, we can translate brain activity into actions. The primary function of brain-computer interfaces (BCIs), as we see it, is to allow people to regain the control and communication they have lost to paralysis caused by ALS or by acute events such as brainstem stroke.

Our research draws on a variety of fields, including psychology, psychophysiology, cognitive neuroscience, computer science, electrical engineering, rehabilitation engineering, and neurology, to name a few. Initially, we test experimental procedures and manipulations in the laboratory. Procedures that yield the best results (in terms of speed and accuracy of communication) are then translated into formats specifically designed for people with severe motor disabilities. These protocols are then tested with disabled individuals in the laboratory, in the home, and in hospital environments. Our ultimate goal is to develop a BCI system that is robust and portable enough to meet the daily communication and social interaction needs of severely disabled individuals.

The basic principles of any BCI are as follows:

The figure shows a schematic of the essential components of a BCI system, illustrated as follows: 1) Signal acquisition, the recording of the brain signal, which is then digitized for analysis. 2) Signal processing, the conversion of the raw signal into a useful device command. This involves both feature extraction, the identification of meaningful changes in the signal, and feature translation, the conversion of those signal changes into a device command. 3) Device output, the overt command or control functions administered by the BCI system. These outputs range from word processing and communication to higher levels of control, such as driving a wheelchair or controlling a prosthetic limb. All of these elements work in concert to give the user control over his or her environment. (Modified from: Leuthardt et al. (2006). The emerging world of motor neuroprosthetics: a neurosurgical perspective. Neurosurgery, 59(1), 1-14.)
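The three stages above can be sketched in code. This is a minimal, hypothetical illustration only: the sample values, the mean-amplitude feature, and the threshold are all made up for demonstration and are far simpler than any real EEG pipeline.

```python
# Hypothetical sketch of the three BCI stages described above.
# The signal values, feature, and threshold are illustrative, not real EEG.

def acquire_signal():
    """Stage 1: signal acquisition -- here, a fake digitized EEG epoch."""
    return [0.1, 0.3, 1.2, 0.8, 0.2]  # made-up amplitude samples

def extract_feature(epoch):
    """Stage 2a: feature extraction -- mean amplitude of the epoch."""
    return sum(epoch) / len(epoch)

def translate_feature(feature, threshold=0.5):
    """Stage 2b: feature translation -- map the feature to a device command."""
    return "SELECT" if feature > threshold else "NO-OP"

def device_output(command):
    """Stage 3: device output -- act on the command (here, just report it)."""
    return f"device executes: {command}"

epoch = acquire_signal()
command = translate_feature(extract_feature(epoch))
print(device_output(command))
```

In a real system the feature extractor and translator would be replaced by trained signal-processing and classification routines, but the acquisition-extraction-translation-output flow is the same.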

The brain signal that our lab is most interested in using for BCI control is the P300 event-related potential (hence the term P300-BCI). It was initially discovered by Sam Sutton and colleagues (1965) and was first used as the basis for a BCI by Larry Farwell and Emanuel Donchin (1988). Since 1988, many scientific papers have been published on the use of the P300 as a BCI. As you can see, in terms of scientific discovery BCI research is still in its infancy, and we expect many advances that will help profoundly disabled people in the future.

The standard 6×6 P300 speller matrix:

An example of a 6×6 P300 speller matrix configured for a calibration exercise. At the top, the word “DOG” is presented. The letter in parentheses (D) is the current target letter. As rows and columns flash successively, the user is asked to count how many times the letter ‘D’ (the target) flashes. This results in a P300 response being generated each time the row or column containing the target flashes. The twelve-flash series is repeated a predetermined number of times. The responses for each row and column are averaged, and a classifier is applied to determine how closely each averaged response resembles the P300. The intersection of the row and column with the highest classification values is selected. In this case, the row and column containing the target letter ‘D’ would be selected, and a “D” would be presented as feedback to the user on the line below the presented word “DOG” at the top of the matrix.
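The averaging-and-selection step described above can be sketched as follows. This is a hypothetical illustration, assuming the standard 6×6 matrix: the classifier scores are invented for the example, whereas a real system would compute them from the averaged EEG responses with a trained classifier.

```python
# Hypothetical sketch of P300 speller selection. Scores are made up;
# a real system derives them from averaged EEG responses and a classifier.

MATRIX = [
    "ABCDEF",
    "GHIJKL",
    "MNOPQR",
    "STUVWX",
    "YZ1234",
    "56789_",
]

def average_responses(repetitions):
    """Average scores across the repeated twelve-flash series."""
    n = len(repetitions)
    return [sum(rep[i] for rep in repetitions) / n
            for i in range(len(repetitions[0]))]

def select_letter(row_scores, col_scores):
    """Select the intersection of the best-scoring row and column."""
    best_row = max(range(len(row_scores)), key=lambda r: row_scores[r])
    best_col = max(range(len(col_scores)), key=lambda c: col_scores[c])
    return MATRIX[best_row][best_col]

# 'D' sits in row 0, column 3, so those get the highest (made-up) scores.
row_scores = [2.1, 0.3, 0.1, 0.4, 0.2, 0.0]
col_scores = [0.2, 0.1, 0.5, 1.9, 0.3, 0.4]
print(select_letter(row_scores, col_scores))  # prints D
```

Because the target’s row and column each elicit a P300, their averaged responses score highest, and their intersection recovers the intended letter.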

An example of a person severely disabled by ALS using a BCI in his home:

The right panel shows a person using a BCI in his home. He is wearing an eight-channel electrode cap, which is used to record the EEG signals that control the BCI. The left panel shows a close-up of his computer screen. The P300 speller is located on the right of the screen, a text editor at the top left, and a predictive speller program at the bottom left. The predictive speller works by examining the letters chosen so far for the current word and presenting numbered shortcuts for the most common words beginning with those letters. For example, selecting “B”, “R”, and then the shortcut “2” might type out “BRAIN”. This setup increases the efficiency of the user’s communication.
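The prefix-completion idea behind the predictive speller can be sketched as below. This is a hypothetical illustration: the word list, frequencies, and the mapping of shortcut numbers onto suggestions are all assumptions for the example, not the lab’s actual word-prediction software.

```python
# Hypothetical sketch of the predictive speller: given the letters typed so
# far, offer shortcuts for the most common words with that prefix.
# Word list and frequency counts are made up for illustration.

WORD_FREQUENCIES = {
    "BRAIN": 120, "BRING": 90, "BREAD": 40, "BRAVE": 25, "DOG": 300,
}

def suggest(prefix, n=3):
    """Return up to n of the most frequent words starting with prefix."""
    matches = [w for w in WORD_FREQUENCIES if w.startswith(prefix)]
    matches.sort(key=lambda w: WORD_FREQUENCIES[w], reverse=True)
    return matches[:n]

# After typing "B-R", the user picks a numbered shortcut from this list
# (how shortcut numbers map onto suggestions is an assumption here).
print(suggest("BR"))  # prints ['BRAIN', 'BRING', 'BREAD']
```

Selecting a whole word with one shortcut instead of spelling it letter by letter reduces the number of P300 selections needed, which is the efficiency gain the caption describes.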

