Multiple brain-computer interface (BCI) devices now allow users to do everything from controlling computer cursors to translating neural activity into words to converting handwriting into text. One of the latest BCI examples accomplishes very similar tasks, but it does so without the need for time-consuming, personalized calibration or high-stakes neurosurgery.
As recently detailed in a study published in PNAS Nexus, University of Texas at Austin researchers have developed a wearable cap that lets a user accomplish complex computer tasks by translating brain activity into actionable commands. But instead of tailoring each device to a specific user’s neural activity, an accompanying machine learning program offers a new, “one-size-fits-all” approach that dramatically reduces training time.
“Training a BCI subject customarily starts with an offline calibration session to collect data to build an individual decoder,” the team explains in their paper’s abstract. “Apart from being time-consuming, this initial decoder might be inefficient as subjects do not receive feedback that helps them to elicit proper [sensorimotor rhythms] during calibration.”
To solve this, the researchers developed a new machine learning program that identifies an individual’s specific needs and adjusts its repetition-based training accordingly. Because the decoder calibrates itself and carries over between users, trainees don’t need the research team’s guidance or a complex medical procedure to install an implant.
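The paper’s code isn’t reproduced in this article, but the general idea of a decoder that is pretrained on data pooled from previous users and then refined from a new wearer’s feedback trials can be sketched roughly. Everything below, from the feature shapes to the classifier choice, is an illustrative assumption rather than the team’s actual method.

```python
# Hypothetical sketch of the "pretrain once, adapt per user" idea behind a
# self-calibrating BCI decoder. Feature dimensions, labels, and the classifier
# are assumptions for illustration, not details from the PNAS Nexus study.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Pooled feature vectors from previous users (e.g., band-power features per
# electrode) and their intended commands (0 = "left" imagery, 1 = "right").
X_pool = rng.normal(size=(2000, 32))      # 2000 trials x 32 features
y_pool = rng.integers(0, 2, size=2000)

scaler = StandardScaler().fit(X_pool)
decoder = SGDClassifier(loss="log_loss")  # simple linear classifier
decoder.partial_fit(scaler.transform(X_pool), y_pool, classes=[0, 1])

def adapt_on_trial(features, intended_label):
    """Refine the shared decoder with one labeled feedback trial from the
    current wearer, instead of running a separate offline calibration."""
    x = scaler.transform(features.reshape(1, -1))
    decoder.partial_fit(x, [intended_label])
    return decoder.predict(x)[0]

# Example: one simulated trial of right-hand imagery from a new user.
trial = rng.normal(size=32)
print(adapt_on_trial(trial, intended_label=1))
```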
“When we think about this in a clinical setting, this technology will make it so we won’t need a specialized team to do this calibration process, which is long and tedious,” Satyam Kumar, a graduate student involved in the project, said in a recent statement. “It will be much faster to move from patient to patient.”
To prepare, all a user needs to do is don the bright red, electrode-dotted device, which resembles a swimmer’s cap. From there, the electrodes gather and transmit neural activity to the research team’s newly created decoding software during training. Thanks to the program’s machine learning capabilities, the developers avoided the time-intensive, personalized calibration usually required for other BCI tech to adjust to each individual user.
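For readers curious what decoding the cap’s signals might involve in practice, here is a minimal sketch of extracting sensorimotor-rhythm (mu-band) power from one EEG trial, the kind of feature a decoder like this could consume. The sampling rate, band edges, and channel count are assumptions, not details from the study.

```python
# Minimal, hypothetical sketch of turning raw electrode signals into
# sensorimotor-rhythm features. All constants below are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250            # assumed EEG sampling rate in Hz
MU_BAND = (8, 13)   # mu rhythm band over sensorimotor cortex, in Hz

def band_power_features(epoch, fs=FS, band=MU_BAND):
    """epoch: array of shape (n_channels, n_samples) for one trial.
    Returns log band power per channel in the chosen frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epoch, axis=-1)  # zero-phase band-pass filter
    power = np.mean(filtered ** 2, axis=-1)    # mean power per channel
    return np.log(power + 1e-12)               # log scale stabilizes variance

# Example: one simulated 2-second epoch from a 32-electrode cap.
epoch = np.random.default_rng(1).normal(size=(32, 2 * FS))
print(band_power_features(epoch).shape)        # (32,)
```

Features like these would then be fed to a classifier such as the one sketched above, which maps them to commands like steering a cursor or a game.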
Over a five-day period, 18 test subjects used the new training method to learn to play both a car racing game and a simpler bar-balancing program through mental imagery alone. The decoder was so effective, in fact, that wearers could train on both the bar and racing games simultaneously, instead of one at a time. At the annual South by Southwest Conference last month, the UT Austin team took things a step further. During a demonstration, volunteers put on the wearable BCI, then learned to control a pair of hand and arm rehabilitation robots within just a few minutes.
So far, the team has only tested their BCI cap on subjects without motor impairments, but they plan to expand their decoder’s abilities to encompass users with disabilities.
“On the one hand, we want to translate the BCI to the clinical realm to help people with disabilities,” said José del R. Millán, study co-author and UT professor of electrical and computer engineering. “On the other, we need to improve our technology to make it easier to use so that the impact for these people with disabilities is stronger.” Millán’s team is also working to incorporate similar BCI technology into a wheelchair.