Stanford University researchers have published a paper claiming a record for thought-to-text communication via a brain-computer interface (BCI), with a subject able to “speak” at a rate of 62 words per minute – more than three times faster than rival approaches.

In a preprint, which has not yet been peer-reviewed and which was brought to our attention by MIT Technology Review, the team explains the inner workings of its new “neuroprosthesis” – a BCI that uses arrays of intracortical microelectrodes to capture high-resolution recordings of the brain activity associated with its user’s speech.

A new kind of brain-computer interface has been shown to vastly outperform the competition for speech recognition. (📹: Willett et al.)

To prove the concept, the team recruited a single study participant – an unidentified member of the public with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, whose illness led to an inability to generate intelligible speech. When fitted with the BCI neuroprosthesis, the subject was able to think words and have them decoded at a rate of 62 words per minute – more than three times faster than state-of-the-art BCI speech systems.

Speed isn’t much good without accuracy, of course, but here the team also claims a major milestone: on a limited vocabulary of 50 words, the system showed a word error rate of 9.1%, nearly three times fewer errors than its rivals; expanding to a 125,000-word vocabulary increased the error rate to 23.8%, but the system remained usable.
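Word error rate, the metric behind those 9.1% and 23.8% figures, is conventionally computed as the word-level edit distance between the decoded sentence and the reference, divided by the reference length. A minimal illustrative sketch in Python (not the authors’ evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", "the quack brown fox"))  # 0.25
```

One substituted word out of four gives a 25% error rate, so a 9.1% WER on a 50-word vocabulary means fewer than one in ten decoded words is wrong.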

However, the sensor system doesn’t actually detect speech-related thoughts; instead, it focuses on movement, building on previous work that used the same system to control robotic arms or a keyboard on a screen. The subject simply tries to speak, and the implanted sensors record brain activity associated with speech-related mouth and facial movements for decoding by a specially trained Recurrent Neural Network (RNN) – even if the user’s mouth is unable to actually move.
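In broad strokes, decoders of this kind bin spike activity from the electrode arrays into short time steps and feed the sequence to an RNN that emits per-step probabilities over speech units. A minimal NumPy sketch of that forward pass, with hypothetical dimensions and randomly initialized weights standing in for a trained decoder (the paper’s actual architecture and training procedure differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 128 electrode channels, 40 speech-unit classes.
N_CHANNELS, N_HIDDEN, N_CLASSES = 128, 64, 40

# Random weights stand in for a trained decoder.
W_in = rng.normal(0, 0.1, (N_HIDDEN, N_CHANNELS))
W_rec = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))
W_out = rng.normal(0, 0.1, (N_CLASSES, N_HIDDEN))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode(binned_spikes):
    """Run a vanilla RNN over binned neural activity.

    binned_spikes: (T, N_CHANNELS) array of spike counts per time bin.
    Returns a (T, N_CLASSES) array of per-step class probabilities.
    """
    h = np.zeros(N_HIDDEN)
    probs = []
    for x_t in binned_spikes:
        h = np.tanh(W_in @ x_t + W_rec @ h)   # recurrent state update
        probs.append(softmax(W_out @ h))      # per-step speech-unit probabilities
    return np.array(probs)

# 50 time bins of simulated spike counts (roughly a second of activity).
activity = rng.poisson(2.0, (50, N_CHANNELS))
p = decode(activity)
print(p.shape)  # (50, 40)
```

In a real system, a language model would then map these per-step probabilities onto the most likely word sequence, which is how a 125,000-word vocabulary remains tractable.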

“Our demonstration is a proof of concept that decoding attempted speech movements from intracortical recordings is a promising approach,” the researchers admit, “but it is not yet a complete and clinically viable system. Work remains to be done to reduce the time it takes to train the decoder and to adapt to changes in neural activity that occur over days without requiring the user to pause and recalibrate the BCI. A word error rate of 24% is probably still not low enough for everyday use.”

The preprint is now freely available on Cold Spring Harbor Laboratory’s bioRxiv server.
