A 67-year-old volunteer in a Stanford University experiment received a brain implant, after which she was able to produce simple words at a rate of 62 words per minute, three times the previous record.
The woman was diagnosed with ALS (amyotrophic lateral sclerosis, or Lou Gehrig's disease) and lost the ability to speak eight years ago. Since then she has been able to make only isolated sounds and communicates via a writing board or iPad.
Philip Sabes, an independent researcher at the University of California, San Francisco, described the findings as "a significant breakthrough," saying the experimental brain-reading technology may soon be ready to leave the lab.
Even with keyboards and devices offering a wide array of emojis, text remains a slow form of communication compared with speech. People without speech impairments typically speak at about 160 words per minute.
The work builds on years of research led by the late Krishna Shenoy, a Stanford researcher whose team previously set the record of 18 words per minute with a typing-based interface.
The brain-computer interfaces Shenoy's team works with are embedded in the motor cortex, the area of the brain responsible for organizing, planning, and executing voluntary movements. The connection allows researchers to record the activity of several dozen neurons simultaneously and to observe patterns that remain consistent even when a person is paralyzed.
The implants enable a paralyzed person to manipulate a cursor on a screen, select letters on a virtual keyboard, play video games, or even operate a robotic arm by "decoding" neural signals in real time.
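At its core, the decoding described above is a classification problem: map a vector of neural firing rates to an intended action. The following is a minimal illustrative sketch, not the researchers' actual method; the firing-rate templates, neuron count, and nearest-centroid rule are all invented for demonstration.

```python
# Toy sketch of "decoding" neural signals: map a vector of firing rates
# (spikes/sec) to the intended action. All numbers here are synthetic;
# a real BCI records dozens of neurons and uses far richer models.
import math

# Hypothetical per-action average firing rates for 3 simulated neurons,
# as if learned during a calibration session.
TEMPLATES = {
    "move_left":  [40.0, 5.0, 12.0],
    "move_right": [6.0, 38.0, 11.0],
    "click":      [10.0, 9.0, 45.0],
}

def decode(firing_rates):
    """Nearest-centroid decoder: choose the action whose template is
    closest (Euclidean distance) to the observed firing rates."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda action: dist(TEMPLATES[action], firing_rates))

print(decode([38.0, 7.0, 10.0]))  # closest to the "move_left" template
```

Real systems replace the hand-made templates with statistical models trained on recorded activity, and run the decoding step continuously in real time.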
The Stanford researchers wanted to know whether motor cortex neurons contain useful information about speech movements. That is, could they observe how a patient tried to move her mouth and vocal cords when she attempted to speak?
Sabes says that even a few neurons carry enough information to let a computer program accurately predict what a patient is attempting to say.
The new work builds on previous research by Edward Chang of the University of California, San Francisco, who has shown that speech involves some of the most complex movements humans perform. We place our upper teeth on our lower lip and expel air; that is just one of the dozens of mouth movements required to speak.
Chang previously used electrodes placed on the surface of the brain to allow patient volunteers to communicate via computers, but the Stanford researchers claim that their method is more accurate and up to four times faster.
David Moses, who works with Chang's team, believes the current work has produced "amazing new performance figures." But even as records keep being broken, he says, "it's important to demonstrate consistent and reliable performance over multi-year timescales." Accuracy and reliability will be critical for any commercial brain implant to win regulatory approval.
The development of more powerful implants and, perhaps, their closer integration with artificial intelligence are on the way. To increase accuracy, the researchers used software that predicts which word is most likely to come next in a sentence (for example, "I" is far more often followed by "am" than "ham," even though the two words sound similar and can trigger similar patterns of brain activity).
GPT-3, a large language model, is capable of writing entire essays and answering questions. Connecting such models to brain interfaces might allow individuals to speak faster, simply because the system could guess what they are attempting to say from partial information.
Shenoy's group is part of BrainGate, a research consortium that has implanted electrodes in the brains of more than a dozen individuals. The implant is a small metal square studded with about 100 needle-like electrodes.
Neuralink, Elon Musk's brain-interface company, and a startup called Paradromics claim to have developed more advanced methods that can read data from tens of thousands of neurons at the same time.
A new paper suggests that recording from many neurons at once makes a difference, especially when the aim is to decode complex movements such as speech. The more neurons read simultaneously, the fewer mistakes the system makes in working out what the patient is trying to say.
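The intuition that more neurons means fewer errors can be shown with a crude simulation: treat each neuron as a noisy "vote" for the intended word and take the majority. This is an illustrative assumption, not the paper's model; the probabilities and trial counts are made up.

```python
# Crude simulation: each simulated neuron casts a noisy vote that is
# correct with probability p_correct. Averaging more votes (a majority
# decision) lowers the overall error rate. Numbers are synthetic.
import random

def error_rate(n_neurons, trials=2000, p_correct=0.6, seed=0):
    """Fraction of trials in which the majority vote of n_neurons is
    wrong (ties count as errors)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p_correct for _ in range(n_neurons))
        if correct_votes * 2 <= n_neurons:
            errors += 1
    return errors / trials

for n in (1, 11, 101):
    print(n, error_rate(n))  # error rate drops as n grows
```

Even with each individual neuron only slightly better than chance, pooling over a hundred of them drives the error rate down sharply, which is the qualitative point the paper makes.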