Speech synthesizer plugs directly into the brain

Recordings made from the surface of the brain are giving scientists unprecedented insight into how the brain controls speech, and into how paralyzed people might one day speak again.

Could a paralyzed person who is unable to speak, such as the physicist Stephen Hawking, use a brain implant to hold a conversation?

That is now the central goal of ongoing research at US universities, which over more than five years has shown that recording devices placed under the human skull can pick up the brain activity associated with speaking.



Although the results are preliminary, Edward Chang, a neurosurgeon at the University of California, San Francisco, says he is working on a wireless brain-computer interface that could translate brain signals directly into audible speech using a voice synthesizer.

The work on a speech prosthesis builds on the success of earlier experiments in which paralyzed volunteers used brain implants to manipulate robotic limbs with their thoughts (see “Thought Experiment”). That technology works because scientists can roughly interpret the firing of neurons in the motor cortex and map it onto intended movements of the arms or legs.
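The article gives no technical detail on those limb-control decoders, but the underlying idea, mapping patterns of motor-cortex firing onto intended movement, can be illustrated with a minimal sketch. Everything below (the channel count, the simulated spike counts, and the use of ordinary least-squares regression) is an assumption for illustration, not the decoder actually used in those trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a 96-channel implant, spike counts binned per time step (simulated here).
n_samples, n_neurons = 2000, 96
rates = rng.poisson(5, size=(n_samples, n_neurons)).astype(float)
true_map = rng.normal(size=(n_neurons, 2))                                # hidden "tuning"
velocity = rates @ true_map + rng.normal(scale=2.0, size=(n_samples, 2))  # intended x/y velocity

# Fit a linear decoder by least squares: velocity ≈ rates @ W.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new firing-rate vector into an intended movement.
new_rates = rng.poisson(5, size=(1, n_neurons)).astype(float)
print("decoded velocity (x, y):", (new_rates @ W).round(2))
```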

Now Chang’s team is trying to do the same for speech. The task is much harder, partly because full human language is unique to our species, so the technology cannot easily be tested on animals.

At his university, Chang conducts speech experiments alongside the brain surgery he performs on epilepsy patients. A sheet of electrodes placed under the skull records electrical activity from the surface of the brain. Patients wear this device, known as an electrocorticography array, for several days so that doctors can pinpoint the source of their seizures.


Chang studies brain activity in these patients while they talk. In an article in the journal Nature last year, he and his colleagues described how they used the electrode array to map electrical activity in a brain region called the ventral sensorimotor cortex as patients pronounced simple syllable-like sounds such as “bah” and “goo.”

The idea is to record electrical activity in the part of the motor cortex that drives the lips, tongue, and vocal cords when a person talks. Using mathematical analysis, Chang’s team showed that these data can be used to distinguish “many key phonetic features.”
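Chang’s paper does not come with code, but the kind of decoding described here can be sketched as a standard classification problem: feature vectors derived from the electrode array (for example, one activity measure per electrode in a short time window) are mapped to phonetic labels. The data shapes, the random stand-in signals, and the choice of a logistic-regression classifier below are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Assumed setup: 500 spoken-syllable trials, each summarized by one feature per electrode.
n_trials, n_electrodes = 500, 64
X = rng.normal(size=(n_trials, n_electrodes))  # stand-in for real cortical recordings
y = rng.integers(0, 4, size=n_trials)          # stand-in labels, e.g. four syllable classes

# A simple linear decoder: z-score each electrode, then multinomial logistic regression.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated accuracy estimates how well the classes separate in the signals;
# with random stand-in data it will hover near chance (25% for four classes).
scores = cross_val_score(decoder, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```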

One of the worst consequences of diseases such as amyotrophic lateral sclerosis (ALS) is that, as paralysis spreads, people lose not only the ability to move but also the ability to speak. Some ALS patients use devices that exploit whatever residual movement they retain to communicate. Hawking, for instance, uses software that lets him produce words very slowly, syllable by syllable, by contracting his cheek muscles. Other patients use eye-tracking devices to control a computer mouse.


The idea of using a brain-computer interface to produce something close to spoken language was proposed even earlier: one company has been testing, since the 1980s, technology that uses a single electrode to record directly inside the brain of people with locked-in syndrome. In 2009, the company described work on decoding the speech of a 25-year-old paralyzed man who could neither move nor speak.

In another study published this year, Mark Slutsky of Northwestern University attempted to decode signals from the motor cortex while patients read aloud words containing all 39 phonemes of English (the consonant and vowel sounds that make up speech). The team identified phonemes with an average accuracy of 36 percent. The study used the same type of surface electrodes as Chang’s work.
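For context, chance performance on a 39-way phoneme classification task is about 1/39, or roughly 2.6 percent, so 36 percent is well above chance. A small sketch of how such accuracy could be scored, using simulated labels rather than the study’s actual data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_phonemes, n_trials = 39, 1000

# Simulated ground-truth phoneme labels and a decoder that is right about 36% of the time.
true = rng.integers(0, n_phonemes, size=n_trials)
hit = rng.random(n_trials) < 0.36
pred = np.where(hit, true, rng.integers(0, n_phonemes, size=n_trials))

accuracy = (pred == true).mean()
chance = 1 / n_phonemes
print(f"decoded accuracy: {accuracy:.1%}  (chance level: {chance:.1%})")
```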

Slutsky says that while this accuracy may seem quite low, it was achieved with a relatively small set of words spoken over a limited amount of time. “We expect much better decoding results in the future,” he says. A speech-recognition-style system could also help infer which words people are trying to say, the scientists note.
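The article does not say how such a speech-recognition layer would work, but one simple version of the idea can be sketched: compare the decoded phoneme sequence against the phoneme sequences of words in a lexicon and pick the closest match. The tiny lexicon, the ARPAbet-style phoneme symbols, and the edit-distance scoring below are all illustrative assumptions, not part of either group’s published system.

```python
def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete a phoneme
                                     dp[j - 1] + 1,      # insert a phoneme
                                     prev + (pa != pb))  # substitute a phoneme
    return dp[-1]

# Toy lexicon mapping words to phoneme sequences (illustrative, ARPAbet-style).
LEXICON = {
    "water": ["W", "AO", "T", "ER"],
    "hello": ["HH", "AH", "L", "OW"],
    "help":  ["HH", "EH", "L", "P"],
}

def closest_word(decoded):
    """Return the lexicon word whose phonemes best match the decoded sequence."""
    return min(LEXICON, key=lambda word: edit_distance(decoded, LEXICON[word]))

# Even with imperfect phoneme decoding, word-level constraints can recover the intent:
print(closest_word(["HH", "EH", "L", "B"]))  # -> "help" despite one wrong phoneme
```

A full system would presumably combine phoneme probabilities with a statistical language model rather than a hard nearest-word lookup, but the constraint it exploits is the same.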

