AlterEgo: a device that can read (some) thoughts

Original author: Rachel Slade
  • Translation
At the beginning of April, Arnav Kapur, a twenty-four-year-old researcher at the Massachusetts Institute of Technology, posted a short video on YouTube. It shows him walking around campus, moving from one spot to another, with a white plastic device fixed to the right side of his face.

First he passes a row of bicycles parked beside melting snowdrifts; his lips stay closed, yet his unvoiced thoughts appear on a screen. A caption pops up: "Time?" A male voice replies: "Ten thirty-five." In the next scene, Kapur is shopping at a local store. The price of each item he drops into his basket (toilet paper, an Italian sandwich, canned peaches) appears on the screen. "Your total is $10.07," the voice responds. In the final scene, Kapur moves a cursor across a screen, by all appearances, through thought alone.

Kapur came from New Delhi to MIT's Media Lab to build wearable devices that weave technology seamlessly into daily life, so that we no longer have to reach for a phone, stare at a screen, walk with our eyes down, or drop out of reality just to get something done.

It may sound implausible, but AlterEgo, the device Kapur has been developing for the past two years, which works silently, without voice commands or headphones, now reads his thoughts well enough that he can order an Uber without saying a word.

The current version of the device (Kapur built it with his brother Shreya, a student at the same institute, several colleagues from the Fluid Interfaces group, and his advisor, Professor Pattie Maes) is a 3D-printed frame fitted with electromagnetic sensors. It sits snugly along the jaw on one side of the face and connects via Bluetooth to what Maes calls our "computer brain": the colossal network of information we reach for up to 80 times a day through our smartphones.

The invention can be called revolutionary because it requires no implants and processes non-verbal communication signals with a remarkably high degree of accuracy. Kapur promises that future versions will also be nearly invisible to others.


A few months after the video was published, Kapur gave Medium an interview in the small fifth-floor office in the Media Lab building that he shares with other researchers. He is clean-shaven, neatly dressed, and thin as a student; his gaze seems by turns sleepy and piercingly intent, which makes an impression. Amid the clutter of books and parts in the office sits a pink ukulele that, he insists, is not his.

Kapur is naturally verbose, but since his invention began attracting press attention, he has clearly been honing his narrative. "Artificial intelligence is my passion," he says. "I believe the future of humanity rests on collaboration with computers."

Since smartphones entered the market, two and a half billion people have learned to turn to the computer brain whenever they need to get somewhere, cook something, contact someone, or recall the capital of Missouri. Technological cognitive augmentation has become an integral part of our lives. There is the organic brain, and there is the computer. According to Kapur, they already work in tandem, just not as efficiently as they could.

Yet modern devices are designed in a way that distracts us more than it helps. To find anything in the limitless world always at hand, we must give the process our full attention. Screens demand eye contact; working with a phone means wearing headphones. Devices pull us out of physical reality and into their own.

Kapur wants to perfect a device that lets people interact with artificial intelligence as intuitively as the right hemisphere interacts with the left, so that we can fold the capabilities of the Internet into our thought process at every level. "This is what our lives will look like in the future," he says.

An Early Design Version

While working out AlterEgo's design concept, Kapur followed several principles. The device must not require implanting anything in the body: in his view, that is both inconvenient and impossible to scale. Interaction with it should feel natural and be imperceptible to others, which means the device must read non-verbal signals. Keenly aware of how easily such technology could be put to unseemly ends, he also wanted user control built into the design itself: only deliberately issued signals, not unconscious ones, should be captured. In other words, the device should read your thoughts only when you yourself want to share them.

Other pioneers in the field have already built interfaces for human-computer communication, but every one has had limitations. To talk to Siri or Alexa, you must address the machine aloud, which feels unnatural and offers no privacy. Adoption of such technology is also held back by a nagging fear that with these devices you can never be sure who is listening to you, or what they are hearing.

Kapur needed a way out of this bind. What if the computer could learn to read our minds?


As a researcher who has "tried himself in different disciplines" (he once attempted to write a short bio for a website and failed; he did not want to box himself into a single specialty), Kapur came to see the human body not as a set of constraints but as a conductor. He saw it this way: the brain is the power source for a complex electrical neural network that controls our thoughts and movements. When the brain wants us to move a finger, say, it sends an electrical impulse down the arm to the right spot, and the muscles respond. Sensors can pick up these electrical signals; the only question is where and how to tap into the process.

Kapur knew that when we read silently, our internal articulatory muscles move, unconsciously reproducing the words we see. "When you speak aloud, the brain sends impulse instructions to more than a hundred muscles in the speech apparatus," he explains. Internal vocalization, which is what we do when reading to ourselves, is the same process, only far weaker: the neural signals reach only the internal muscles of the speech apparatus. The habit forms when we first learn to read, sounding out letters and then words aloud. It can get in the way later; speed-reading courses often devote special attention to breaking people of the habit of pronouncing words in their heads as their eyes run over the text.

These neural signals, first recorded in the mid-19th century, are the only physical expression of intellectual activity known to us today.

Kapur wondered whether detectors could pick up the physical traces of the internal monologue, microscopic electrical discharges originating in the brain, through the skin of the face, even though the muscles involved lie much deeper, in the mouth and throat, and are never fully engaged.

Identifying the contact points

In its prototype form, AlterEgo was a frame that pressed 30 sensors against the subject's face and jaw so they could read neuromuscular activity while the subject silently repeated test messages. The team wrote special software to analyze the signals and translate them into specific words.
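The pipeline described here, summarizing each window of sensor readings into a few features and mapping it to the closest known word, can be sketched roughly as follows. This is a minimal illustration, not the team's actual software: the feature choice (mean absolute value and RMS amplitude, two common surface-EMG features) and the nearest-centroid classifier are assumptions made for the sake of the example.

```python
import math
from statistics import mean

def features(window):
    """Summarize one window of raw sensor samples into a feature vector.

    Mean absolute value and root-mean-square amplitude are classic
    surface-EMG features; the article does not say which features
    the team actually used.
    """
    mav = mean(abs(s) for s in window)
    rms = math.sqrt(mean(s * s for s in window))
    return (mav, rms)

def train_centroids(labelled_windows):
    """Average the feature vectors of each word's training windows."""
    sums, counts = {}, {}
    for word, window in labelled_windows:
        f = features(window)
        acc = sums.setdefault(word, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[word] = counts.get(word, 0) + 1
    return {w: tuple(v / counts[w] for v in acc) for w, acc in sums.items()}

def classify(window, centroids):
    """Return the vocabulary word whose centroid is nearest in feature space."""
    f = features(window)
    return min(
        centroids,
        key=lambda w: sum((a - b) ** 2 for a, b in zip(f, centroids[w])),
    )
```

In practice a real system would train on many repetitions per word and use a far richer model, but the shape of the problem, labelled signal windows in, word labels out, is the same.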

There was one problem: at first, the AlterEgo sensors picked up nothing at all.

Having written the software and assembled the device, Kapur hoped for the best, but the myoelectric signals generated by internal speech turned out to be extremely weak. It would have been easy to abandon the idea at that point. "But we wanted to capture the interaction as close to the stage of pure thought as possible," Kapur explains. He moved the sensors to different parts of the face, made them more sensitive, reconfigured the software, all to no avail.

One evening, the brothers tested the device in their Cambridge apartment. Kapur wore it while Shreya watched the readout on a computer screen. They had set the device to stream its signals in real time, so that Shreya could pinpoint the exact moment anything registered, if anything registered at all.

Night fell. Kapur had been "talking" silently to the device for about two hours. So far it had been programmed to interpret only two words, "yes" and "no," and nothing significant had come of it. But then Shreya thought he saw something. Something flickered on the screen.

"We couldn't believe our eyes," says Kapur. He turned his back to his brother and repeated the procedure. "The spike in the signal kept repeating, but we thought it was just a fault in the wiring. We were sure it was all interference in the system." Had they really seen something worthwhile? After an hour of relentless testing, Kapur was convinced: contact had been established.
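The kind of watching the brothers did, scanning a live stream of samples for a jump well above the recent baseline, can be sketched as a simple threshold detector. This is purely illustrative; the window size and threshold are invented, and the article does not describe the team's real-time software.

```python
from collections import deque
from statistics import mean, stdev

def spike_detector(stream, window=50, k=4.0):
    """Yield the indices of samples that jump far above the recent baseline.

    A sample counts as a spike when it deviates from the mean of the
    last `window` samples by more than `k` standard deviations.
    (Hypothetical parameters, chosen only for illustration.)
    """
    recent = deque(maxlen=window)
    for i, sample in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if abs(sample - mu) > k * max(sigma, 1e-9):
                yield i
        recent.append(sample)
```

Fed a flat signal with a single burst in the middle, a detector like this flags only the burst, which is exactly the "jump in the signal" behavior described above.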

"We almost went crazy," he says. The next day they celebrated with pizza.


It took Kapur and his colleagues two years to build AlterEgo's hardware and software. The device was designed to be worn without discomfort; the team refined the sensors and revised the contact points to make the shell compact and unobtrusive. Kapur rejected headphones, which in his view disrupt the normal flow of life; instead he built an acoustic system based on bone conduction. The device whispers answers to queries, like a somewhat overzealous guardian angel.

Once the device began recognizing myoelectric impulses, Kapur focused on collecting a body of data from which AlterEgo could be trained to match characteristic signals to particular words. It was a laborious process: he had to sit in the lab for hours with the device on his face, silently repeating the necessary words until the computer mastered them. AlterEgo currently has a vocabulary of 100 words, including the numbers from 1 to 9 and commands such as "add," "subtract," "answer," and "call."
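To give a sense of how such a small command vocabulary might be put to work once words are recognized, here is a toy interpreter for the arithmetic part of it ("add," "subtract," plus digits). The grammar is invented for illustration; the article does not describe AlterEgo's actual command handling.

```python
def interpret(tokens):
    """Evaluate a recognized word sequence such as ['3', 'add', '5'].

    Assumes a hypothetical left-to-right grammar: a starting number
    followed by alternating operation words and numbers.
    """
    ops = {
        "add": lambda a, b: a + b,
        "subtract": lambda a, b: a - b,
    }
    result = int(tokens[0])
    i = 1
    while i < len(tokens):
        op, operand = tokens[i], int(tokens[i + 1])
        result = ops[op](result, operand)
        i += 2
    return result
```

So a silently "spoken" sequence like 3, add, 5 would yield 8, which matches the kind of running total the device computes in the shopping scene of the video.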

From the YouTube video it looked as though the device were reading Kapur's thoughts, and a predictable panic followed. "Honestly, it's very scary that someone can now access what we think," one worried commenter wrote under an article about the technology. "With this technology, the thought police could become a reality."

Kapur and Maes, an expert in AI, are highly sensitive to such ethical questions. Kapur believes that, as the technology's creator, he can prevent its use for immoral purposes by building safeguards directly into the concept. He stresses that AlterEgo cannot literally read thoughts and never will. He deliberately designed a system that responds only to intentionally issued signals, that is, to voluntary communication. To interact with the computer brain, you yourself must want to convey the information. That is the difference between AlterEgo and, say, Google Glass. The device also has no camera, because Kapur wants his wearables to hold only the data you actively give them.

"Artificial intelligence in itself does no one any harm, but we should not hush up the fact that the technology can be turned to evil," Kapur says. "So we try to make sure our devices embody the principles we hold. That is why we developed AlterEgo from scratch ourselves: we had a clear idea of what it should be, and we wanted people to use it the way it was intended."

Kapur, who has worked on several projects with Harvard Medical School, is primarily seeking to make life easier for people with health problems. People with Alzheimer's, for example, could wear the device to compensate for memory loss. And thanks to its ability to read neural microsignals, it could help people with physical impairments interact with the outside world: the deaf and mute, stroke survivors, and people with ALS (Charcot's disease) or autism.

To bring AlterEgo to a genuinely working state, Kapur must train it at length, expanding its vocabulary far beyond a hundred words. He will also need to gather enough data to ensure the device works on any head and with any internal monologue. Still, he is convinced the technology is good enough that sooner or later it will learn to synthesize information and infer the meaning of new words from context.


In the gleaming modern offices of the Media Lab, it is very easy to be seduced by the picture of a radiant, cloudless future in which we use two brains at once: the one we were born with and the computer to which we have voluntarily tethered ourselves.

Maes offers a host of hypothetical examples of how a perfectly integrated AI system could change our lives, if only programs were built to expand our capabilities rather than merely entertain us. Such technologies, she says, could fulfill many of our dreams. (She is rightly regarded as an IT mentor with a utopian bent; that attitude, among other things, draws ambitious students like Kapur to MIT.) AlterEgo could teach us foreign languages by describing the surrounding world in them in real time, or smooth over awkward moments by whispering the names and basic details of the people we greet.

Then Maes, as if on cue, veers sharply away from Kapur's concept of a pure fusion of minds. Given channels for collecting physiological data (pulse, sweating, body temperature), the device could predict our behavior and quietly nudge us toward actions that serve our goals. It could notice us dozing off at work and emit an invigorating scent of mint. It could correct our behavior, meeting a reach for a third cupcake with the smell of rotten eggs. It could sense that we are nervous and offer words of encouragement inaudible to others. This development path differs markedly from the one Maes's student proposes: it is more about shaping desired behavior, and it offers more opportunities for monetization. Maes seems to be leading to…

It is easy to imagine Kapur's invention turning, within a few years, into an idea worth billions, and to imagine the consequences for the defense industry and for tech giants like Facebook and Amazon. Less obvious is another question: whose intellectual property is AlterEgo? Kapur answers evasively. If he decided to leave the institute, he says, he could take all his work with him, but he has no such plans at the moment. He intends to stay in academia and refine an invention that he believes can benefit humanity, rather than simply sell it to the highest bidder. It is his brainchild, and he wants to see it through to the end.

But what if someone copied his technical solutions, assembled their own version of the device, and built another unicorn startup without him? "I don't know what to say to that," Kapur replies with an impenetrable expression, shrugging.
