Facebook is giving up on brain-typing as an AR glasses interface

New research could still help with speech impairment

Facebook’s prototype brain-computer interface headset.
Image: Facebook Reality Labs

A Facebook-backed initiative aiming to let people type by thinking has concluded with new findings published today.

Project Steno was a multi-year collaboration between Facebook and the University of California San Francisco’s Chang Lab, aiming to create a system that translates brain activity into words. A new research paper, published in The New England Journal of Medicine, shows potential for implementing the technology for people with speech impairments.

But alongside the research, Facebook made clear it is backing off the idea of a commercial head-mounted brain-reading device, and building out wrist-worn interfaces instead. The new research has no clear applicability for a mass-market tech product, and in a press release, Facebook said it is “refocusing” its priorities away from head-mounted brain-computer interfaces.

“Facebook has no interest in developing products that require implanted electrodes”

“To be clear, Facebook has no interest in developing products that require implanted electrodes,” Facebook said in a press release. Elsewhere in the release, it noted that “while we still believe in the long-term potential of head-mounted optical BCI technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market.”

The Chang Lab’s ongoing research involves using implanted brain-computer interfaces (BCIs) to restore people’s speech abilities. The new paper focuses on a participant who lost his ability to speak after a stroke more than 16 years ago. The lab fitted the man with implanted electrodes that could detect brain activity, and he spent 22 hours (spread across more than a year of sessions) training a system to recognize specific patterns. In one training task, he attempted to speak isolated words from a 50-word vocabulary set; in another, he tried to produce full sentences using those words, which included basic verbs and pronouns (like “am” and “I”) as well as specific helpful nouns (like “glasses” and “computer”) and commands (like “yes” and “no”).

The system decodes brain patterns for 50 words

This training helped create a language model that could respond when the man was thinking about saying particular words, even if he couldn’t actually speak them. Researchers fine-tuned the model to predict which of the 50 words he was thinking about, integrating a probability system for English language patterns similar to a predictive smartphone keyboard. The researchers reported that in final trials, the system could decode words at a median rate of 15.2 per minute including errors, or 12.5 per minute counting only correctly decoded words.
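The paper doesn’t spell out its decoder in code, but the general technique it describes — fusing a classifier’s per-word probabilities (derived from neural activity) with a language model’s word-sequence priors — is a standard decoding pattern. Here is a minimal Viterbi-style sketch of that idea; the five-word vocabulary and every probability value below are invented for illustration, not taken from the study.

```python
import numpy as np

# Toy stand-in for the study's 50-word vocabulary (all values invented).
VOCAB = ["I", "am", "thirsty", "yes", "no"]

def decode_sentence(emission_probs, bigram_probs, lm_weight=0.5):
    """Viterbi decoding: combine per-step classifier probabilities with
    bigram language-model priors to find the most likely word sequence.

    emission_probs: (T, V) array, P(word | neural signal) at each step
    bigram_probs:   (V, V) array, P(next word | previous word)
    """
    T, V = emission_probs.shape
    log_em = np.log(emission_probs + 1e-12)
    log_lm = lm_weight * np.log(bigram_probs + 1e-12)

    scores = log_em[0].copy()              # best score for paths ending in each word
    backptr = np.zeros((T, V), dtype=int)  # best predecessor word at each step

    for t in range(1, T):
        # candidate[i, j]: best path ending in word i, extended with word j
        candidate = scores[:, None] + log_lm + log_em[t][None, :]
        backptr[t] = candidate.argmax(axis=0)
        scores = candidate.max(axis=0)

    # Trace the highest-scoring path back from the final step.
    path = [int(scores.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return [VOCAB[i] for i in reversed(path)]

# Made-up demo: the classifier is unsure at the second step; the language
# model prior ("I" is usually followed by "am") resolves the ambiguity.
em = np.array([
    [0.70, 0.10, 0.10, 0.05, 0.05],   # clearly "I"
    [0.40, 0.45, 0.05, 0.05, 0.05],   # "I" vs. "am" is close
    [0.10, 0.10, 0.70, 0.05, 0.05],   # clearly "thirsty"
])
lm = np.full((5, 5), 0.1)
lm[0, 1] = 0.8   # P("am" | "I")
lm[1, 2] = 0.8   # P("thirsty" | "am")
print(decode_sentence(em, lm))  # ['I', 'am', 'thirsty']
```

In the toy example, the language-model prior breaks the tie at the ambiguous second step, which is the same role the smartphone-keyboard-style probability system plays in the study’s decoder.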

The Chang Lab published earlier Project Steno research in 2019 and 2020, showing that electrode arrays and predictive models can create comparatively fast and sophisticated thought-typing systems. Many previous typing options involved mentally pushing a cursor around an on-screen keyboard using a brain implant, although some other researchers have experimented with methods like visualizing handwriting. Where the lab’s earlier research involved decoding brain activity in people who were talking normally, this latest paper demonstrates that it works even when subjects don’t (and can’t) speak aloud.

The Facebook Reality Labs headset, which was not used in the study.

In a press release, UCSF neurosurgery chair Eddie Chang says the next step is to improve the system and test it with more people. “On the hardware side, we need to build systems that have higher data resolution to record more information from the brain, and more quickly. On the algorithm side, we need to have systems that can translate these very complex signals from the brain into spoken words, not text but actually oral, audible spoken words.” One major priority, Chang says, is greatly expanding the vocabulary.

Facebook will focus on wrist-mounted EMG bands

Today’s research is valuable for people who aren’t served by keyboards and other existing interfaces, since even a limited vocabulary can help them communicate more easily. But it falls far short of the ambitious goal Facebook set in 2017: a non-invasive BCI system that would let people type 100 words per minute, comparable to the upper speeds they could reach on a traditional keyboard. The latest UCSF research involves implanted tech and doesn’t come close to hitting that number — or even the speeds most people can reach on a phone keyboard. That bodes ill for the commercial prospects of a technology like the external headband that optically measures brain oxygen levels, which Facebook Reality Labs (the company’s virtual and augmented reality hardware wing) unveiled in prototype form.

Since setting that goal, Facebook acquired electromyography (EMG) wristband company CTRL-Labs in 2019, giving it an alternate control option for AR and VR. “We’re still in the early stages of unlocking the potential of wrist-based electromyography (EMG), but we believe it will be the core input for AR glasses, and applying what we’ve learned about BCI will help us get there faster,” says Facebook Reality Labs research director Sean Keller. Facebook won’t completely abandon the head-mounted brain interface system, but it’s planning to make the software open-source and share its hardware prototypes with outside researchers while winding down its own research.