Elon Musk isn't the only one who wants us to communicate via brainwaves. Facebook also has ambitious plans to interface with computers using wearables and one day let us type rapidly with our brains. Now, neuroscientists from the University of California, San Francisco (backed by Facebook's Reality Labs) have demonstrated a system that can translate speech into text in real time using brain activity alone. While impressive, it shows that the technology still has a long way to go.
Brain-computer interface systems already exist, but they require users to mentally select one letter at a time on a virtual keyboard, a process that tends to be very slow. The UCSF researchers instead used context to help the machine translate entire words and phrases.
Researchers implanted electrode arrays on the brain surfaces of volunteer epilepsy patients, in regions associated with both producing and comprehending speech.
Subjects responded out loud to multiple choice questions, like "From zero to 10, how comfortable are you?" or "How is your room currently?" Using brain electrical activity only, the system would then guess when a question was being asked and what it was, and from that, determine the subject's answer.
By first figuring out which question was being asked, the system could winnow down the possible set of responses. As a result, it produced results that were 61 to 76 percent accurate, compared to the 7 to 20 percent expected by chance.
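The winnowing idea can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the question labels, answer lists, and scores are invented stand-ins for whatever likelihoods a neural decoder might emit, not the UCSF team's actual model.

```python
import math

# Hypothetical mapping from each question to its valid multiple-choice
# answers (illustrative only -- not the study's actual question set).
ANSWERS_FOR_QUESTION = {
    "comfort_0_to_10": ["zero", "five", "ten"],
    "room_state": ["bright", "dark", "hot", "cold"],
}

def decode(question_scores, answer_scores):
    """Pick the most likely question first, then restrict the answer
    search to responses that are valid for that question."""
    question = max(question_scores, key=question_scores.get)
    candidates = ANSWERS_FOR_QUESTION[question]
    # Winnow: score only the answers plausible given the question.
    answer = max(candidates, key=lambda a: answer_scores.get(a, -math.inf))
    return question, answer

q_scores = {"comfort_0_to_10": 0.8, "room_state": 0.2}
a_scores = {"five": 0.4, "dark": 0.6}  # "dark" scores higher overall...
print(decode(q_scores, a_scores))      # ...but context rules it out
```

Even though "dark" has the highest raw score, knowing the question was the comfort scale eliminates it from consideration, so the decoder settles on "five" instead.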
"Here we show the value of decoding both sides of a conversation -- both the questions someone hears and what they say in response," lead author Prof. Edward Chang said in a statement.
The experiment produced positive results, but it also revealed the tech's current limitations. The electrode arrays, while less invasive than the probes used in other brain-interface experiments, still had to be implanted in subjects who were about to undergo epilepsy surgery. And rather than merely thinking their responses, subjects spoke them aloud.
To top it off, the range of nine questions and 24 responses was very limited. All of that is a far cry from Facebook's stated goal of decoding arbitrary speech at 100 words per minute using passive wearable devices.
Facebook believes that even the limited capacity could be powerful, though. "Being able to decode even just a handful of imagined words -- like 'select' or 'delete' -- would provide entirely new ways of interacting with today's VR systems and tomorrow's AR glasses," said the company in a post.
Naturally, folks might be concerned about giving Facebook (of all companies) direct access to our brains. However, Reality Labs Research Director Mark Chevillet tried to address such concerns in Facebook's post on the subject.
"We can't anticipate or solve all of the ethical issues associated with this technology on our own," he said in a statement. "Neuroethical design is one of our program's key pillars -- we want to be transparent about what we're working on so that people can tell us their concerns about this technology." I'm sure you will, in the comments below.