Are Brain Computer Interfaces Really Our Future?

James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


We’re making real progress providing brain computer interfaces for patients paralyzed by ALS and spinal cord injury. Less invasive approaches are looking promising.

I saw an interview with Dr. Ben Rapoport, who’s a neurosurgeon and chief science officer of Precision Neuroscience. The company is one of several, like Elon Musk’s Neuralink, developing brain-computer interfaces, or BCIs. The interview centered on pitching these brain implants as less invasive than they sound. It started me thinking once again about whether it’s conceivable that these might actually be how we control computerized devices in the future.

Think about how effortlessly you type. Your intentions route directly to your fingers, bypassing speech entirely. I can type faster than I can talk because the motor pathway from intention to keyboard is so well-trained. But paralyzed patients can’t access those finger pathways—they’re injured or missing entirely.

So we invented assistive technologies. Eye tracking, head movements, sip-and-puff controls. But they’re painfully slow and cumbersome compared to natural motor output. Nothing like the fluid speed of touch typing or even voice control. That’s because they can’t use the natural motor pathways.

That’s where the BCI comes in for these paralyzed patients. It turns out that even though the spinal cord has been crushed in an accident, the patient’s motor control system in the cerebral cortex is still functioning, just disconnected from its output. We’ve started developing technologies to record that activity and use it to control a computer that completes the motor task.

Current brain computer interfaces

What’s the current state of these brain-computer interfaces? Here’s where it gets interesting. There are different ways to eavesdrop on the brain, and the choice among them is an example of how, when commercial interests turn science into a product, they begin to value practicality, cost, and acceptability alongside technical merit.

But first, let’s look at what the approaches have in common.

All of the current BCI approaches involve deploying electrodes to pick up voltage fluctuations from the motor cortex. Just as we record the electrical activity of the beating heart with an electrocardiogram by putting electrodes on the chest, we can put electrodes on the scalp to measure voltage fluctuations across broad regions of the brain: the electroencephalogram. The EEG was the first method for detecting the brain’s electrical activity. Later, microelectrodes were developed that could be sunk into the substance of the brain to record the activity of single neurons. As we’ll see, both of these approaches are being used for BCIs.
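
To make “eavesdropping on voltage fluctuations” a bit more concrete, here is a minimal Python sketch, using NumPy and SciPy on a simulated signal rather than data from any real device, of the kind of band-pass filtering and band-power extraction that turns a raw surface recording into a feature a decoder can work with. The sampling rate and frequency band are illustrative assumptions, not the specs of any particular system.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 1000                      # sampling rate in Hz (illustrative assumption)
t = np.arange(0, 2.0, 1 / fs)

# Simulated surface recording: a 20 Hz "motor rhythm" buried in broadband noise.
raw = 0.5 * np.sin(2 * np.pi * 20 * t) + np.random.randn(t.size)

# Band-pass filter around the beta band (13-30 Hz), a range often tied to
# movement and movement intention in motor cortex recordings.
b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
beta = filtfilt(b, a, raw)

# Average band power is a typical feature handed off to a decoder.
freqs, power = welch(beta, fs=fs, nperseg=512)
beta_power = power[(freqs >= 13) & (freqs <= 30)].mean()
print(f"Mean beta-band power: {beta_power:.4f}")
```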

The EEG approach

Precision Neuroscience is developing a 1,024-electrode surface recording array embedded in flexible film that is positioned on the cortical surface for recording. Think of it as a super-high-resolution electroencephalogram: rather than detecting big voltage swings broadly over the scalp, it can record much smaller cortical potentials with much better localization. In fact, they call it the “Layer 7 cortical interface” because neuroanatomists recognize six cortical layers; since the device would serve as a new interface layer, they call it layer 7. The array is actually subdural, sitting on the arachnoid, not exposed to the CSF and the devastating risk of infection that carries.

This is actually just the latest refinement of this kind of surface recording technology, and Precision Neuroscience is currently the most visible and commercially viable of the companies pursuing it. The technology came out of methods to monitor seizures with higher spatial resolution than scalp EEG by putting electrodes directly on the brain surface. In fact, Precision has been using its device in operating rooms over the last year for exactly this purpose and has demonstrated its resolution and practicality for short-term use. Next, they’ll be implanting the device for up to 30 days, presumably to start collecting the training data needed for use as a brain-computer interface.

Most recently, this less-invasive trend has been extended by Synchron, which threads an electrode array through the venous system. The device sits in the blood vessels that run right over the motor cortex. It’s like getting a front-row seat to the neural activity, from inside the veins, without actually touching the brain tissue.

Let’s call Precision the EEG-style BCI.

The depth electrode approach

But EEGs at the surface could never be as detailed as direct recording of neurons deep in the brain. The other approach is more invasive, using depth electrodes instead of surface electrodes to get down to the activity of individual neurons. EEG electrodes, and even the Precision surface electrodes, are pretty big; they record the summed deep neuronal activity as it is reflected in potential changes at the surface. If you want to record from single neurons, you have to make very fine-tipped electrodes and put them into the brain, where you can record the voltage activity of individual neurons firing. My own PhD work involved this kind of single neuron recording. And this is the way we’ve traditionally thought about brain function: the neuron as a computational unit.

You can imagine that in working to figure out neural networks, we needed to get ambitious and start recording from as many neurons at a time as possible. So over the years neuroscientists have developed electrode arrays that simultaneously record from dozens, even hundreds, of neurons at a time. We’ve gotten pretty far with regional measures and recordings like EEG, MEG (magnetoencephalography), and fMRI (based on blood flow changes). But based on the tradition of single-cell recording, you would assume that if you really want to understand neuronal networks at a deep level, you’d want to listen in on the individual neurons, the nodes in the network, to understand timing and correlation.
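
As a toy illustration of what “timing and correlation” means here, the following Python sketch bins spikes from two simulated neurons and asks how correlated their firing is. The spike trains are synthetic Poisson processes, and the bin size and rates are arbitrary assumptions chosen only to make the idea visible.

```python
import numpy as np

rng = np.random.default_rng(0)
duration, bin_size = 10.0, 0.010          # seconds; 10 ms bins (arbitrary choices)
n_bins = int(duration / bin_size)

# Two simulated neurons: the second tends to fire when the first does,
# standing in for a functional link between two nodes in a network.
neuron_a = rng.poisson(0.3, n_bins)                       # ~30 spikes/s baseline
neuron_b = rng.poisson(0.1, n_bins) + (neuron_a > 0) * rng.poisson(0.2, n_bins)

# Correlation of binned spike counts: one simple way to ask whether two
# recorded neurons take part in the same network events at the same time.
corr = np.corrcoef(neuron_a, neuron_b)[0, 1]
print(f"Spike-count correlation: {corr:.2f}")
```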

Elon Musk’s Neuralink device is this kind of depth electrode system. It uses very thin electrodes placed deep in the substance of the brain in an attempt to record directly from individual neurons, or at least from groups of neurons within the deeper cortical layers. A very sophisticated robotic system, working something like a sewing machine, threads the fiber-like electrodes into an array in those layers. Not much is known about the company’s animal studies or the results from its first patient, but presumably the results are no different from those of the electrode arrays that have been used in animal studies and in patients in recent years.

Twenty years of progress

Academic groups have gotten BCIs to work remarkably well using both of these approaches. In fact, the first success was reported about twenty years ago by a group at Brown that enabled completely paralyzed spinal cord injury patients to control computer devices via implanted depth electrodes. Even though there’s no actual movement, motor intention still activates the motor cortex, and the recorded electrical activity can be translated into commands.
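
To give a flavor of how that translation works, here’s a deliberately simplified Python sketch: synthetic firing rates from a made-up 96-channel array are fit to an intended two-dimensional cursor velocity with ordinary least squares. Real systems use more sophisticated decoders, such as Kalman filters and, more recently, neural networks; every number below is an assumption for illustration, not data from any study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: firing rates from a made-up 96-channel array,
# recorded while the patient *imagines* moving, paired with the intended
# two-dimensional cursor velocity on each trial.
n_samples, n_channels = 500, 96
intended_velocity = rng.standard_normal((n_samples, 2))            # (vx, vy)
tuning = rng.standard_normal((2, n_channels))                      # hidden "tuning"
firing_rates = intended_velocity @ tuning + 0.5 * rng.standard_normal((n_samples, n_channels))

# Fit a linear decoder by least squares: firing rates -> intended velocity.
weights, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# At run time, each new window of firing rates becomes a cursor command.
decoded_velocity = firing_rates[:1] @ weights
print("Decoded:", decoded_velocity, "Intended:", intended_velocity[:1])
```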

And yes, you read that right. The Brown group has been showing videos of paralyzed patients manipulating objects with cortical implants for years, but it’s only now that the idea has made it into the tech-VC narrative, with dreams of consumer use. And for this, we have to give Elon Musk and Neuralink credit for transforming the tech into a moon-shot-worthy frontier, even if the science wasn’t new. Hey, I’m glad we’re getting as interested in real neural networks as in quantum mechanics, which has much less to do with our daily lives than brain mechanics does.

But actually there has been some development over the last 20 years, precisely because depth electrodes are so invasive. If all you need to do is record the activation of part of the motor strip by movement intention, maybe there’s no reason to record in the depths of the cortex. As we said, the potentials are easily detected at the surface. Why not use surface recording? A group at UCSF in 2016 used surface electrodes already in place for epilepsy surgery cases to record over the motor strip and detect motor intention, this time decoding speech: intended syllables and words from silent articulation. They then used the technique in ALS patients with severe loss of speech clarity due to weakness.

And we should recognize that our technology has caught up with the promise. Our computers are faster and more capable. Deep neural networks can be used to translate brain activity into computer commands, as we’ve seen with the advances in voice recognition and LLMs in recent years. Plus, robotics is much better, allowing for fine physical control that would have been impossible twenty years ago.
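
Here’s what a deep-network decoder might look like in outline: a small PyTorch sketch that maps a window of electrode features to scores over a handful of intended commands. The feature count, layer sizes, and number of commands are placeholders I’ve chosen for illustration, not anything published by these companies, and the input is random rather than recorded data.

```python
import torch
import torch.nn as nn

# A toy decoder: a window of electrode features goes in, scores over a handful
# of intended commands come out. All sizes are placeholders, not the specs of
# any real device or published decoder.
class IntentionDecoder(nn.Module):
    def __init__(self, n_features: int = 1024, n_commands: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, n_commands),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

decoder = IntentionDecoder()
fake_window = torch.randn(1, 1024)          # one window of simulated features
command_probs = decoder(fake_window).softmax(dim=-1)
print(command_probs)
```

In a real system, a model like this would be trained on sessions of recorded activity paired with the patient’s attempted movements or speech, which is presumably what the planned 30-day implants are meant to collect.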

Eavesdropping on brain networks

The Precision BCI extends the intracranial EEG approach to chronic, high-resolution use. Neuralink is still using the more invasive depth approach, I think with the idea that at the single-neuron level one could eventually achieve finer control and, somehow, actually relay information into the brain through the same electrodes. But I’m afraid that’s got real problems, as I’ll discuss in the next post.

The EEG approach is actually more in line with the recent focus on the network activity underlying subjective experience. For now, I can lean on my previous discussions here about how brain function is best understood in terms of the network and its connection strengths, not the function of individual neurons. We now understand that single-cell recording, even from ensembles of neurons across an area, provides a limited picture of how the brain, isolated in the skull, makes its predictions about the self in the external environment. Certainly, we’ve learned a great deal from single-cell electrophysiology in awake, behaving animals about how individual brain areas perform their part in the overall picture, but it’s been fMRI that’s shown us the big picture of function across the brain, like the Default Mode Network.

I suspect that high-resolution EEG grids, perhaps spaced across multiple brain areas, may provide a better big-picture view of intention. And this may lead to a deeper understanding of the mechanisms underlying both subjective experience and agency. And as envisioned by the tech-VC champions, could these high-res EEG devices really become the next iPhone?


Prefer to follow via Substack? You can read this and future posts (and leave comments) by subscribing to On Deciding… Better on Substack: Brain, Self, and Mind

© 2025 James Vornov MD, PhD. This content is freely shareable with attribution. Please link to this page if quoting.

Author: James Vornov

I'm an MD, PhD neurologist who left a successful academic career on the faculty of The Johns Hopkins Medical School to develop new treatments in Biotech and Pharma. I became fascinated with how people actually make decisions, based on the science of decision theory and our emerging understanding of how the brain decides. My passion now is the deep explanation of what has been the realm of philosophy, psychology, and self-help but is now understood as brain function. By understanding our brains, I believe we can become happier, more successful people.