Why brain speed doesn’t limit BCI potential

The case for high-data-rate BCIs

February 10, 2026

When considering the future success of brain-computer interfaces (BCIs), a fundamental question arises: How much neural data does a BCI need to capture from the brain in order to accurately decode what someone is trying to say or do? 

A 2024 research paper from Caltech neuroscientists Jieyu Zheng and Markus Meister proposes that the answer is surprisingly little – and argues that the field of BCI may be overengineering its solutions.

They introduce an unnerving paradox of human behavior: while human sensory systems absorb information at approximately 1 billion bits per second (bps), actual behavioral output operates at only about 10 bps. If we accept these findings, we might also conclude that simple voice-based interfaces are preferable to more invasive, high-bandwidth neural technology.

However, this conclusion rests upon a critical assumption: that 10 bps captures everything meaningful about what it truly means to be human.

For example, one of the most complex things that humans do is speak. Even the simplest of sentences requires the coordination of over 100 muscles across your lips, tongue, jaw, larynx, and diaphragm. Your brain combs through tens of thousands of words, applies grammatical rules in real time, and continuously monitors your listener’s reactions. 

Yet when researchers measure the amount of information actually being transmitted – in this case, the words being used – speech requires only about 40 bps.
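
As a back-of-envelope sketch of where a figure like 40 bps comes from: if a speaker draws from a vocabulary of roughly 10,000 words (the low end of "tens of thousands") at about 3 words per second, the information rate is the words-per-second times the bits needed to pick one word. Both numbers here are illustrative assumptions chosen to match the figures above, not values from the paper itself.

```python
import math

vocab_size = 10_000   # illustrative vocabulary ("tens of thousands of words")
words_per_sec = 3     # illustrative brisk conversational pace

# Bits needed to pick one word, assuming (generously) every word is equally likely
bits_per_word = math.log2(vocab_size)       # ~13.3 bits

speech_bps = bits_per_word * words_per_sec  # ~40 bps
print(round(speech_bps))                    # → 40
```

Real word frequencies are highly skewed, so the true per-word entropy is lower – which is why careful estimates land at "only about 40 bps" despite the enormous machinery involved.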

And this isn’t unique to speech. Reading, typing, or even doing something more mentally complex like solving a Rubik’s cube or playing a game of chess all seem to plateau around 10 to 50 bps.

So we have a massive web of neural machinery producing a remarkably low informational output. Billions of neurons work together in harmony, yet we can consciously process only about one thing at a time.

So what are our brains doing with all this data – and what does this mean for the future of BCIs?

Key findings:

  • Human behavioral output: 10 bits per second
  • Motor control requirements: 465 bps for joint coordination
  • BCI goal: Restore experience, not just function
  • Intracortical recording provides the highest-data-rate access

In this article:

  • What is the brain’s information bottleneck?
  • How the inner vs. outer brain relates to BCIs
  • Why motor control complicates the picture
  • Task completion vs. the human experience: The carnival game example
  • Why this matters for BCIs
  • Why BCIs must prioritize human experience over function

What is the brain’s information bottleneck?

Zheng and Meister propose that the brain operates in two distinct modes: an “outer brain” that rapidly intakes high-dimensional sensory and motor cues, and an “inner brain” that carves this down into the few bits of information we actually require to guide our behavior. 

Think of your mind like an hourglass: sensory data pours in at the top, conscious decisions happen in the narrow middle (the famous 10 bps), then motor commands fan out at the bottom.

It makes sense that the outer brain needs billions of neurons and enormous computational resources… but why would we need such a wide, interconnected web of neural mechanisms to produce only 10 bps?

This distinction between inner brain and outer brain creates a somewhat artificial separation. Consciousness does not reside in a single brain area; it’s likely an emergent property of the entire distributed system. 

The sensory inputs feeding our decisions are an active part of our conscious experiences. Complex actions involve continuous feedback loops between sensory and motor systems, and this high-dimensional process is itself part of our rich conscious experience.

How the inner vs. outer brain relates to BCIs

Zheng and Meister use this inner/outer brain distinction to question the direction of BCIs. If the inner brain is truly incapable of handling more than 10 bps, why are companies working to establish high-data-rate interfaces – especially when simpler solutions might suffice?

After all, using a language-based interface to restore communication is a lot less invasive than an implant. 

While Zheng and Meister offer a compelling argument, it depends entirely on one claim: that 10 bps captures the true meaning and full range of the human experience.

Why motor control complicates the picture

The problem with this framework becomes more apparent when considering movement. The human body has 14 major joint groups (shoulders, elbows, wrists, hips, knees, ankles, spine, and neck), each with only about 10 distinguishable positions.

When sampling movement at 10 hertz (Hz) – taking a snapshot of each joint’s position 10 times every second – we get 14 joints × log₂(10) bits per joint × 10 Hz ≈ 465 bps, almost 50 times more than the 10 bps measured by Zheng and Meister.
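
The arithmetic behind that estimate can be checked in a few lines, using the figures given above (14 joint groups, about 10 distinguishable positions each, sampled at 10 Hz):

```python
import math

joints = 14      # major joint groups (shoulders, elbows, wrists, hips, ...)
positions = 10   # distinguishable positions per joint
rate_hz = 10     # snapshots per second

bits_per_joint = math.log2(positions)        # ~3.32 bits to encode one joint's position
bits_per_snapshot = joints * bits_per_joint  # ~46.5 bits for a full-body pose
motor_bps = bits_per_snapshot * rate_hz      # ~465 bps
print(round(motor_bps))                      # → 465
```

Even this coarse model – ignoring velocities, forces, and finger articulation – lands nearly 50× above the 10 bps ceiling.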

And if our brains truly operated at only 10 bps, we wouldn't be able to manage the complex, simultaneous information processing that everyday activities require. 

Take a piano player, for example: while muscle memory handles much of the fine motor execution, the brain is simultaneously reading sheet music, making interpretive decisions about dynamics and timing, monitoring the sound produced, and coordinating the overall performance – all while maintaining posture and foot pedal control. 

This multi-stream processing alone far exceeds 10 bits per second.

Every day, we perform multiple simultaneous movements that each require precise, continuous control of many joints. If our conscious decision-making really operates at only 10 bps, we shouldn’t be able to do even the simplest of daily tasks.

There’s something here that Zheng and Meister’s framework doesn’t account for: the difference between completing a task and experiencing it.

Task completion vs. the human experience: The carnival game example

To bring this into context, consider a thought experiment that illustrates the gap between transmitting information and experiencing it.

Let’s say you’re playing a carnival game called CAT DOG BASKETBALL. You’re standing 10 feet away from two different basketball hoops while pictures flash on the screen, and a ball return continuously feeds you basketballs. If a cat appears, you’re supposed to shoot left; if a dog appears, shoot right. 

This is what the field of information theory refers to as a binary discrimination task. The “inner brain” processes one bit of information per image: cat or dog, left or right. 

These simple, decision-based exercises are used by researchers like Zheng and Meister to measure cognitive throughput – that is, the rate at which our brains consciously process information and make decisions – and they also form the basis of arguments that BCIs only need to decode simple commands.
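
In information-theoretic terms, throughput on a task like this is just decisions per second times bits per decision. A binary cat/dog choice carries at most one bit, so even a fast (hypothetical) pace of 10 images per second yields only 10 bps – exactly the kind of figure such experiments produce. The presentation rate here is an assumption for illustration:

```python
import math

choices = 2                              # cat or dog: a binary discrimination
bits_per_decision = math.log2(choices)   # exactly 1 bit per image

images_per_sec = 10                      # hypothetical presentation rate
throughput_bps = images_per_sec * bits_per_decision
print(throughput_bps)                    # → 10.0
```

Note what this number measures: the bits the task *asks for*, not the bits the brain *processes* while doing it.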

But in reality, these experiences are far richer. You don’t know whether a cat or a dog will appear, or whether it will be a standalone image or one embedded in a more complex visual scene. Your attention constantly switches between the screen and the hoops while your visual system processes each image in full color and complete detail.

Once you’ve decided where to shoot the ball, your brain launches a motor program that integrates visual information, proprioception – your body’s sense of where its parts are positioned in space – and touch. All the while, your cerebellum continuously compares your intended movements with your actual movements to make sure the ball reaches its destination.

Zheng and Meister don’t account for this sensory-motor feedback loop within their framework. But we know it’s a large part of the human experience, and it operates at a much higher rate than just 10 bps.

Now let’s say you’re someone living with visual or motor impairments. This time, there’s a camera watching the screen, and when the game starts, a computer voice reads out, “It’s a cat.”

“Left,” you say, and a robotic arm shoots for you.

Functionally, you’re still playing the riveting game of CAT DOG BASKETBALL. The task is completed; the bit of information is transmitted. However, the true nature of the game itself – the embodiment, the challenge, and all the fun – has been completely lost.

Why this matters for BCIs

Where the topic of vision restoration is concerned, Zheng and Meister suggest conveying “only the important results of visual processing, such as the identity and location of objects and people in the scene . . . using natural language.”

In other words, why show someone a visual image when you could simply describe what’s there?

Because reading a description of the Grand Canyon is not the same as seeing it. Telling a robot to play golf for you is not as much fun as playing the game itself, and reading about delicious food is actually downright frustrating when you’re unable to taste it.

For motor control, Zheng and Meister suggest that paralyzed individuals could move their exoskeletons “with a few high-level voice commands. If the robot thinks along by predicting the user’s most likely requests, this communication will require only a few words (‘Siri: sip beer’).”

Using an AI copilot is certainly an attractive option for minimizing surgical risk while maintaining simplicity, but it often comes at the cost of agency – and completely misses what people really want to see from BCIs.

People who are living with impairments want to reclaim autonomy, feel embodied in the world that surrounds them, and act as the conductor of their own journey rather than just a passenger watching through a window. 

They do not just want to complete simple tasks. They want the full experience of all life has to offer.

The broad promise of BCI is not only to restore basic function to individuals with severe disabilities, but to open a range of new applications (medical and non-medical) based on seamless technology integration. While short-term solutions for assistive care are valuable pursuits, they stop short of the larger ambitions of BCI.

And even if our conscious decision-making process maxes out around 10 bps, life is not experienced at such a slow rate – and we shouldn’t reduce our sensory and motor systems to match.

Designing BCIs around that 10 bps limit would mean running the risk of sacrificing everything that makes life worth living.

Why BCIs must prioritize human experience over function

This debate reveals something crucial for BCI design: There may be optimal points within the brain we can access that mimic the richness of the human experience with less data than initially thought.

Even if our cognitive throughput plateaus at only 10 bps, this doesn’t mean high-data-rate BCIs are unnecessary. Human experience includes so much more than just conscious decision-making alone. 

Consider why glasses, hearing aids, high-resolution displays, and broadband internet all exist as products. It’s because we perceive limitations at data rates far exceeding 100 bps, and we simply won’t accept having our sensory or motor capabilities artificially reduced.

With the convergence of high-resolution neural interfaces, advanced signal processing, and AI-enabled control systems, patients are able to achieve much richer experiences with fewer tradeoffs. 

Intracortical recording (placing electrodes inside the cortex to directly measure neural activity) offers the most direct route to higher data rates. This approach already sets the performance benchmark, with advantages rooted in fundamental physics that will persist.

Paradromics builds BCIs that enhance human experience beyond basic functional restoration. We seek to preserve not just the ability to act, but the sensory richness, motor agency, and embodied presence that make the slowness of our lives worth savoring.

Become part of our community here or learn more about the Connexus BCI here.