Why AI can’t overcome low-resolution BCI data

AI is only as smart as the signals it's working with

February 25, 2026

There’s an old saying in computer science: garbage in, garbage out. No matter how sophisticated a program may be, its output is only ever as good as its input. 

But recent advances in generative artificial intelligence (AI) seem to defy this logic. Even the simplest of prompts can generate stunning, complex outputs. Telling an AI to “write a story about a dog” produces a vivid plot full of details the prompter might not have thought of on their own. 

It seems like AI might not really need all that much data. But when it comes to brain-computer interfaces (BCIs), can AI truly overcome data quality limitations? And do we really want to rely on AI to be the main driving force behind our outputs – our thoughts and behavior?

In our last blog, we learned that we still need to create high-data-rate BCIs – despite the fact that human cognition and behavior outputs at 10 to 50 bits per second (bps) – because the human experience encompasses far more than just information transfer. (Need a refresh? Get caught up to speed here: Why Brain Speed Doesn’t Limit BCI Potential.)

As the fields of AI and BCI continue to coalesce, data quality becomes more valuable, not less. Below, we explore the fundamental limitations of today’s AI platforms, why data quality remains critical, and how the bottleneck for BCIs today is information, not algorithms.

This is an abridged version of a technical blog by Paradromics CEO Matt Angle. For the full-length blog, visit Matt’s Substack.

Why AI alone cannot unlock the brain

Fans of crime shows are likely familiar with the “ENHANCE IMAGE” button that swiftly turns blurry security footage into crystal clear imagery of the suspect and scenery, recreating specific details from next to nothing. 

AI might seem able to spin a little data into a lot, but it cannot overcome fundamental physical and informational limits. When we ask an LLM to provide more information than was contained in the original prompt, we're essentially telling it to be creative – or to hallucinate.

But how do we know when AI is extracting deep hidden truths from the data versus simply fabricating plausible details? 

It comes down to understanding what AI models actually do – and what they can’t do.

What an AI model really is

Language models map relationships between words and predict how they fit together, but they don’t actually “think” about these things. Good models can recreate degraded inputs if enough information exists, but there are still hard limits.

Try taking a photo in a dark room with short exposure, and the photo will come out grainy, blurry, and hard to read. If you turn to an AI platform to fill in the details for you, it may be able to recreate certain missing information. But at its core, it’s still guessing based on patterns from prior data.

For example, let’s say there’s a sign in the photo. AI might be able to guess if a sign says “STOP” based on its shape and location, but what if that sign was electronic and displayed real-time traffic updates or lane closures? The AI will simply invent the details while you make a wrong turn and end up late.

Even models trained on large-scale datasets need a minimum amount of input data to function. Think of an actor – they can do incredible things onstage, but they still need a script. 

This brings us back to the question: if better AI can't compensate for weak data, what kind of data do BCIs actually need? The answer lies in understanding when a system is limited by its algorithm versus when it's limited by its data.

Model-limited vs. information-limited systems

Model-limited systems receive enough input data to perform the task, but the AI algorithm limits what can actually be done with that data. The tell: if another system with access to the same input data can complete the task better, the first system is model-limited.

Consider optical character recognition and facial recognition. Humans looking at high-quality digital images could recognize faces and read text with high accuracy long before computerized systems caught up. When those systems eventually improved, it was because of better modeling and training – not better data.

Information-limited systems struggle because the necessary information isn't there in the first place. You can't listen to the radio with a banana. You can't use a normal light microscope to see a virus without first modifying it. You can't pick out a single voice in a crowded stadium if you're across the street at a gas station. 

Even with the best AI tools, if the signal is too weak or too noisy, the system won't work.

Training vs. inference: why perfect models still need good data

When we call a system information-limited, we mean its input data during inference (what the model needs to generate a useful response), which is independent of training data quality (what the model learned during development).

Meta's EMG wristband illustrates this perfectly. The device reads electrical activity from forearm muscles. Meta trained a foundation model on thousands of users to enable "out of the box" gesture control with no individual calibration – a massive training achievement – but the device still reads the same muscle signal it always did.

It won't work for someone with paralysis (no muscle signal) and can't decode speech. Those goals aren't model-limited; they're information-limited by the device itself. 

Even a perfectly trained model can't extract information that was never there in the first place.

Why neural data is the true bottleneck

So if AI can't solve all our problems, where does the real limitation lie? In the data itself. Or more specifically, in how BCIs collect neural information.

BCI performance depends on two factors: coverage (which brain areas we record from) and density (how much information we extract from those areas).

Coverage

Different brain areas control different functions, and extracting the right information requires targeting the right area (motor cortex for movement, visual cortex for sight, etc.). Multi-modal BCIs that provide both control and sensation – such as a robotic hand with touch feedback – need access to multiple regions, not just one.

Density

As brain activity is highly localized, the density of recorded information is critical. You can’t always get more information about a given function by covering additional brain areas; what matters more is recording density in the right area. 

Increasing information density in BCI means increasing resolution, with the ultimate resolution being single neuron recording. Because individual neurons fire in distinct, weakly correlated patterns, each additional electrode adds roughly proportional bandwidth.

However, not all electrodes can access individual neurons.
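Why correlation matters so much can be sketched with a toy information-capacity calculation. This is an illustrative model, not a claim about any specific device: each electrode is treated as a Gaussian channel, and a single pairwise correlation value (`rho`, chosen arbitrarily here) stands in for how much neighboring electrodes overlap in what they record.

```python
import numpy as np

def total_bits(n_channels, snr, rho):
    """Shannon capacity (bits per sample) of n Gaussian channels
    whose signals all share pairwise correlation rho."""
    # Equicorrelation matrix: 1s on the diagonal, rho elsewhere
    C = np.full((n_channels, n_channels), rho)
    np.fill_diagonal(C, 1.0)
    eigvals = np.linalg.eigvalsh(C)
    # Capacity of parallel Gaussian channels: sum over eigenmodes
    return 0.5 * np.sum(np.log2(1.0 + snr * eigvals))

# Weakly correlated single-neuron signals: doubling the electrode
# count roughly doubles the information
low = [total_bits(n, snr=4.0, rho=0.05) for n in (8, 16)]

# Highly correlated aggregate (LFP-like) signals: doubling the
# electrode count adds far less
high = [total_bits(n, snr=4.0, rho=0.95) for n in (8, 16)]
```

With near-zero correlation, going from 8 to 16 channels almost exactly doubles the capacity; with correlation near 1, the same doubling yields only a modest gain – the diminishing-returns behavior described above.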

How electrode placement determines BCI performance

Intracortical electrodes are placed inside the brain, under the cortical surface and within 100 micrometers of neurons. At this proximity, they can detect individual neuron firing patterns: distinct, information-rich signals.

Surface and endovascular electrodes can only detect local field potentials (LFPs), which are aggregate signals from thousands of neurons blurred together. 

Think of it like microphones in a football stadium. Place one in the stands (intracortical) and another in the parking lot (surface electrodes). The microphone in the stands captures individual voices and conversations; the parking lot microphone hears only the crowd's roar.

Adding more microphones in the stands reveals new conversations. Adding more in the parking lot just re-samples the same noise. Adding more surface electrodes yields diminishing returns, limiting their usefulness for high-bandwidth applications.

And when it comes to BCI performance, the difference between accessing individual neurons versus aggregate signals simply can’t be ignored.

Intracortical BCIs

The most performant intracortical BCI, demonstrated at UC Davis, has achieved real-world communication at 30 to 60 words per minute (wpm) with minimal delay, open vocabularies (100,000+ words), and low word error rates (less than 5%).

Endovascular electrodes

For endovascular electrodes (placed inside blood vessels next to the blood-brain barrier), clinical trials show data rates below 1 bps, enabling only 3 to 4 wpm through specialized user interfaces. 

At 1 bps, generating outputs resembling normal speech (150 to 200 words per minute) is impossible without AI predicting nearly every word – stripping the user of control, agency, and accuracy.
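The arithmetic behind these figures is easy to check. As a back-of-envelope sketch – assuming each word is chosen uniformly from the vocabulary, with no predictive language model helping out – the achievable words per minute follows directly from the channel rate:

```python
import math

def words_per_minute(bits_per_second, vocab_size):
    # Selecting one word from a vocabulary of size V costs
    # log2(V) bits when no language model narrows the choices
    bits_per_word = math.log2(vocab_size)
    return bits_per_second * 60 / bits_per_word

words_per_minute(1, 100_000)   # ~3.6 wpm: the endovascular regime
words_per_minute(40, 100_000)  # ~144 wpm: approaching natural speech
```

At 1 bps this lands right in the 3 to 4 wpm range reported in the endovascular trials; reaching conversational speed over a 100,000-word vocabulary requires tens of bits per second, or an AI model filling in most of the words.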

ECoG-based BCIs

ECoG-based BCIs use surface electrodes that operate at marginally higher data rates and achieve higher communication speeds, but face constant tradeoffs between error rates, vocabulary size, and speed.

The highest performance ECoG demonstration (from researchers at UCSF) used a small vocabulary of ~1,000 words, similar to a tourist who only had time to read part of "German Phrases for Dummies" on their flight to Germany. 

The data were so limited that researchers had to introduce multi-second data integration delays just to bring word error rates down to 25% in a highly controlled task.

No amount of better AI training will make endovascular or surface electrodes perform like an intracortical array. The limitation is the data, not the algorithm.

And when low-bandwidth systems rely on AI to fill the gaps, users lose something critical: their own voice.

The agency problem

This brings us to the most important point: with AI alone, there's no agency. Sure, you could turn to an AI platform to impersonate you – to check in on family and friends, even in your personal tone of voice: “Congrats on the lil guy, my man! So happy for your fam.”

But delegating your social life to AI comes with the sacrifice of genuine human connection. Your loved ones would be talking to a model – a shell of yourself – not the person they’ve known and loved for years.

Preserving agency is what makes a BCI feel like an extension of ourselves versus an external disembodied agent that runs life for us. And remember from last time: preserving that agency requires a reasonable information throughput.

If a BCI system only reliably detects "close hand" or "kick leg," but gets asked to provide text outputs, one of two things happens: it's either painstakingly slow, or it fills in the vast majority of text from its AI imagination.

When performance hinges on having adequate data versus having a better algorithm, that's the distinction between information-limited and model-limited systems. For BCIs, that distinction determines whether users express their own thoughts or let AI speak for them.

What this means for the future of BCIs

While information-limited BCIs can benefit from AI pairings, they will never match the performance of an intracortical system.

In the absence of data, applications must lean on the formulaic creativity and hallucinations of a generative AI platform rather than the agency, autonomy, and intent of the human being behind it all.

At Paradromics, we’re constantly working toward a future where agency is never sacrificed. In our current benchmarking tests in sheep auditory cortex, Paradromics has achieved data transfer rates exceeding 200 bps. 

While comparing data rates across different tasks is difficult, this rate is significant: it shows our hardware-limited data ceiling sits well above that of any previously demonstrated BCI trial.

Linguists estimate that spoken language transmits information at roughly 10 to 50 bps across languages. Once we account for sampling inefficiencies and incorporate pitch, loudness, tone, and flow, we're approaching the threshold where BCIs can match the speed of natural speech without major tradeoffs.

The path forward means recognizing that better algorithms can't overcome fundamental limitations in what gets recorded in the first place.