NEUROTECH PUB

Episode 5 – A Lawyer, a Philosopher, and Two Neurologists Walk Into a Bar…

Apr 6, 2021


In this episode we discuss ethical considerations around brain-computer interfaces. Our guests are Tim Brown, Leigh Hochberg, Sydney Cash, and Amanda Pustilnik. A central theme in the discussions will be how neuroethics differs from traditional medical ethics and bioethics, and what we can draw from other fields and experiences to prepare for a world where BCI is more prevalent and more powerful.

– Matt Angle, CEO, Paradromics

00:15 | Guest Introductions

01:00 | Innately-Held Unproven Moral Beliefs
02:14 | Pain Program at MGH Center for Law, Brain, and Behavior
02:27 | Dr. Brown’s Agency and Brain-Machine Interfaces NIH Research
02:48 | Center for Neurotechnology
03:46 | Six Impossible Things Before Breakfast
04:24 | Dr. Hochberg and BrainGate
05:10 | Dr. Cash’s Cortical Physiology Lab

06:31 | Neuroethics: A Field of Its Own
10:52 | Happy Pills in 80s and 90s America
11:13 | Micromarketing
12:29 | BrainGate BCI
14:47 | Bizarre Brain-Implant Experiment Sought to “Cure” Homosexuality

16:57 | Device vs Pharmacological Brain Therapies

21:01 | When Patients and Clinicians Don’t See Eye-to-Eye
21:19 | Vanessa Tolosa and Mavato Engineering
24:10 | DBS for Essential Tremor
33:38 | Lane v. Candura
36:18 | Kendra’s Law

41:11 | Researchers’ Burden in Equitable BCI Dissemination
41:26 | Against Mandatory Neurointerventions
41:36 | Another Perspective on Mandatory Neurointerventions
41:56 | Barbaric Treatment of Alan Turing
43:11 | Vikash Gilja at UC San Diego
45:07 | NIH BRAIN Initiative Neuroethics Working Group

51:05 | Data and Privacy in a BCI World
51:10 | Chethan Pandarinath at Emory University
53:22 | Observe, Imagine, Attempt
59:34 | HealthRhythms
59:50 | Neosensory
1:03:10 | Neurotechnology and Mood
1:05:10 | Global Brain Data Foundation

1:06:04 | Legal Brain Data Protections, or Lack Thereof
1:07:30 | What Does the Fifth Amendment Protect?
1:08:11 | Schmerber v. California, 384 U.S. 757 (1966)
1:08:58 | fNIRS for Detection of Cannabis Intoxication
1:10:24 | Rochin v. California, 342 U.S. 165 (1952)
1:10:39 | Kyllo v. United States, 533 U.S. 27 (2001)
1:10:53 | Winston v. Lee, 470 U.S. 753 (1985)
1:13:16 | The Genetic Information Nondiscrimination Act of 2008

1:22:20 | Should BCI Eradicate Disability?

1:35:36 | Balancing Near-Term Utility and Long-Term Harms

Listen and Subscribe

Read The Transcript

Matt Angle:
Welcome back to Neurotech Pub. Today, we’ll be talking about the ethical considerations surrounding brain-computer interfaces: who gets them, who owns the data, and how we can extrapolate current trends and norms to anticipate the challenges and opportunities presented by BCI. I’m joined today by Tim Brown, a postdoctoral researcher in philosophy at the University of Washington; Leigh Hochberg and Sydney Cash, neurologists at Mass General Hospital; and Amanda Pustilnik, a professor of law at the University of Maryland. This discussion was a pleasure to moderate, and I think you’re really going to enjoy it.

Matt Angle:
So just to get started, I was hoping everyone could go around and introduce themselves. And then just to kick this off, I think because we’re talking about ethics and by extension, a little bit of philosophy, I think it’d be really interesting if everyone shared one thing that they believe or assert is true without evidence, some truth that you hold to be self-evident that guides a lot of your own ethical decision-making. For instance, I would say, hi, I’m Matt, I’m the CEO of Paradromics and I believe that there’s something special about human life. I don’t have any factual basis for saying that a human life or human happiness is more important than the happiness or life of a giraffe. It’s just something that I believe, and it guides a lot of my ethical framework when I look at BCI. Anyone else like to share?

Amanda Pustilnik:
Okay, I’ve got a weird one, but it’s on point for this. I believe that our brains are not individual organs located inside our bodies. I believe that if we understand them properly, we’re a superorganism of connected brains and that when we try to think of ourselves as individuals or try to create flourishing as individuals, we are not just in for a hard time, we are trying to do something that may biologically be bad for us.

Matt Angle:
Oh, can you also introduce yourself?

Amanda Pustilnik:
That wasn’t enough? I’m sorry. My name is Amanda Pustilnik. I’m a professor of law at the University of Maryland and the director of the program on pain at the Center for Law, Brain & Behavior at Mass General Hospital. And, I’m thrilled to be here and meeting all of you, so thank you.

Timothy Brown:
My name is Timothy Brown. I’m a postdoctoral scholar in the philosophy department at the University of Washington. I work on an NIH R01-funded project on agency and brain-machine interfaces. And I’m also a long-time contributor to the Center for Neurotechnology at the University of Washington, where I did a lot of ethics, engagement, and outreach-type stuff, which was a lot of fun. And okay, so what is a self-evident moral commitment that I have? I don’t know if this is going to be self-evident and I think that’s what’s controversial about it, but I do believe that there are systems of oppression that we have not uncovered. And, those systems of oppression guide our research activities, our medical practices, and our consumer practices, and unless we uncover those systems of oppression, it’s going to be hard to move forward in the ways that we want to.

Amanda Pustilnik:
I know we’re not supposed to be debating yet, but I disagree with you in that I think that’s just evidence-based. I mean, I think you can just build a real case for that. It’s not like you’re asking us to believe what was it, Lewis Carroll said, “Six impossible things before breakfast?”

Timothy Brown:
Yeah.

Matt Angle:
But, I guess what he asserted is that there are even uncovered and unknown systems of oppression, so that’s distinct from the ones that are known and well-documented.

Amanda Pustilnik:
Fair enough, systems below systems.

Timothy Brown:
Yes. It turns into a kind of excavation and we haven’t unearthed even, well, we really haven’t even scratched the surface.

Leigh Hochberg:
Actually, can I make a motion that we just listen to Amanda and Tim for the next couple hours? I think that’s, I want to keep this going.

Sydney Cash:
Yeah, Leigh, I don’t know about you, but I feel like I’m a fish out of water here.

Leigh Hochberg:
Yeah, I hear my pager going off, excuse me. I’m Leigh Hochberg, it’s great to be here. I direct the BrainGate pilot clinical trials and consortium. Together with Syd, I direct the Center for Neurotechnology and Neurorecovery at Mass General. I’m also a professor of engineering at Brown, and I direct the VA R&D Center for Neurorestoration and Neurotechnology at the Providence VA. And in terms of without-evidence but deeply held beliefs, I think there’s a lot more information embedded in the neural activity that we can record today than we even know to think to look for, and that without-evidence belief drives a lot of my ethical thought process with respect to neural interfaces.

Sydney Cash:
Hi, I’m Sydney Cash. I’m also at Mass General Hospital. I’m an epileptologist in the Department of Neurology, where I’m an associate professor. Like Leigh said, I co-direct the Center for Neurotechnology and Neurorecovery. I spend a lot of my research time looking at intracranial recordings from patients who are in the hospital for clinical reasons but are also participating in various research endeavors. And I don’t have quite as imaginative and exciting thoughts on empirically, or not empirically, evidenced beliefs about the brain, but in terms of the ethics of what we do in BCI, I think that, and while there might be some evidence for this, the degree to which people and patients of all sorts are actually willing and interested in participating in investigations, and I’m intentionally not saying clinical trials or things like that, the degree to which they want to understand what’s happening with them and with others, is far larger and far less tapped and resourced than I think we really understand or manage in the clinical world anyway.

Matt Angle:
Tim, when I first heard about neuroethics, I have to admit my knee-jerk reaction was a little bit skeptical. And one of the things that I wondered, and I think a lot of BCI practitioners and other neurotechnologists wonder, is how is neuroethics distinct from bioethics, distinct from medical ethics, distinct from kind of theory of mind? What merits this new phrase, and this as a distinct discipline of study?

Timothy Brown:
I think that’s a very difficult question. I think it’s akin to the kind of question about neuroengineering. You could ask the same question about neuroengineering. How is it distinct from neuroscience? How is it distinct from biotechnology in general and why does it deserve its own lab space? And, I think the answer will be similar. I think that neurotechnologies present unique, but related moral issues that have a place in a medical ethics discussion, have a place in a bioethics discussion, have a place in a technology ethics discussion, but that’s not where their home is. So, what are the similarities between the controversies that we see with genetic testing in a variety of contexts like consumer genetic testing, prenatal genetic testing, they aren’t the same as the ones that we run into for the collection of neural data, but they are related.

Timothy Brown:
We have like three companies that are putting together web services and cloud data storage services. And, Amazon is one of them and we know that Amazon is something of a weird player. I’m being a little bit cagey here, but we see the same things happening with data collection for neurotechnology or neuroscientific purposes. But the significance of that data is different, right? There’s something ancestral about genetic information necessarily, but there’s something related to a person’s personality, or everyday activity that you could tap into when it comes to neural data. So the reasons why they’d be salient to the average person who’s trying to make consumer or medical or research decisions are just going to be different. So I guess the short answer is it’s complicated, but neurotechnology, I think presents its own set of unique issues.

Matt Angle:
Yeah, it’s interesting, in analogy to neuroengineering if someone said, why does neural engineering exist as its own subdiscipline, I might say it’s largely an intersectional issue that there are… But I wonder if neuroethics doesn’t also have some, does it have some truly new issues that are raised, or do you think it is more similar to neuroengineering? Do you think that there’s sort of substantively new things that come out of the field of neuroethics?

Timothy Brown:
I think so. I think there are substantively new things that do come out of neuroethics and they, in similar ways to neuroengineering, require multi or transdisciplinary work. So, the idea that we’re going to do certain kinds of biometrics, look for biomarkers of certain conditions, that’s not a new concept, but when we talk about it in the context of psychiatric treatment, automated psychiatric treatment, we get some issues that sort of look like the issues we ran into with Prozac in the seventies, but they take on a life of their own.

Timothy Brown:
We were worried about, for example, and to use another analogy, we were worried about advertising and television in the eighties and nineties, but we’re even more worried about micro-advertising in the age of Facebook and Twitter. There’s a certain added problem that comes with this deep profiling that you see in this kind of technology. And I think that the problems that we see in neurotechnology, and the problems that I’m worried about in particular that arise from bi-directional brain-computer interfaces, so systems that read and stimulate are, I think unprecedented. Imagine having a doctor in your head making decisions about you, that are encapsulated biases from societal forces. Certainly, we’d be worried if doctors gave people treatments that were based on their biases, and with COVID-19 and Black Lives Matter, there’s a hyper-awareness of that. But, imagine all of that just in a device in your head and we call it Neuralink or something like this.

Matt Angle:
We don’t call it Neuralink.

Timothy Brown:
Well, we don’t. I wasn’t going to say call it BrainGate because I have more faith in Syd and Leigh. But yeah, I think there are unique issues here. And, I think that we have to exercise a little bit of moral imagination to get there. And, I think we haven’t really scratched the surface yet and it’s going to take a huge transdisciplinary effort. And, I think we’ll get there eventually.

Leigh Hochberg:
Yeah, Tim, among the many great points you made, you alluded to an important one when you made the comparison to the bioethical thought that’s gone into the depth of information that exists in the genetic code. And so, there’s been a lot of biomedical ethics that has developed because of the great progress that we’ve made in genetics, and, as you commented, and I agree, there’s some important difference when we try to apply some of those same concepts to at least some of what’s being developed in neuroengineering.

Leigh Hochberg:
And a lot of that, I think, has to do with time: there’s nothing in my genetic code that can tell either me or anybody else that my hand is about to reach for my coffee cup. I think that statement is true, but it takes a tiny, tiny bit of neural data to know that that’s what I’m going to do in 20 milliseconds. And the immediacy of that information, the fact that I’m reaching for my coffee cup, isn’t terribly relevant, but the speed and the immediacy, the presence of those data and our ability to understand them, to act on them, and, as you began to describe, to potentially intervene on them, I think really highlights one place where neuroethics has a lot to teach us.

Sydney Cash:
And, I think the other major point that Tim alluded to is the bi-directional nature that we’re heading towards. Getting the information out raises all sorts of thorny issues, no doubt. And then when you start to put information in, which changes things, then you get into even thornier issues. And, we’ve been there before, we’ve been there with psychosurgery, we’ve been there with stimulation for homosexuality. We’ve been in these kinds of ugly areas in some ways, and we’re getting nominally better and better at it, which is raising more and more questions about what’s appropriate to do and what’s not. I mean, Leigh and I focus on very clear restoration of deficits, restoration of dysfunction. There’s not a lot of real questions here, but it’s very easy to go from there to places where there are questions, changing people and who they are.

Sydney Cash:
And I think that’s what really separates this, as Matt, I think you laid out in your opening, right? The brain puts us in a whole different territory, right? It’s what’s unique about us in a way that’s much more so than our liver or our heart or our lung. And, once you start saying we’re going to be able to mess with that in very particular ways, the ethical issues become really large and largely unexplored, I would say. Tim and Amanda might say otherwise, but I think they wouldn’t, and I think that’s really what it comes down to. We’re changing, we’re talking about, what makes a person a person.

Amanda Pustilnik:
Yeah, there’s a recursive aspect to neuroethics that’s absent in the others, because the other areas of ethics and philosophy are predicated on the construct of a deciding self. It’s a bit of an artificial construct, both the self and the idea that we have free choice, but there is this tenable construct of the deciding self that is affected by these other domains. And then in neuroethics, it’s about intervening in and shaping and influencing that very deciding self that is taken as the basis for all of the other ones. So it’s not that neuroethics is necessarily foundational, but it’s more than just intersectional when it gets into the issues of self that well, Tim, Leigh, and Syd were talking about.

Matt Angle:
A lot of the questions raised in neuroethics existed in an age of pharmacology, when mental health was being treated with small molecules. Why are they intensified in this new age of BCI and devices? I’m curious, both from a clinical standpoint and maybe from a more philosophical standpoint, why do we see devices as being distinctly different from molecules? Is it just that they could be more powerful, or do we see another important distinction?

Sydney Cash:
There’s a bunch of things, I think, that are different there. I think you raised a couple already; precision is one of them. In theory, and in practice, it’s a much more focused approach. And, I think that’s a big part of it, and there are others as well.

Leigh Hochberg:
It’s focused and it’s personalized. Part of that is precision. One of my teachers in med school, in describing most types of pharmacology and particularly neuropharmacology, said that if your car has run out of gas, you don’t pour the gas all over the vehicle in order to charge it back up, you try to put the gas in the tank. And I thought it was a great description, maybe a little bit too grandiose, but it was a great description of the challenges of using pharmacology, which undoubtedly has been incredibly effective for many people with neuropsychiatric disorders. But when we’re talking about neurotechnologies that have the ability to record from individual neurons and to stimulate, potentially, ensembles of neurons with a spatial and temporal precision that will not be matched by pharmacology, that really opens up both tremendous potential and all kinds of interesting ethical questions.

Matt Angle:
And from a prescribing point of view, clinicians have historically been much more ready to try new medications rather than to undergo surgery or use a device. What do you see as the guiding principle there? And, is that in flux or where do you see this going in the next 10 years?

Leigh Hochberg:
So, Syd and I are looking at each other on zoom here, as we try to decide who can lure the other one to jump in and answer first.

Sydney Cash:
So, I think reversibility is probably the major, or one of the major, differences. In theory, many pharmacological interventions are reversible: you stop the drug, you stop the effect. That’s not entirely true, obviously, but as a first approximation, it’s reasonable. And I think that’s how a lot of prescribers, clinicians, and caregivers think about it. Historically speaking, I think there’s a different feel, certainly for surgery, and even for devices; putting one in is seen as at least somewhat permanent. Now, that’s also not entirely true. Many of the devices already being used can be turned off, can be explanted, but it’s sort of seen as a bigger step.

Sydney Cash:
I think that is at least sort of a cultural difference that is slowly being overcome. I can tell you, in the realm of responsive neurostimulators for epilepsy, these are devices that are used to stop and control seizures in patients who have intractable epilepsy, that is, epilepsy that’s not controlled by medications alone, and for a larger and larger group of clinicians, more and more, it’s sort of being thought of as another step, another piece in the therapeutic bin that one could use. Even though it might not be the ultimate approach, you could try it, get some more information, and then decide from there. That’s a bit of a change, I think, from where we were, where we thought of devices as something you only do as an absolute last resort.

Matt Angle:
Along those lines, I’m going to be playing a few clips for you from various people who have come on the podcast. I’ve asked them if they had ethical questions to ask at our ethics meeting, and one of the questions relates to prescribing, to patient autonomy, and to the ability to get devices. This question comes from Vanessa Tolosa, who is asking: when does the patient get to decide?

Vanessa Tolosa:
Yeah, I didn’t know if this has already been answered in the medical field in other applications, but how much does the patient have a say in whether they get a device implanted or not, a patient or someone that speaks for them? So, say it kind of relates to what Thomas was saying, say, there’s a company that’s touting really great things about their device and which has gotten the patients really excited and interested, but there’s a lot of questions by experts on that. When does a patient have that right to say, no, I want it versus…?

Sydney Cash:
Are you talking clinical trial now or are you talking an already approved device?

Vanessa Tolosa:
As a product, I guess any brain implant like a clinical trial or, can they demand it?

Matt Angle:
I should say Vanessa was one of the leading technologists in the founding team of Neuralink. And now, she’s a consultant that works in neurotechnology for a number of other companies.

Leigh Hochberg:
It depends on the time point in the development of the technology, and I do think she was referring to many of those steps in her question. The guiding answer, at least in the US medical system, is that the patient, or, if that patient is unable to speak for themselves, somebody who is providing a substituted judgment and speaking on behalf of that patient, decides. The guiding sense is that autonomy rules: it is the patient’s decision whether they’re going to have any procedure performed, and that wins. It is what drives nearly every medical decision. The days of so-called paternalism of decades ago have been greatly, greatly reduced, and some would argue perhaps reduced to excess. So at a first pass, certainly when we’re thinking about clinical trials, surgical procedures are performed with informed consent.

Matt Angle:
But I think her question was going in a different direction, which is what if a patient wants a device that the physician…

Leigh Hochberg:
When we get to that point, so even when you have an approved technology, there are some more market forces, I think, that appear. Even if somebody has a right to a technology, that doesn’t mean that somebody else has a duty to provide that technology. And I’ll look to the ethicists on the call to put a finer point on that. But once something’s available, if it’s actually serving a use, then in all likelihood there is going to be somebody who’s willing to provide that technology. So, I see Tim is ready.

Timothy Brown:
I’ll jump in. I’ll jump in here. I conducted a number of interviews with people using deep brain stimulation for essential tremor, and one of the things that I found with one of the research participants, or interviewees, was that there are roadblocks there. So if a person demands the technology, if they demand the surgery, the device, they can’t necessarily get it immediately, right? So in the case of essential tremor, a person has to go through a regimen of medications, right?

Timothy Brown:
Someone has to make the diagnosis that their essential tremor is refractory, that it resists treatment. And so, only after going through a number of drugs, a number of interventions, a number of therapies, lots and lots of meetings with neurologists and surgeons, and so on and so forth, can they get that technology. And in a lot of cases, that’s a good thing, because you don’t want to get a device implanted when a much cheaper intervention would have worked. These devices are expensive, after all, and I won’t say that drugs in the United States are inexpensive, but they are certainly less expensive than the newest device from Medtronic or Boston Scientific or NeuroPace. And so, there is at least one person in my study who says, I wish I could have gotten this first.

Leigh Hochberg:
Yeah. Syd, I hope, is going to jump in in a moment. To your point that there’s this long, potentially long, period of having to go through several medications or several evaluations, and having all of these clinicians involved in allowing that gate to be opened, even if there’s somebody who’s demanding the technology: I think I heard you say that that’s generally a good thing.

Leigh Hochberg:
It’s a good thing if the group who are the gatekeepers, if you will, is properly calibrated to when that technology should really be applied. And, as in any other endeavor, there are conservative approaches to providing newer technologies, there are leading-edge approaches, and there are first-adopter approaches.

Leigh Hochberg:
And in general, people get their medical care locally, whether locally means the physicians and therapists who live in their town or in their city, or maybe locally also means at the nearest academic medical center. But none of those are necessarily representative of the “right answer.” And I think epilepsy is one realm where there’s really a wide range of approaches to this, and I’d be interested to hear what Syd’s thoughts are.

Sydney Cash:
Yeah, I think you’ve both hit on really interesting issues. In addition to cost, by the way, I think it’s important to recognize that there’s also risk, and it sometimes comes down to the clinicians saying, “Boy, we’re more worried about this than the patient is, and it really doesn’t feel right for us to go ahead and do procedure X, Y, or Z, even though the patient seems to want it, because we’re worried it’s actually going to cause problems. We’ve made that clear to the patient, and yet they’re still pushing for it.” That’s, I think, a really troubling and challenging situation, and one that we’ve all probably been in at various times.

Sydney Cash:
So, besides the cost and so on, I think there’s lots of other reasons why patients might want something, but maybe the physicians don’t feel like it’s appropriate to go forward with it.

Matt Angle:
Isn’t epilepsy a great example, though, because there are so many anti-epileptic medications? How would you know if a patient is refractory? It could turn into a kind of justice-delayed-is-justice-denied situation.

Sydney Cash:
Right, and historically there has been an issue about that. The time between first onset of seizure and surgery has been thought by many people to be way too long, because you could spend lots and lots of time going through those medications. And just getting to Tim’s earliest point, you also have some real potential inequities and structural problems here along all sorts of lines. Leigh was talking about this: your care is local. So, if you’re in rural America versus in Boston, you might have a very different set of options. And if you’re in America versus a third world country, you have a different set of options. If you’re in certain ethnic groups, certain age groups, et cetera, you’re going to have very different access to these kinds of technologies. And that’s going to come in and play a big role as well, and [inaudible 00:29:04] real ethical issues there, no doubt.

Sydney Cash:
I’m going to try and push things a little bit into the speculative zone, if you don’t mind here, Matt, but we were talking earlier about what happens when we get to the stage where it’s not just people like Leigh and I fixing broken brains, but we’re pushing their brains even to other levels, and then who gets it, right? And then everybody for… Heck, if Leigh had an implant that would make me actually remember things, I’d go for it in a second. But should I get that? What makes it possible for me to get that compared to somebody else?

Leigh Hochberg:
Yeah, I hope that we get to that more speculative discussion in a moment. I just want to underscore one of Syd’s themes, which is the risk-benefit equation that we all hope is central to the decision of whether to take any particular medicine, or to begin to use any particular device, and certainly to have a surgery, to have any particular device placed. In order to make that equation, one needs to decide what is the risk and what is the benefit. And any individual patient, a particular user of these technologies, may have a very different way of looking at both risk and benefit than the group of clinicians who are simultaneously involved in helping to make that same decision.

Leigh Hochberg:
And this comes back to the questions of autonomy, or the premise of autonomy: we really do commonly want to rely upon the risk-benefit decision-making of the individual who’s exploring a therapy, and that conflict, if you will, or friction, certainly comes up when the individual who’s seeking therapy has a different sense of that balance than perhaps the team of clinicians who are helping to guide them.

Matt Angle:
Amanda, is there a relevant case, or what’s legally controlling here when patients don’t see eye to eye with their clinicians?

Amanda Pustilnik:
I’m going to do the thing that politicians do and answer a slightly different question before that, because actually they’re closely related. I think the answer to your question depends on how we’re segmenting the kinds of technologies that we’re talking about and their benefits.

Amanda Pustilnik:
So, in the comments that everybody’s made so far, everyone’s been gesturing toward the sometimes difficult-to-discern difference between a therapeutic and an enhancement. We might wind up having different markets for things that are categorized as therapeutics versus things that are categorized as enhancements. But Syd’s point about memory is a perfect illustration that something can be both, even within the same domain; it’s just a question of where the baseline is and how we’re norming it. And then it also segments by price and by risk.

Amanda Pustilnik:
So, if we’re talking about a relatively low risk procedure that is in the nature of an enhancement, so it’s not going to be as tightly regulated, you’ll have a model that looks a lot more like plastic surgery, and where there’s a lot more medical tourism as well to markets that may be less concerned about that very careful risk-benefit analysis and where your purchasing power may go farther.

Amanda Pustilnik:
If we’re talking about a clear therapeutic, then we’re in the more familiar medical territory of cost and risk versus benefit. And cost really has to do with how we pay for things and what the insurance companies determine, probably in negotiation with the BCI companies. At what point is it a worthwhile trade-off? That is partly a medical and economic decision, and partly, probably, a political/horse-trading kind of decision too. I haven’t worked at an insurance company, but I’ve had a peek in a little bit.

Amanda Pustilnik:
The very general standard of when a patient can consent or not consent has a lot to do with… I have not practiced medical ethics in a hospital, but my understanding is that ethicists and physicians, in practice, want to see that the patient can make a rational choice. There’s a famous case of a patient who had a gangrenous limb that was surely going to cause them to die in short order unless it was amputated. And the patient was asserting simultaneously that they didn’t want to die, but they refused the surgery because they couldn’t grasp that it was an either/or choice. If the patient had been able rationally to say, “I’d rather die than have an amputation,” that would have been a determination. If they had, instead, rationally said, “I choose to live, but I know I can’t keep my limb,” that would have been the determination.

Amanda Pustilnik:
But when there’s a failure of rationality that frequently shows up through an impossible contradiction, or where the patient can’t pass more typical cognitive tests and an interview with a psychologist or psychiatrist, then we’re looking for a substitute decision maker. And in medical ethics, as long as we’re talking about truly medical products, there’s a pretty robust framework within ethics already.

Matt Angle:
Do you think that framework will be tested when we’re talking about medical devices and surgery? I mean, especially given the history of transorbital lobotomy, electroconvulsive therapy, there’s been some… I mean, in the last decade, there’s been some precedent of medicating patients against their consent or committing them if they meet certain criteria. But, would the standards for performing a therapeutic surgery or implanting a device be dramatically different than forcing someone to take anti-hallucinogenic, anti-psychotic medication?

Amanda Pustilnik:
Yeah, it’s a great question. I love this question, in part because I’m concerned that the approach that we’ve taken toward pharmaceuticals has left more people untreated and suffering than necessary. There’s a public perception, driven very much by the anti-psychiatry movement of the ’60s and ’70s and the horrible excesses with lobotomy that you’re talking about, that people can be forced into medication. In fact, it’s an incredibly difficult process. It requires a court order. Most hospitals are, understandably, not willing to go to court. And even then, most emergency medication orders are quite brief, so that a person may not be restored to what we would think of as ordinary decision-making before they are allowed to voluntarily discontinue.

Amanda Pustilnik:
Some states do have these programs called Assisted Outpatient Commitment. Famously, New York has Kendra’s Law, but these programs are very much in the minority throughout the U.S. So, the real coercion, I think, with psychiatry… and I had the honor of writing about this with Allen Frances, who chaired the DSM-IV… the real coercion is that people are coerced by their illnesses into these really difficult lives. And even when they seek treatment, because of the starving of the budgets of state psychiatric systems, they’re denied access. So there’s a real liberatory benefit, potentially, to these medications that people largely can’t get.

Amanda Pustilnik:
So your question, of course, is really about what’s the potential for overriding patient will or patient consent with an implantable device. The reason I’m talking about medication first is that it won’t be the case that medication is comparatively easy and devices will be harder, because it’s already the case that getting somebody on medication… which, as you said, is theoretically reversible and certainly doesn’t require surgical intervention… that’s already really, really hard, almost impossible for families whose loved ones are in crisis.

Amanda Pustilnik:
So, it would seem to me to be that much more attenuated and unlikely that a person could be required to submit to an interventional procedure. But then there is the trickiness of the concept of consent [inaudible 00:37:49] could be offered to somebody as an inducement. This will shorten your punishment. This will allow you to be out with an ankle monitor, or some other sort of technological monitor versus a lengthy sentence in really difficult conditions in a state prison or a state psychiatric hospital. Then a person might have the so-called freedom to opt into that, but that is in no sense of free choice. And that’s the domain where I might worry about it a bit more.

Timothy Brown:
This also worries me, worries me a great deal because we run into all kinds of problems with consent within the carceral system. And those problems with consent, they intersect with the very difficult problems we face with racial inequity in this country. Just speaking from a United States context… I know that things are different in other countries, but I do know that colonialism is a problem that many countries face, and they’re going to have versions of this problem… Black people are overrepresented in prisons. And so when we ask the question, “Who’s going to face these very difficult decisions about what they should consent to, what they’re being coerced into, and what they want to do about their freedom, if they want to opt into something so that they can be free?” People of color are going to run into that problem much more often than white folks are.

Timothy Brown:
And I think any analysis of these problems has to, at some point, confront racial inequity, because I know lots of people who’ve been to jail. That’s just what it means to be Black in America. I know lots of people who’ve been in this very situation. But if I talk to my white colleagues and friends, they don’t know very many people who’ve been in that situation.

Timothy Brown:
If I can get a little bit personal, I have a brother that’s been to prison, is in prison right now. And he’s not the only one in my family who’s been to prison, or is in prison right now. And upon being released to parole, he had to agree to take medications for his so-called psychiatric conditions. I don’t know what his specific diagnoses are, but he didn’t take the pills. He just collected them. They were a reminder that he was not free, or at least they were a reminder to me that he was not free, that he still had some court intervention in his life.

Timothy Brown:
So, this is a difficult problem: how will neurotechnology intersect with these already difficult problems that people of color face in this country when they collide with the criminal justice system? And there’s a healthy, robust debate over what people are calling mandatory neurointerventions in prisons, just sort of recounting the history of psychosurgery and how, say for example, someone who was considered a moral deviant because they were homosexual was court-ordered to undergo hormonal therapy.

Matt Angle:
Alan Turing is a very famous and very shameful example.

Timothy Brown:
Yeah. Yeah, he killed himself over this, and that’s a very difficult history to confront because it’s ongoing. We do have a prison industrial complex in the United States, and it’s frightening to think what will happen when some technology like transcranial direct current stimulation is demonstrated as an effective treatment for anything and used widely in prisons for psychiatric conditions. Will it be as simple as someone being told, “Hey, stick your head in this box and you’ll get a few years taken off your sentence”? I could easily imagine that. We don’t need a Black Mirror episode for that.

Matt Angle:
I’m really glad that you brought up this idea. One of the other questions that we have from one of our previous guests broaches the idea of social inequity, but from a different angle. Vikash is a professor at UC San Diego.

Vikash Gilja:
How can we as researchers be proactive in ensuring equitable dissemination of neurotechnology? In the near term, we’re talking about access to medical devices. And in the longer term, we don’t know. We don’t know where it’s going to go. So, what can we do now to avoid societal ails?

Leigh Hochberg:
I’m glad Vikash got there, because that’s exactly where I was headed as well, and I think we teach the same thing in our respective neuroengineering classes and make sure that our students think about these really important questions. The other side of this concern is, we all hope that the technologies that are being developed are actually going to be helpful, that they’re going to restore communication. They’re going to restore mobility. They’re going to provide an opportunity for somebody whose mental health may vacillate from a state of general health and wellbeing to one of mania or depression, or whose cognition is headed in a direction that they wish it wasn’t.

Leigh Hochberg:
So, the goal for these technologies is to restore health and to restore function. And we want to make sure that they are equally accessible, and that is not always the case. It is often not the case. That may be no more true for an emerging neurotechnology than it is for any advanced medical care or medical procedure, but it definitely underscores the challenge. And I do think, because we’re all here talking about this, and, Matt, you’ve facilitated others talking in this room as well, and of course there’s a wonderful neuroethics initiative as part of the NIH BRAIN Initiative, that as these technologies develop, it’s conversations like this, and it’s the emphasis on ensuring that the benefits of these technologies are equally distributed, that are incredibly important. And this doesn’t just fall to society. This really does fall to the engineers, the neuroscientists, the clinicians who are beginning to develop these technologies and are working on them; I think we, collectively, have a role in ensuring that these technologies are ultimately available.

Timothy Brown:
I think I’ll also jump in to say that these problems of equitable distribution also hinge on addressing long-standing distrust of technologists, of big tech. I’m not going to say that people are not justified in how much they distrust big tech, and also the medical apparatus. And we’re seeing it now more than ever, with the conspiracy theory going around about the COVID vaccine being a Bill Gates-funded ploy to implant chips into people. Amanda, you’re making a face because it’s absurd. It’s absurd. It really is. But-

Matt Angle:
It’s absurd because we all know that George Soros is the real mastermind behind this.

Timothy Brown:
Or, who is behind the Dominion [crosstalk 00:46:45] front?

Sydney Cash:
Hugo Chavez.

Timothy Brown:
Hugo Chavez, oh, yes.

Sydney Cash:
From the grave.

Timothy Brown:
From the grave. His spirit moves chips into vaccines. And it’s absurd. But a lot of people are worried about the medical apparatus, and how it’s failed them in the past, and how there are potential hiccups for them in their lives. There are difficult epistemic challenges for anyone who goes to the doctor and might be dismissed out of hand, or not listened to. Your testimony might be misinterpreted. And especially for people of color in this country, for women in this country, going to the doctor is, at times, a belittling experience. So we say, “Okay, how are we going to distribute these technologies justly?” Maybe I could flip that question on its head and ask, “How are we going to convince anybody to use these technologies in the first place, besides intrepid white guys who want to test the newest technology?”

Amanda Pustilnik:
I don’t know Tim. I mean, I totally agree with you on the medical side and this issue of trust and respect.

Timothy Brown:
Yes.

Amanda Pustilnik:
And in the branch of law and neuroscience that I’ve focused on the most, chronic pain, it really disproportionately affects minority communities, and women, and intersectionally women of color: not being believed about their pain, not getting the treatment they need, being suspicious of being used as guinea pigs for different kinds of interventions. But there’s going to be a vast area of BCI development, I mean it’s already starting, where it feels like fun. It’s consumer-oriented, it’s built into a gaming system. It offers to enhance your performance in some way that’s attractive to you, maybe not just your meditation and your sleep, but elevating your mood, or enhancing sexual function, or something really fun that people want. And then I think… although there will still be issues about how it plays out through social structure, because we never leave social structure behind… across populations, people will think less about their privacy and the risks, and whether they trust the institutions and who’s really holding their data. And they’re likely to adopt it, because we see this kind of avid consumer behavior even toward the companies that we say we distrust.

Amanda Pustilnik:
I’m going to bash the lack of privacy on social media in my Facebook post. Right?

Timothy Brown:
That’s right.

Amanda Pustilnik:
Yeah. And then all of that ancillary information that can be collected will come back with different ramifications for communities that are less advantaged and that are more subject to surveillance, because I think the overarching, uniting theme, whether it is a medical technology that invites more distrust or a consumer technology that invites enthusiastic adoption, is that we’re never apart from social structure. We might call technologies disruptive, and in some ways they are. The car disrupts the horse and buggy, the freezer disrupts the iceman. And yet, they’re never totally disruptive, because by being released into cultures with their particular advantages, and structural problems, and structural inequalities, and sets of institutions, they’re going to reinforce and be absorbed into all of those same patterns. And we should predictably expect that. It means that this is the right time to be talking about it and planning for it, as you know in your work.

Timothy Brown:
No, absolutely.

Matt Angle:
We’ve mentioned data a few times, and Chethan, who’s a professor at Emory University, had a question about data and how we might extrapolate, based on how our data is being used right now, to the kind of peril we might face in a widespread BCI age.

Chethan Pandarinath:
An orthogonal concern for us all to worry about, I think when… At least when I got into BCI, the idea, as Konrad is saying, the idea of using BCI for healthy people was a little bit ridiculous, because who would want the type of control that you were able to see 10 years ago? I guess one of the things that keeps me up at night a little bit is, we’ve seen over the last few years especially, misuse of private data. Let’s say Facebook and Cambridge Analytica, for example, really just egregious misuse. And so if that can happen when the data is all external and it’s social media, what can happen when those data are your thoughts that you are not trying to communicate to anybody?

Matt Angle:
God, it’s a good thing that Facebook doesn’t have a BCI program.

Leigh Hochberg:
Chethan was speaking to my opening salvo, if you will. This is so important. The data that are in the neural data are extensive, and they are more than we currently understand. So there is a really important, valuable effort to expand the sharing of data across all of science: open science, make data available, allow others to analyze it, allow progress to be made more quickly by crowdsourcing the opportunity rather than keeping data that’s collected in one lab there for decades. And all of that makes great sense.

Leigh Hochberg:
One of the many challenges is, at least today, for good reason, we obtain informed consent when people enroll in clinical trials, and we do our very best to describe how data are going to be shared. And it isn’t so straightforward to describe what could happen if we widely shared data, the contents of which we don’t fully appreciate today.

Leigh Hochberg:
In our own research, under carefully controlled conditions, if we ask somebody who has tetraplegia due to a brainstem stroke, for example, to watch a cartoon of an arm moving, and then we ask them to imagine that they’re moving their arm the same way, and then we ask them to attempt to move their arm the same way, those are three very different activities, even though there may be no outward evidence that they’re different activities.

Leigh Hochberg:
But from the neural data, and the neural data from a few dozen neurons out of 88 billion, so just a few dozen neurons, we can easily determine whether that person was just watching that cartoon or imagining or attempting to move their own limb. That’s a lot of information about what that person is, quote unquote, thinking at the moment.

Leigh Hochberg:
And that’s perhaps a really boring example in a lot of ways, but even though we may be studying the intended direction of movement of an arm, those firing rates, those neural activity patterns, we have every reason to believe that there’s going to be some difference in those activity patterns depending on the medication that somebody may be taking; that, in many ways, is the reason that some of those medications are being taken.

Leigh Hochberg:
For example, as Syd, who I’m looking at at the moment, knows, that somebody who may be taking a benzodiazepine is going to have a different EEG profile often than somebody who’s not. And it doesn’t take that much skill to be able to look at the two different neural signals and to distinguish that.

Leigh Hochberg:
So if we’re seeing what medication somebody is on, if we’re seeing more context in the neural activity than we might expect just by a quick glance at it, we have to begin to wonder what else is in there. Is there information about cognition? Is there information about intelligence? Just to really go far down the line here.

Leigh Hochberg:
What are we going to learn about the people who have been so extraordinarily generous in joining us in these pilot clinical trials from their neural data that they may or may not have wanted shared? And then how do we consent them to share that anyway in advance?

Leigh Hochberg:
That’s a challenge, and one way to solve that challenge is to say, well, maybe all of those data today shouldn’t be as widely available as other scientific data. And this is posed as a question, not as a statement. And it’s particularly important because, at least in today’s intracortical BCI research, the individuals often become known. We’re not talking about hundreds or thousands of people in a clinical trial. Today, there’s about a couple dozen.

Sydney Cash:
But Leigh, you can go even beyond that. Data from intracortical devices already in clinical use, already recording: that data is becoming the property of the company that’s making the device. And if everything you say is true, which I think it is, in these trials where we’re talking about a couple of people, it’s likely to be true in all the clinical uses where we’re talking about a couple thousand people. I mean, I think we’re already there. We’re already at Chethan’s point to some extent, where this question becomes real in many ways.

Amanda Pustilnik:
It’s an interesting choice of business models. I think that technologists and also entrepreneurs and venture capitalists, in deciding which companies to invest in, can have a role in shaping the industry as being ethical.

Amanda Pustilnik:
Companies can adopt privacy-first practices. If we’re talking about a wearable device or aspects of an implant where it’s possible to do edge computing so that you don’t centralize all of the data, or a company decides — not talking about research and data sharing, which is another complicated, interesting problem that I know less about than Syd and Leigh — companies can decide that they’re not going to become data businesses and that they’re not going to keep their user’s data past a certain point. They can have sunset provisions on how they keep it.

Amanda Pustilnik:
So there are really many choices that can be made before law and policy, just in terms of a commitment to certain kinds of principles of privacy in design that technologists could adopt today. And that is a really, really important ethical contribution. Where I think the individual ethics of technologists and businesspeople and investors has less potential is on the critical issues that Tim was pointing toward: once the device is taken up into society, you really can’t architect around disparate impact on different populations.

Sydney Cash:
But to ask you a loaded question, Amanda, is that actually how you think things are going? I mean, I think given what we see in other industries where data is worth more than gold, in fact, data is worth more than the product, it’s hard to imagine. I mean, I’m just curious whether you’ve seen any evidence that that kind of self-control is actually happening. I would argue that what I see is sort of the opposite of it, but I don’t know.

Amanda Pustilnik:
So I know two cases, and they may well be the exceptions that prove the rule, and just relying on individual ethics to shape an entire really important industry has to be inadequate. The two cases I know of are a company — full disclosure, my sister is their director of business development — it’s called HealthRhythms, and they use multiple sensory signals as a kind of psychiatric early-warning dashboard. And their goal is to create a kind of a blended-care model between clinicians and patients.

Amanda Pustilnik:
And the other one is David Eagleman’s company, Neosensory, which translates auditory signals into vibrations for hearing-impaired people, who then learn to decode these vibrations as if they were sound. He said they decided to keep as little patient data, or client data, as possible.

Amanda Pustilnik:
But those two examples show me that it’s possible, not necessarily that it’s likely or that that should be a dominant strategy for what we do.

Timothy Brown:
Yeah, I think that it most likely won’t be the dominant strategy, but you can see some of the big players approaching their own technologies outside of the neurotech space that way. I know Apple famously has principles in place to not keep data longer than a certain amount of time, and to do processing not in the cloud but locally.

Timothy Brown:
This has been Apple’s game for a long time, to put neural processing in their system on chip, on the iPhone, on the watch, on all of their devices, so that they can do what Amazon does, what Google does, locally instead of in the cloud.

Timothy Brown:
But simultaneously, cloud computing is a democratizing force. If you have a startup company idea, and you only have so much venture capital, it’s very economical to go to Amazon Web Services and get their cloud storage and get their computing platforms set up for your purposes.

Timothy Brown:
And that means that the data is going to be out there. It’s going to be out there somewhere. And it’s easy to share the data. It’s easier to glean certain kinds of insights from it.

Timothy Brown:
And this is, I guess, a segue into something that I’m worried about when it comes to data sharing. I’m fairly worried about the free sharing of data with folks, but I’m also worried about the specific insights that they’ll try to arrive at. I’m worried that the neurotech space is a space where we’re going to be pressured to make certain conclusions about the data, ontological conclusions about what we’re looking at or what kinds of salient features there are in the neural data, as we try to come up with biomarkers of conditions, biomarkers of mental states, biomarkers of things that aren’t neural states, like Leigh mentioned, whether or not a person is taking drugs.

Timothy Brown:
I think those are the spaces where we should be worried. Because in other adjacent technologies so far, this is the example I like to go to, facial recognition, and emotion recognition from facial data, we see people making conclusions or fixing ontologies of emotions. So these are the kinds of emotions that exist: happiness, sadness, fear, et cetera. And there are only seven of them, and there’s an entire psychiatric enterprise behind it.

Timothy Brown:
But will I be recognized in that framework? Will I be recognized by that ontology? Is that the right ontology? And when we think about things like early-warning systems for psychiatric disease, that pull from the neural data to reach conclusions about us, will they operate with the right ontologies? And will anybody know what ontology they’re contributing to when they contribute their data to it? And I don’t think they can.

Timothy Brown:
And I think this goes back to what Leigh was saying earlier about it being very hard to write an informed-consent document, to get informed consent in general, because we have to do this almost impossible imaginative work, this moral imaginative work, to figure out what’s salient and what’s not, and make sure that people are informed of that.

Timothy Brown:
And in some ways it’s unfathomable until we get closer, but I don’t want to say it’s impossible. I want to say that there are certain things that we should do to safeguard ourselves against more obvious and less obvious problems, like working with communities that have been historically marginalized in this space, right. Like don’t make systems that call women hysterical. That’s a thing, just to say it bluntly. But yeah, these are the kinds of worries I’m worried about when it comes to the sharing of neural data.

Amanda Pustilnik:
Can I ask a question on the sharing of neural data? Have you heard of the Global Brain Data initiative? And what do you think of third-party data strongboxes or sharing systems?

Matt Angle:
I’m not familiar with that.

Amanda Pustilnik:
Is anybody else familiar with it?

Timothy Brown:
I’m familiar with it, but I haven’t dug into it enough to give initial impressions. Could you give us a little bit more?

Amanda Pustilnik:
Sure. I don’t know that much about them. I am starting to learn about them. If you don’t mind, can we come back to that in a few minutes and I’ll give you a better summary? I just have to check; I know that I happened to have found it.

Matt Angle:
I won’t give Amanda any time, because I have a question for her now related to data, and specifically what, if any, privilege brain data enjoys. And just by way of let’s say anecdote, if I do something I shouldn’t have done, and I’m pulled into court and asked to talk about it, I can say, “No, thanks.” The government doesn’t have the right to query my brain using my mouth.

Matt Angle:
But if I suffer from a memory deficit and I have to write everything down in a notebook to remember it, it’s much less clear what the standards are for seizing the notebook. And certainly if I have it in my cell phone, there are examples of people being forced to give up cell phone passwords. And how will a brain-computer interface be treated when a part of a person’s cognitive process is kind of uploaded onto a device? Do we have any idea how that would be treated? Can we find good cases?

Amanda Pustilnik:
Americans tend to treat the Constitution as if it were a religious document, and yet it doesn’t offer us nearly as much protection as most of us think that it does. In particular, the Supreme Court has not been called on in any substantial way, and therefore hasn’t developed a jurisprudence around the physical instantiation of what we might think of as our mental or neurological self.

Amanda Pustilnik:
It’s been able to get by on a pretty underdeveloped physical-mental duality, where prior case law has held that that which is physical is not protected by the Fifth Amendment, and that which is mental and testimonial is protected by the Fifth Amendment. So you can invoke your right to be silent and sit there and say nothing, but we could potentially use affect recognition software and have your face tell the story when we show you the picture of the victim, right? And if you’re not the suspect, you’re not the one under investigation, you don’t have a Fifth Amendment right anyway.

Amanda Pustilnik:
But the products of the body have not been given protections. So a person’s blood or breath or urine, pursuant to a warrant or a warrant exception, can incriminate them. So I don’t have to tell the police, “Yes, I’ve been drinking,” but they may be able to compel a blood sample from me. And my blood will say that I’ve been drinking even more effectively than I could speak it myself, especially if I’m really drunk.

Amanda Pustilnik:
But when we talk about neural data, it’s a really interesting hybridization of those two maybe false categories to begin with, because it’s the physical stuff of that which allows us to infer what the person in some senses is thinking or what their physical state is as represented by their brain.

Amanda Pustilnik:
I was talking to a company also based out of MGH, I think, that is developing an fNIRS technology to detect THC intoxication, and they hope to sell it to law enforcement, understandably, because there really aren’t good markers for driving while high in the same way that there are for driving while drunk.

Amanda Pustilnik:
And as a sidebar, there’s a potential equity-improving story with this device, because officers are notoriously biased in who they pull over for being high and who they conclude, when they’re talking to them, is or isn’t high. So maybe a technology could be better, although it may present issues of its own.

Amanda Pustilnik:
But anyway, I would say that the signals picked up by an fNIRS device would have no special protection at this point, just because it’s neurological data, and probably less protection than blood, saliva or urine. The standard for whether your physical products can be seized by the state or searched by the state is governed by the Fourth Amendment. What is your reasonable expectation of privacy balanced against the state’s need? It’s a pretty flexible test that is less protective than most people think.

Amanda Pustilnik:
And when it comes to the physical body, jurisprudence has emphasized the degree of physical imposition that the state puts upon a person, not the amount of informational imposition or imposition on autonomy, selfhood, dignity. So we have more protection against a blood test because it involves a needle puncture through the skin than we would, at this point, for a non-invasive device that can pick up certain types of neurological signals.

Matt Angle:
Some of the most important cases that I remember had to do with bullets and retrieving bullets for evidence.

Amanda Pustilnik:
You’re absolutely right, the Winston case, where the state wished to require a surgery to extract a bullet to see if they could match it for ballistics purposes, and the Supreme Court held that that was too substantial an imposition.

Amanda Pustilnik:
But what’s really interesting about that case is that it’s not an absolute rule. It doesn’t say, “No, the state may never subject a person to surgery, even if they need a ballistics match.” Instead it says, “Well, they didn’t need the evidence that much, and the surgery probably would have had the following risks.” And so it leaves the door open for the balance to go in the other direction if the need is heightened or the risk is lessened.

Amanda Pustilnik:
Same thing with the Kyllo case, which has to do with remote surveillance, which may be more like some future EEG-based system, where Justice Scalia wrote that the use of these thermal imaging cameras to look into a home is prohibited in part because it’s an unusual technology. We expect that our homes can’t be looked into, but this technology violates that expectation, so we’re protected against it.

Amanda Pustilnik:
Now that cars with cameras have mapped the whole world for Google Maps, and you can look and see people standing in their windows with their curtains open, I wonder if that case would have come out differently, because we are aware of a more pervasive degree of surveillance.

Amanda Pustilnik:
So the most we can say based on jurisprudence now is that there is heightened regard for physical invasiveness and physical risk, and an ongoing conceptual lack of clarity and lack of protection for the informational self, which may be more reflective of what we could think of as our truer self. And if we want really clear protections, we should legislate them; there’s a role for legislation and regulation, rather than waiting for the courts to decide it piecemeal based on the cases that arise over time, which can have a really, really long lag by the time things come through the courts.

Leigh Hochberg:
So, Amanda, there is, as you know better than I do, there is a Genetic Information Nondiscrimination Act. There is a GINA. I think I just heard you advocate loudly, maybe it wasn’t advocacy, you were teaching and I was learning, but I think it was actually advocacy for NINA. Do we need a neurophysiologic or neurologic information nondiscrimination act? Do we need to legislate very clearly and at least try to allow what many people would hold as true and dear, which is privacy of thought? And do we need to reduce that, if you will, to legislation?

Amanda Pustilnik:
Yes. Yes. Oh, I love that. So I think we absolutely need a NINA. And the National Academies convened working groups for quite some time to come up with a truly thoughtful draft for how genetic information should be protected. That’s a wonderful model, not just as a statute but as a process for what we should be doing with neural data.

Amanda Pustilnik:
And so, Leigh, if you would like to help me convince the NAS, let’s do it. And technologists and physicians who are in the field should be at the table, because if it’s imbalanced toward regulators or ethicists or lawyers, we ruin everything; it won’t come out right.

Sydney Cash:
Can I ask what I think of as a depressing question? I wonder, outside of this Zoom box or whatever we call it, how much this issue actually resonates elsewhere. I think the degree to which people have clearly sort of given up their information is remarkable. I mean, I do it, and I actually worry about it. Do we really think that people are going to worry as much as we’re worrying? And my worry is that people don’t, and that’s actually going to make all of this much harder.

Matt Angle:
I think people underappreciate how exposed they are when they give their data to private companies. But my experience is that most people are still aware of the danger of the government peeking into your life, because Google and Facebook can’t put you in jail. There’s little recourse when the government starts opening up your mind. I think even some of the most anti-regulation tech CEOs are very… Tech companies are very hesitant to give over their priceless data to the government. No one wants the data to end up there.

Sydney Cash:
This is quite a Rorschach test for where we are on the sociopolitical spectrum. I worry more about industry than I do about the government for various reasons, but there’s no particular… It’s probably more of an emotional thing than anything else.

Sydney Cash:
But I think we give up our data both ways, right. We give up our data to pretty much everybody now with very little thought, and it seems hard to me to convince the world otherwise: if it becomes super convenient to have a chip in your head that helps you do X, Y, or Z, that convenience will trump any desire to maintain some hold on individual information, I suspect.

Timothy Brown:
I think that my biggest concern is with military contracts within corporations. I worry about that a great deal. Or any kind of government contract within a company. But I also think that, yes, the paradox of privacy, this idea that people give up so many of their privacy protections for some sort of consumer benefit, some sort of benefit of a product, is definitely alive and well. And I think that one of the biggest drivers of that paradox is how alluring the technology is, what they stand to gain.

Timothy Brown:
I remember there was this deep skepticism of having audio recorders in your home. Like when Amazon and a number of companies started coming out with these smart speakers, people were deeply skeptical of this idea and said that it wouldn’t fly. But now lots of people have Echo Dots or whatever it is, Sonos smart speakers, HomePods, whatever it is.

Timothy Brown:
And different companies have different commitments to privacy and different methods of data analysis. They can say all they want that they don’t analyze data that isn’t pertinent to asking Alexa what the weather is like, but we know, from other practices of those companies, that they don’t have that deeper commitment.

Matt Angle:
It’s a bit like eating inside a restaurant with your mask off. You’re getting the wrong kind of negative feedback. Initially, everyone says, “Holy shit, I don’t want to get COVID. I’m not going to do that.” But after you put yourself repeatedly in dangerous situations and don’t suffer a consequence, people get more lax, even though the risk is the same.

Matt Angle:
I think it’s the same with the smart speaker, that no one kicked in their door, they didn’t see their voice on television, and they got the impression that there was less risk associated with the smart speaker than there was when they first feared it. But I think it’s more about the sort of non-event feedback that lulls people into complacency with risk.

Timothy Brown:
I also think it’s an ignorance, a certain kind of ignorance about how the technologies work. It’s really difficult to ask someone to be an expert on neural networks and data-sharing practices before they buy a smart speaker. And it will be similarly difficult to say, “Okay, you want the benefit of this neurotechnology, but do you know what’s required to get the thing that you want, where you think a thing and it appears on your computer screen?” That’s a difficult process, and it requires giving up so much.

Sydney Cash:
But it’s analogous to exactly what Leigh was saying before in a way, right. And I think that the average person, even the non-average person, doesn’t really understand the worth, value and utility of the data they’re giving up, just like we don’t really understand what’s all embedded in the neural signals that we are collecting and looking at.

Sydney Cash:
People like Leigh and me bet a lot that there’s a lot more information there, but we don’t know what that information is. Similarly, I think your average consumer doesn’t really understand the degree to which the fact that their cell phone tracks where they are could be used for any number of things they don’t even imagine, both as an individual and as a society. I think there’s a real analogy there to what Leigh was talking about before.

Leigh Hochberg:
We seem to be sharing our fears at the moment for this [inaudible 01:20:24], so let me share one of my fears, which is that, and it’s not the-

Sydney Cash:
It’s spiders. You hate spiders, Leigh, right, is that it?

Leigh Hochberg:
Yeah, well, there’s that, and then there’s this: I am concerned that the fear of improper use, the improper secondary use of neurotechnology, the fear that the technologies, once developed, might evoke or exacerbate social ills, that those concerns might slow the development of incredibly powerful, useful and needed neurotechnologies that can restore communication, can restore mobility and can help people with any number of neurologic or psychiatric syndromes.

Leigh Hochberg:
And that’s not to minimize the importance of what we’ve all been saying, but some part of what we need to do, I think, is to make sure that the technologies that are being developed are used properly, are in the right social context, and that we’re all aware of how they could be used.

Leigh Hochberg:
But at no moment should that yellow flag slow down what is desperately needed right now, because when I walk into a neuro intensive care unit, and there’s somebody who was walking and talking yesterday and is unable to move and unable to speak today, we need the neurotechnologies that are being developed right now so that that person can communicate again.

Leigh Hochberg:
And the moral imperative there to continue the research that’s happening, I think it’s just strong; it needs to continue, and conversations like this help make sure that when we actually get there, the technologies are going to be used properly. But I just want to, I guess, remind myself, if nobody else, that these discussions are not intended in any way to slow down, and I certainly hope that they won’t slow down, the progress that’s so important to make.

Timothy Brown:
Would you all indulge me by allowing me to ask a question? So I was giving a talk at a comic convention, which is probably the most interesting place I’ve given a talk. And we were doing a panel on the future uses of technology, and I was there representing the neurotechnologies and the ethical issues therein. One person, after hearing me speak a little bit about these technologies, asked me a very pointed question, and I want to ask you all the same question.

Timothy Brown:
Do you see the goal of neurotechnology as being to erase or discontinue or fix disability entirely? To put it another way: if neurotechnologies are extremely successful, and there are enough to meet the needs of many people, in a lot of different contexts, with a lot of different disabilities and different conditions, do you think that we’re moving toward a world without disability?

Leigh Hochberg:
As you know, this is a critical question, and it’s been wrestled with, I think, most openly and proximally by the community of people with hearing impairment, hearing loss, deafness, in the context of cochlear implants, where the push to restore hearing is interpreted, and felt more than interpreted, by some as an assault on a culture.

Leigh Hochberg:
So when that same important and broad question is applied to the development of additional neurotechnologies, cochlear implants being one of them, deep brain stimulators, central cortical brain-computer interfaces, spinal stimulating technologies or any of the others that we might speak of, these are options. They are not mandatory. They are not overtly guided by a desire to see them become pervasive.

Leigh Hochberg:
They’re motivated by extraordinary frustration that we don’t have good treatments for many severe neurologic injuries and diseases today, and we are desperate to develop them. And when I say we, I’m speaking about the patients and the families who have these disorders. We are looking for solutions.

Leigh Hochberg:
I’m an optimist. I obviously am biased, but I think that the technologies that are being developed are going to become powerful, flexible, available restorative neurotechnologies. I don’t think that that is motivated by a bias, and I know there’s a concern that reducing physical disability is by itself wrong.

Leigh Hochberg:
I think that was a convoluted way of saying that it is a really important discussion. There will continue for some time to be some very challenging injuries and disorders, and I hope that we can provide the option of tools that would deliver what any individual feels to be an improvement in their own quality of life.

Leigh Hochberg:
But at no point should any of these technologies be held up as, “This is something that you should do so that you can now do something that you were previously able to, but you can’t anymore.” The goal is to foster autonomy, to restore autonomy, to allow somebody to make a decision or to complete an action that they currently can’t, which includes making the decision not to use that technology. And so I think that was a long and inelegant answer to your question.

Amanda Pustilnik:
I thought it was very elegant and persuasive and deeply thoughtful. So I’ve thought in a more situated way about disability in the last several years than I had in the past, because a friend of mine has a son who has nonverbal quadriplegic cerebral palsy. And their experience has helped me to understand better the social determinants of disability, and that disability exists in context. It’s not just a property of physical deficits, but also of how much support you have and how you’re viewed and treated. This is probably basic to anybody who has thought more about disability than I, unjustifiably, had before.

Amanda Pustilnik:
However, his disability has also shown me how heterogeneous and over-inclusive the term disability is. You were speaking about ontologies before. There’s a lot that goes in that bucket, disability, that maybe doesn’t all belong together. This child’s severe cerebral palsy, which affects both hemispheres of his brain, causes him such suffering every day that it couldn’t be eliminated by any degree of social support, although it could be lessened.

Amanda Pustilnik:
That’s different than somebody who might have the label of disability or protection under the ADA against disability discrimination who has high-functioning Asperger’s syndrome, who’s simply perhaps neurodiverse, not disabled. And maybe they don’t belong in the same group.

Amanda Pustilnik:
And so BCIs that can help people with categories of disability more involved with suffering could be quite distinct from the opportunity to have, or the obligation to have, a BCI for something that we might not think of as a disability in five or 10 years, or that might actually be a source of somebody’s strength.

Amanda Pustilnik:
So I think it’s a really important question that, keeping the foundational principles of autonomy and respect that Leigh was articulating, might play out somewhat differently across different groups.

Leigh Hochberg:
Thank you for sharing that, Amanda. I think there’s so many topics that you just touched upon in there. One, just to start, I am optimistic that any number of types of brain-computer interfaces are going to be developed that may become useful and used by people with cerebral palsy to improve communication and mobility. And I’m optimistic that that will be true.

Leigh Hochberg:
And that leads very quickly to something we haven’t had the chance to talk about, which is when is the right moment in the life cycle, if you will, to deploy a new neurotechnology, which is really about when research begins in children. And this is the question I’ve been asked by several families of children with cerebral palsy: when are we starting? When are we starting to ask the questions that will let us help a child earlier?

Leigh Hochberg:
And that is exactly the right question. It’s been wrestled with before with other technologies. It’s certainly been wrestled with with cochlear implants, which are now far more commonplace. And, Tim, I saw you nodding your head before, so please jump in. The development of these technologies is going to be important. And when do we start the research? How do we weigh risks and benefits, particularly this early on in the development of new technologies?

Leigh Hochberg:
Sorry, Matt, just one more thought. And how do we very carefully make sure that we’re being precise in understanding the differences and the implications of different types of disability, cognitive disability, motor disability, disabilities related to mood regulation, and differences across a neurologic spectrum, as you were just describing, Amanda? Those are really important themes for us all to think about as we try to imagine and hopefully deploy restorative neurotechnologies.

Timothy Brown:
I think each of your comments really speaks to the imperative need to figure out how culture will intersect with these technologies. Amanda, you’ve been saying this the entire time. I think we both have. But when it comes to figuring out what the ontologies of disability are, absolutely, and also what cultures will spring up as we navigate the use and the distribution of these neurotechnologies.

Timothy Brown:
I think it’s going to take a lot of work to figure out what those cultures look like or what they should look like. I think the example of cochlear implants is an especially important one, because there are cultures that are springing up around cochlear implants themselves that are especially interesting to me, right?

Timothy Brown:
So I’ve talked to a number of people with cochlear implants who received them at a younger age, and this speaks to the distinction between restorative and enhancement uses of technologies that we don’t necessarily anticipate. Several of them have said that now they feel like they have a superpower over deaf people and hearing people, insofar as when they don’t want to hear something that someone has to say, they can just turn their hearing off. Or if they want to listen to music unencumbered and below people’s awareness, they can Bluetooth-stream their audio to their cochlear implant, which is a thing that people in the cochlear implant manufacturing world have tried to push in this next generation of cochlear implants. And I think it’s fabulous. There is nothing more cyberpunk than that, I think.

Timothy Brown:
And so what kinds of cultures will crop up around the newer forms of neurotechnology? And I think that, yes, the imperative to get these technologies out there for the social benefits that they could provide are absolutely important. Deep brain stimulators for psychiatric conditions, for one, I think we’re all familiar with Helen Mayberg’s early research on DBS for depression, and how many of the people who participated in those clinical trials or those early studies have said that DBS has saved their lives.

Timothy Brown:
But then when we do interviews with people who use similar devices, we find that the cultures that are springing up around those devices are interesting and troubling. So for example, my group conducted an interview with a person who said that they have a parent who sort of keeps track of their moods, and when they’re not in the best mood, will say, “Hey, did you turn your stimulator on today?”

Timothy Brown:
And so going back to the unique issues that come out of neuroethics and neurotechnology, this idea that we have that kind of access to a person’s neurological state and that we can alter it with some level of immediacy that we haven’t had before. What kinds of relationships will form around those technologies, around those capabilities? We don’t know yet.

Timothy Brown:
And it’s just going to be very important for us to keep an eye out and build relationships with the groups of people who will use these technologies so that we can better anticipate how we will change cultures, because we will. And it’s imperative that we do that cautiously.

Matt Angle:
Leigh, you brought up something really interesting just before the break, which was that there are a lot of uncertain and maybe low-probability negative effects of developing BCI, but there are also some very near-term healthcare outcomes that are coming with deep brain stimulation right now, and with early BrainGate work.

Matt Angle:
And I’m curious how everyone thinks about balancing the near-term positives and the long-term negatives. Because obviously, I mean, just to give an example, there are negatives of refrigeration technology. The chlorofluorocarbons used in refrigerators leak and cause holes in the ozone layer. They have a big carbon footprint. There are a lot of knock-on effects to having refrigeration. But it’s also one of the most important health advances in the history of humanity, keeping food cold before you eat it.

Matt Angle:
You could say the same thing about vaccination or clean water and sewage. There’ve been a lot of really big advances that had knock-on secondary effects, and obviously we’re glad that we went through with those developments.

Matt Angle:
I think many people, myself and probably Leigh and Syd included, think that BCI technology and neurotechnologies more generally are a really big advancement in medical technology that will have a primary effect that is very good. And how do we think about balancing that against some of the things we’ve talked about already?

Timothy Brown:
Can I just jump in and say that I don’t think that anything we’ve said so far should or will result in a barrier to progress or innovation. I don’t think that’s the case. I think there is a tendency to think of neuroethics as a kind of policing activity. And I’m just here to say, and I think it’s very important to say, that I don’t imagine myself as an ethics cop. It’s not my job to go into people’s laboratories and tell them to stop doing research.

Timothy Brown:
And I think Leigh knows this about me personally, but I want to say to all the listeners out there that the work of neuroethics isn’t to say, “Stop doing this research,” unless it’s egregious, unless there are certain moral boundaries that the research crosses or the product crosses. That’s when a person should be told to stop doing that research. But I don’t think that’s where we are.

Timothy Brown:
I think that where we are is at a turning point where we can choose to learn from the mistakes of the past, from adjacent technologies, similar situations, and things that we know about social inequity, racial justice, disability communities and what they want, the lessons of research methods that we know exist, like community-based participatory research, and resources that are underutilized, like the knowledge of all the communities that would use the technologies that we want to see succeed.

Timothy Brown:
And if we make use of all of those resources or we build the communities that we need to build to see progress, then that’s just a different form of progress. It’s not stymied progress. It’s a reconceptualization of progress such that it’s equitable, just and so on.

Timothy Brown:
So I think, yeah, how do you manage or how do you balance between the short-term benefits while also staving off the long-term harms? You just do things the right way in the first place, which-

Sydney Cash:
Yeah. To amplify, I think what Tim was saying is exactly right. And to go even further with it: by thinking of these harms, by thinking of these issues, by starting to come up with ideas like NINA and things like that, by having this all out in the open early, I think it actually has the opposite effect.

Sydney Cash:
It doesn’t stymie the research. It actually accelerates it, or accelerates the utility, it accelerates the adoption, because then we don’t have to worry that we’ve done something in a way that people are going to view as deleterious or problematic or what have you. And so the adoption will be faster. The acceptance, when it’s done appropriately, will be better. We’ll reach the communities that might not otherwise be willing to look at it, because it’s been done in this way and these issues have been dealt with.

Sydney Cash:
So I think there’s actually a positive, an accelerant, by actually having these kinds of discussions and looking at these problems and thinking about them ahead of time.

Amanda Pustilnik:
Yeah, I hope so, and I agree, and I describe myself as very tech positive. One thing that I like about these conversations is that I think they help us look at areas where we perhaps should be activist beyond technology, and that the social problems we’re identifying as potentially then having ramifications throughout the use of technology really are social problems that need addressing.

Amanda Pustilnik:
The overmedication of kids in foster care, particularly kids of color in foster care, that’s not a drug development problem. We could identify that problem and it shouldn’t stifle pharma innovation. But if we’re aware and conscious, if we’re working within drug development, that that’s how these agents might wind up being used, maybe that pushes us individually, and our institutions, to engage with those social dimensions of the problem, for how we think our inventions, yours, not mine, may be used downstream.

Timothy Brown:
They’re yours too. You’re part of it.

Matt Angle:
It’s a community effort.

Timothy Brown:
That’s right.

Matt Angle:
Well, thank you all for your time. I think this was really helpful, and I hope that more comes of this. Maybe it’s NINA, or maybe it’s just more conversations with all of us. And I look forward to when I can see you all in person and meet you at the real pub.

Timothy Brown:
It’ll happen one day.

Leigh Hochberg:
Thank you, Matt.

Timothy Brown:
Thank you so much.

Sydney Cash:
Yeah, it’s a great discussion.

Amanda Pustilnik:
And wonderful hearing all of your perspectives. Tim, Syd, Leigh, I really appreciate it. Thank you.

Timothy Brown:
Yeah, it’s great to hang out with y’all, and this was great.
