Optics in Neural Recording, Part I

Episode 11 - Let there be Light

Apr 11, 2022
Welcome back to Neurotech Pub! This episode is the first of a two-part series on optical methods for recording and stimulating neural activity.

Our guests on this episode are Elizabeth Hillman, PhD, Mark Schnitzer, PhD, and Jacob Robinson, PhD. So far, our technical dives have focused mainly on direct electrical recording and stimulation of neural activity, but in this episode we deep dive into the advantages that all-optical interfaces might have over electrical interfaces, and the challenges in developing them.

In addition, we talk about running highly collaborative, interdisciplinary projects that span traditional physics and engineering with biology, a theme that is ever-present in neurotech and is also highlighted in part two of this series. 

Cheers!

Show Notes

Latest news & publications since recording: 

>> Hillman Lab: New publication on SCAPE in Nature Biomedical Engineering

>> Robinson Lab: Review article in Optica on Recent advances in lensless imaging

>> Robinson Lab: BioRxiv pre-print on in vivo fluorescence imaging

1:23 | The Heart and Soul of a Paper

2:32 | Ultrasmall Mode Volumes in Dielectric Optical Microcavities

3:01 | Robinson Lab

4:01 | Hillman Lab

4:07 | Zuckerman Institute

4:15 | Schnitzer Lab

4:25 | Howard Hughes Medical Institute

4:41 | Miniature Fluorescence Microscope

9:02 | Discovery of DNA Structure and Function

10:25 | Hodgkin–Huxley Equations

13:49 | Vessel Dilation in the Brain

16:03 | State of the art of Neural Optical Recording

18:03 | Long-Term Optical Access to an Estimated One Million Neurons in Mouse Cortex

24:56 | Watch the Crystal Skull video

27:45 | High-Speed Cellular-Resolution Light Beads Microscopy

29:54 | Relationship between spiking activity and calcium imaging

32:50 | Analytical & Quantitative Light Microscopy [AQLM]

32:59 | Imaging Structure & Function in the Nervous System

35:22 | NIH Brain Initiative Cell Census Network (BICCN)

35:54 | Allen Brain Atlas: Cell Types

40:17 | A Theory of Multineuronal Dimensionality, Dynamics and Measurement

46:19 | Dr. Laura Waller's DIY Diffuser Cam

50:38 | FlatCam by Robinson Lab

53:42 | Advantages of MEG

55:06 | Random Access Two Photon Scanning Techniques

56:07 | Swept Confocally-Aligned Planar Excitation (SCAPE)

58:47 | Optics Systems for Implantable BCIs

1:00:43 | GCaMP - Janelia GECI reagents

1:01:33 | DARPA NESD Program

1:04:06 | SCAPE Microscopy for High-Speed Volumetric Imaging of Behaving Organisms

1:07:00 | Glial Response to Implanted Electrodes

1:07:07 | Brain Tissue Responses to Neural Implants

1:09:36 | Two Deaths in Gene Therapy Trial for Rare Muscle Disease

1:10:46 | Intrinsic Optical Signal due to Blood Oxygenation

1:11:11 | Coupling Mechanism and Significance of the BOLD Signal

1:12:10 | DARPA invests in Treating Mood Disorders

1:12:57 | Amygdalar Representations of Pain

1:13:48 | Fast Optical Signals: Principles, Methods, and Experimental Results

1:14:12 | Dr. Larry Cohen's early work in Neurophotonics

1:14:42 | Linear Systems Analysis of Functional Magnetic Resonance Imaging | Additional Resource

1:16:20 | Flavoprotein Fluorescence Imaging in Neonates | Additional Resource

1:18:02 | Pumped Probe Microscopy

1:19:26 | Biological Imaging of Chemical Bonds by Stimulated Raman Scattering Microscopy

1:19:36 | Coherent Anti-Stokes Raman Scattering microscopy (CARS)

1:19:55 | Min Lab @ Columbia

1:20:06 | Glucose Analog for Stimulated Raman Scattering

1:20:39 | Emerging Paradigms for Aspiring Neurotechnologists

Want more? 

Follow Paradromics & Neurotech Pub on Twitter  

Follow Matt, Elizabeth, Jacob & Mark 

Listen and Subscribe

Read The Transcript

Matt Angle: 

Welcome back to Neurotech Pub. So far, our technical deep dives have focused on electrical methods for recording and stimulating neural activity. But in the next two episodes, I want to talk about optical methods to do the same. What advantages would all-optical interfaces with the brain have over electrical ones, and how far away are they? What are the major remaining challenges?

Our guests today are Elizabeth Hillman, a professor at Columbia University; Mark Schnitzer, a professor at Stanford University; and Jacob Robinson, an associate professor at Rice University. In addition to talking about optical methods, we also hit on some really general points about how to run highly interdisciplinary and integrative projects that span across physics, engineering, and biology.

This is a theme that we'll pick up again in our next episode. And I think it's fascinating to hear the different perspectives on collaboration across fields and disciplines. If you want to dig in on the topics we mentioned today, be sure to visit paradromics.com/neurotechpub for detailed show notes and references, and make sure to follow us on Twitter @NeurotechPub for the latest announcements about our upcoming episodes. Thanks, and enjoy the episode.

Matt Angle:

Thank you everyone for coming here. By way of introductions, I was hoping that everyone could introduce yourself and then I'd be really interested in knowing if there's a particular paper in your career that you're very proud of. Not necessarily something that's highly cited or you're very famous for, but something that you felt like you poured your spiritual essence into through your career. I think a lot of academics have one paper that they're particularly attached to, almost independent of the impact that it will end up having. And I think that would be a great way for the audience to get to know you.

Jacob Robinson:

That's super embarrassing, Matt. How do you want to do this?

Matt Angle:

You definitely have to start first now that you feel embarrassed.

Jacob Robinson:

So I was thinking about this. It is a really interesting question. And the thing that first comes to mind was a paper that I wrote my first year in grad school. And it has nothing to do with neurotech, and that's why I'm embarrassed by it. It's a short theory paper that we wrote for PRL, and it was when we realized that the mode volume in our resonant cavity could be made smaller than lambda over two, which I think I was surprised by, because I had always thought that the smallest box you could confine a photon to was half a wavelength.

Unless you wanted to use a metal and plasmonics or something like that. And then we showed that we could do that in a dielectric. I think there were three figures and three equations. And I was just happy that there was, like, oh, there's a new result that seemed kind of fundamental. And it has nothing to do with my career now. So that's, I guess, why it's embarrassing.

Matt Angle:

Did you feel like you'd earned your spurs?

Jacob Robinson:

I think that was part of it too. Yeah. I was a physics major as an undergrad, and then being able to publish in PRL my first year made me feel like, all right, I'm a scientist now.

Matt Angle:

Oh yeah. And who are you?

Jacob Robinson:

I'm Jacob Robinson, associate professor in electrical and computer engineering at Rice.

Matt Angle:

Thanks. Elizabeth, can you think of a paper that you have particular attachment to?

Elizabeth Hillman:

Your question made me sad. Every single paper that I put out is harder than the children I gave birth to. And then, unlike my children, I can't ever look at them again. Every single one I would say is just a labor of love. I have a tendency to always want to do a new thing, so usually the paper I'm writing is somewhat new to me. It's always kind of a big deal when we go after a particular thing. And it's also really hard to bring the students and the postdocs along with you when you're sort of taking risks, and you have to warn them, bad things might happen or good things might happen. And you never really know.

Matt Angle:

And can you introduce yourself as well?

Elizabeth Hillman:

I'm Elizabeth Hillman, I'm a professor of biomedical engineering and radiology at Columbia University and part of our Zuckerman Mind Brain Behavior Institute.

Matt Angle:

This time we'll do it right. Mark, can you introduce yourself and then tell us about a paper?

Mark Schnitzer:

Sure. I'm Mark Schnitzer. I'm a professor of biology and applied physics at Stanford University. I'm also an investigator of the Howard Hughes Medical Institute. In terms of a favorite paper, I think Elizabeth said it very well. I feel like I pour my heart and soul into every paper, so I am proud of essentially all of them, but I do value having impact on the rest of the research community. And I think, looking at it through that lens, which is only one lens that one can take, I am pleased by the impact that the miniature microscope technology has had on other people's research.

Matt Angle:

Perfect. Thank you. Something that the three of you all have in common is that you all have very strong engineering and physics backgrounds. I'm curious, have you had an experience coming into biology where you were humbled by the complexity or messiness of a system that you initially thought was quite simple? I think that's often a shared experience from a lot of people who come from the physical sciences and then start getting their hands dirty in kind of the wet stuff. And I'm curious if you can think of something in your careers that surprised you.

Elizabeth Hillman:

For me, it's been a little bit the other way around. Sometimes I've found things have been reasonably simple when I've been told they're very complicated. The physics way is to sort of look at the really big picture and the small picture and try and sort of say what makes sense here. And then just to go after that, even though people tell you that you're wrong and it's not going to work.

And so it's a little bit the other way around for me, but then maybe I just haven't actually gotten into the messy complexity enough to really appreciate it. I think in neuroscience, a lot of people focus on very small things. They're very specific, very selective, like one area of the brain or one type of cell or one organism. And I can't do that. I'm all over all the organisms, all the scales, all the time scales.

And I'm sort of looking for these unified ideas rather than the little pieces. So when I find someone who really intensely knows a lot about one brain region, but nothing about the adjacent brain region, then I feel like there's maybe some simpler things that we're missing rather than the massive complexity, which is why it's hard to publish my papers, I suppose.

Jacob Robinson:

It's interesting to hear you say that, Elizabeth. I wish I had that experience. I often look at ways to stimulate neural activity using different physical mechanisms. Recently, I've been very frustrated by the variability in cell responses, coming from the fact that it's very hard to prepare a group of cells the same way in two different experiments. In that sense, the things I've been looking at, specifically magnetic stimulation of cells, are incredibly variable, according to things that I can't control. And so I feel like I've been humbled by the fact that I like to think of cells as being the same every time, but they are so different on multiple time scales.

Mark Schnitzer:

Obviously, one of the major differences, among several, between physics training and biology training is that physicists are not trained to think about function in the same way: relationships between structure and function, relationships between dynamics and function. Function is not part of the story in the way that it is for someone who's been trained in the classical biological sciences.

And so, learning to relate what we're finding in the lab to how it might be useful in some capacity for an organism certainly is one thing that I've had to stretch myself for. And to be a little bit more specific, that means learning how to interpret data in a certain way. So, it's all well and good to say that one is thinking about function, but how does one do that in practice? I think it's how you approach the interpretation of data, and how you use the existing data to generate new hypotheses about the next experiment, for example. Another aspect, I think, is stressing the importance of mechanism. Now that I've been in neuroscience for a number of years, I often find, at least to my own taste, that some of the papers of other physicists and engineers, which I might not have regarded this way a while ago, now strike me as too descriptive, not seeking the underlying mechanisms that would help us to make new hypotheses and predictions. And so in my own work, that's one way in which I have evolved: to try to stress the pursuit of mechanism.

Jacob Robinson:

Mark, can I ask what you mean by mechanism?

Mark Schnitzer:

Sure. An underlying mechanism for a process is more than just, say, a description, whether it be qualitative or even parametric. You have your hands on mechanism when you have identified, say, the fundamental parts and components of whatever process you're considering, and you have your hands on their means of interaction. And so when you have those aspects together, it means that you can generate, for example, new predictions about entirely novel circumstances, based on your understanding of the parts and how they work together, in a way that is simply not possible when you just have a description.

An interesting way of looking at this is to look back in history and to see what mechanistic understanding has brought us, and to do this, maybe I'll make reference to two examples that most people know. So, one would be the double helical structure of DNA. If you look in the literature, the discussion about, say, genes and genetics changed entirely with the discovery of that structure, and it led to all sorts of new ways of thinking and new predictions.

And a little bit closer to home in neuroscience, you can see how the Hodgkin–Huxley equations also led to non-trivial predictions based on their understanding of the mechanism for action potential generation. And I think a great example is anodal break. So if you go back to the original Hodgkin–Huxley papers, the model makes a somewhat non-intuitive prediction, which they called anodal break, and which we would call rebound firing today: namely, that if you hyperpolarize the cell, you can actually generate a rebound spike. And that kind of totally non-trivial prediction is what you get from pursuing mechanism.
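
A minimal simulation makes the anodal break prediction concrete. The sketch below integrates the classic Hodgkin–Huxley squid-axon equations with forward Euler, applies a hyperpolarizing current step, and releases it; with a strong enough step, the voltage overshoots on release and fires a rebound spike. The step amplitude and timing here are illustrative choices, not values from the original papers.

```python
import numpy as np

# Classic Hodgkin-Huxley squid-axon parameters (mV, ms, mS/cm^2, uF/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 60.0                              # time step and duration, ms
t = np.arange(0.0, T, dt)
# Hyperpolarizing step (illustrative amplitude): -10 uA/cm^2 from 10 to 30 ms
I_ext = np.where((t >= 10.0) & (t < 30.0), -10.0, 0.0)

V, m, h, n = -65.0, 0.05, 0.6, 0.32             # resting initial conditions
V_trace = np.empty_like(t)
for i, I in enumerate(I_ext):
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I - I_ion) / C_m                 # forward Euler on the voltage
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    V_trace[i] = V

# During the step, h rises (Na+ de-inactivates) and n falls; on release the
# cell overshoots threshold and fires the rebound ("anodal break") spike.
print(f"peak V after release: {V_trace[t > 30.0].max():.1f} mV")
```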

Matt Angle:

It was actually fascinating how much insight they had without having any idea of the protein structures involved in terms of inactivation of the channels. And they really predicted a lot.

Mark Schnitzer:

Well, they discovered inactivation.

Matt Angle:

Yeah. But even when you see the kind of cartoon models they have for how a channel might work, it's actually, it's surprisingly on point without having any structural information.

Mark Schnitzer:

Yes. But their studies were very mechanistic.

Matt Angle:

What about the fact that biological systems have so many free parameters that you can't possibly make a model that adequately captures all of them? You have to have simplifying assumptions, you have to kind of lock things down and take a kind of frustratingly empirical approach to doing biological work. I'm not sure if you've encountered that from an experimental standpoint as well, in terms of all of the different ways that the experiments need to be controlled, or the kind of black magic involved in ...

I don't know what the imaging field is like, actually. If you pull a bunch of electrophysiologists together, they're going to talk about whether they wrap the tin foil clockwise or counterclockwise and what color tie they wear when they're doing single-cell patch recording. Is there that kind of voodoo in the imaging world as well, or is that more enlightened?

Mark Schnitzer:

I think there is. I think a little bit of it has to do with the quality of the images, and it's a little bit separate from the issue of variability and the applicability of various models. So I think there are some tricks that might be better described than they presently are in the methods sections of the papers in the literature. But the question that prompted this, about biological variability and the realm of applicability of models, I think in some ways comes down to individual variations between, for example, mice and their behavior, and the non-uniformity of cells, even of a given nominal type, as Jacob was saying, right?

And the fact that all models, even in physics, have a realm in which they're applicable and other realms in which they are not, classical mechanics being a sort of obvious example, and there are many other examples. So, there are successful models in neuroscience, right? But I think the question of whether or not your system falls into the theory's realm of applicability, and exactly what the boundaries of that applicability are, is maybe harder to define in neuroscience and harder to enact in the laboratory than it might be, say, in a physics experiment.

Elizabeth Hillman:

The classic approach is to say, I have this hypothesis, I need to design a really clever experiment that's going to isolate all the variables and make a measurement that's going to answer whether or not my hypothesis is correct, right? What imaging has let us do is observe. And I don't know that there's been enough just pure observation, right?

So, we have a phase in the lab that we really call hypothesis finding where we just collect a lot of data. And I'd say we spend 80% of our time in the lab actually doing image analysis, not collecting data particularly, but the rest of the time we're collecting data. There's so many things when you dig in and really ask those questions, you say, but do you actually know, do you know how long this process usually takes, or like, I've done a lot of work on vessel dilation in the brain.

And we realized early on that we just didn't even know how quickly the vessels dilate, how far they dilate, what patterns they formed, and whether they were always the same every single time you did a stimulus. Really fundamental things you need to know if you're going to come up with a model that's going to actually give you some understanding of how the system works.

But I think because getting data used to be so difficult, right, doing those experiments used to be so challenging, you had to repeat things over and over and over again to get some kind of measurement, it was a lot harder. Now, with all these technologies, Mark's things, the things we're doing, widefield, the 3D microscopy stuff, we have this luxury of being able to just observe and formulate these kinds of parameters within which to generate hypotheses and build models.

We're relatively controlled in what we do but my approach is more like, we have these tools. We may as well do the things that are reasonably easy to do with these tools that people haven't had the ability to do before and sort of explore what's going on. So right now we're super into just what we call real time activity. What is usually called spontaneous, annoying background noise that we don't want to look at, but nobody's looked at it for a long amount of time and looked at how it relates to behavior and what it might actually mean in terms of the brain.

And we have the ability to do that. So I'm a little bit more sort of passive in our approach, right to just sort of observe and then try to establish the parameters, the benchmarks that we are then trying to fit models to. Once we have those, then we can do the hypothesis testing.

Jacob Robinson:

Thinking about your question, Matt, the comparison to electrophysiology, and we do a little bit of both in my lab, I don't do as much imaging as Mark and Elizabeth. But what I've noticed is maybe the difference is that when we're trying to do electrophysiology, we spend a lot of time up front. That's where the black magic is, right? Like, where do I stand in the room? And what lights do I have turned on? It's all prep to get the data. And then the data often is relatively easy to interpret. Our imaging experiments are a little bit different. I mean, there's still a lot of prep that goes into it, but then a lot of the tinkering and tweaking is at the back end. What do I do with this massive data set to extract something that I think is meaningful?

Matt Angle:

Talking about imaging now, I wanted to ask you, where are we with optical imaging of neural activity, particularly population dynamics? So if I have access to a square centimeter of cortical surface, what's currently the state of the art for optical recordings in terms of the number of neurons that I can resolve and the temporal resolution at which I can resolve them? I realize there's not one single answer, but I'd be interested in hearing some of those answers. Any transgenes, any optical schemes; it doesn't have to be implanted; open craniotomy. One centimeter of cortical surface. What can we see right now?

Mark Schnitzer:

It obviously varies a little bit depending on the conditions, right? So I would say on the order of magnitude of a thousand neurons, but it varies depending on, just to name a few factors, the promoter that you use to label the neurons with your indicator, right? So, are you labeling just excitatory neurons, or are you labeling all neurons? How active the mouse is, if it's a mouse, and what task you might have posed to the animal. Depending on which area you're looking at, and you didn't specify in your question, but if the behavior the animal's doing is actually engaging the area of tissue that you're looking at, you are more likely to see more neurons active.

And then, depending on which cell extraction method you're using, you're more likely to pick up active neurons. Or the flip side of that is, if you have a neuron that is detectable, that's there, but doesn't actually do much during your experiment, what value is it? It's not much to analyze, besides the fact that it wasn't active. So, there are some details to be thought of. And these questions about how many neurons are really approximate, because when posed that way they don't reference the value to the experiments and the value to the experimental conclusions that can be drawn. But order of magnitude, it's approximately a thousand. In some of our experiments, we've recorded over more area and gathered many thousands, just looking at subsets of neurons. So, I mean, it's a rough answer for you.

Elizabeth Hillman:

I would answer it maybe a little differently, which is, so we do widefield imaging, an example of which you can see behind Mark, and we therefore capture hundreds and hundreds of thousands of neurons. And I think for me, as I said already, the big thing that has me excited is the ability to get sufficient signal to noise to record dynamics in real time at high speeds.

So we really can look at what's evolving in the context of exactly what the animal is doing moment to moment. We don't have to average. I know a lot of neuroscientists completely balk at the idea that we are not getting single cells, right? But in fact, the richness of that data blows my mind. We've got 3D two-photon SCAPE data of hundreds and hundreds of neurons, single cell, in a beautiful 3D volume in the mouse.

And in terms of sort of the scientific questions I think I can answer, I am so much more into the widefield, because we are seeing these dynamics across the brain. We're seeing intracortical connectivities or correlations. We're seeing these patterns that are just glaring out there. And in fact, the physicist in us looks at this and says, well, if you measure 200 neurons, you're not going to see the same thing as if you measure the average of 10,000 neurons.

This is again one of those things where I think there's this tendency to think we know what we need to measure and to push that. And so, we need 1,000, 10,000, no, 20,000. But in fact, I think we should use all the tools that we have to actually start to figure out what might be interesting and important. And the other thing, I'll put a shout out there for other cell types, as well as vasculature.

Mark Schnitzer:

To Elizabeth's point, if I might jump in, the scientific value of having, let's say, 1,000 neurons very much depends on what you want to do with the data and the question you're asking. And in many circumstances, I might rather have, say, 250 neurons from four different areas and try to understand how those areas are interacting than 1,000 neurons from the same area. I mean, it depends on the question, but having some more diversity in the data, different cell types, whether they be genetically defined or defined by their projection patterns, which is something we can do in imaging, is very important for determining what questions we can address. And it's really not just about the counts, which I think is what Elizabeth was saying.

Matt Angle:

And when we talk about this wide field imaging, where maybe you can't resolve single neurons, how many neurons do we think are contributing to a unit that we can isolate from other units? Or is there some way of thinking about that in comparison to-

Mark Schnitzer:

If I may, I think I would take issue with the premise of your question because you very much can resolve individual neurons in wide field imaging, and I'm showing you here on my Zoom background.

Matt Angle:

Oh no, sorry. But I think Elizabeth was making a point that perhaps a technique that captures more, but has a little bit lower resolution is more useful for understanding system dynamics. So I totally understand what you're saying, but I guess my question would be maybe posed to Elizabeth. When you talk about recording from hundreds of thousands of neurons, but not being able to resolve, what does that mean not being able to resolve?

Elizabeth Hillman:

My work overlaps with fMRI as well. I have friends who do fMRI, which I know is a dirty word in neuroscience, but the human connectome people have used all this resting state fMRI data to define these minute regions of the brain. And anytime that you go to some atlas that's generated from the human brain, you've got all these different functional regions that are defined, that are involved with your lip and your tongue and your fingernail and all of these different areas, right?

So we know that there are these functional units of the cortex. I can tell you that when we collect very high speed, very broad strokes, not sparse expression but fairly dense expression, like Thy1, what we can do is look at which regions are coherent, what's the range over which you see correlated activity in these different regions.

And we pull out these nice little topographical regions that actually then correspond to those functional units, areas of the sensory cortex or pieces of motor cortex, that we can then link to activity. So it's certainly true, if you zoom in on them, you're going to find a whole bunch of cells that are doing a whole milieu of different things. But when you average them all together, they're showing you an average, over this spatiotemporal unit, of an increase or a decrease in their ensemble behavior. And I'm just saying that from a shot-noise perspective, it's actually much harder to calculate what they're doing. I always use the analogy of voting. If you were to ask 20 people on the street in New York City who they were going to vote for, and then tried to predict the outcome of the general election, you'd be in big trouble. You need to average over a large population to get a sense of, in some cases, even whether activity is just going up and down.
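
A quick numbers-check of the shot-noise arithmetic behind the voting analogy: with Poisson photon statistics, the fractional noise on a pooled signal falls as one over the square root of the total photon count, so a small ensemble modulation that no small subsample can resolve becomes obvious in a large average. The photon counts and the 1% modulation below are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
photons_per_neuron = 100      # photons contributed per neuron per frame (made up)
modulation = 0.01             # a 1% ensemble-wide increase in activity

def detection_snr(n_neurons, n_trials=20000):
    """SNR for detecting the 1% change in the pooled photon count."""
    lam = photons_per_neuron * n_neurons
    base = rng.poisson(lam, n_trials)
    up = rng.poisson(lam * (1.0 + modulation), n_trials)
    return (up.mean() - base.mean()) / base.std()

for n in (20, 1_000, 100_000):   # "20 people on the street" vs. a real sample
    print(f"{n:>7} neurons pooled -> SNR ~ {detection_snr(n):.1f}")
# SNR grows as sqrt(total photons): ~0.4 at 20 neurons, ~3 at 1e3, ~30 at 1e5
```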

And the fact that we see this coherence, the fact that we see these little functional units, and we see the way that they're correlated to other areas way across the brain in what they're doing, I can't back off from thinking that's important. It's definitely super interesting to go in and then say, Okay, of those maybe 10,000, 100,000, I don't even know how many neurons are contributing to that ensemble signal, what are they all doing individually and how does their behavior change from moment to moment?

But the fact that they are exhibiting these ensemble properties, I find super interesting. Because I actually know people who are going in and looking at one two-dimensional plane with their two-photon microscope and sampling 100 cells, who are struggling to predict whether the animal's whisking or not. Whereas it's just there for us. It's just glaring in those more averaged signals. So it's perhaps a philosophy, but once I started getting data like that, I couldn't ignore it. That's all I'm saying.

Matt Angle:

But I guess, Mark, to your point, it seems that it is possible to collect very large populations with close to single neuron data. Maybe you could tell us a little bit about what's going on in your background.

Mark Schnitzer:

Oh, sure. You can get single neuron data and we've done things to verify that. For example, done concurrent one photon and two photon recordings, and you can basically follow the traces in each and things like this. So there's no question you can get single cell data with widefield or epifluorescence.

So in my background, you can see data from a subset of what we call the crystal skull. I'll switch it to this one, which gives you not a full picture of the crystal skull, but a bigger picture. So in this preparation what we do is we permanently replace the dorsal cranium of the mouse with a curved glass window whose radius of curvature is matched to that of the natural cranium. And if this surgical implantation is done well, the optical window will remain clear without any glial inflammation for months at a time, allowing you to come back repeatedly and image cellular dynamics across about 35 to 40 brain areas, depending on the details of the prep.

That's what we're seeing here. And then, probably over the broadcast, you can just barely make out some of those tiny, visible flashes, which are individual neurons. This is a mouse expressing a calcium indicator in layer 2/3 cortical pyramidal cells, but if I switch the Zoom background, now you can see a magnified view. So this is the crystal skull prep over here, a portion of it, anyway. You can see some of the vessels. This is the midline. And this inset is shown magnified over here. And this is a video of delta F over F, the relative change in fluorescence. And now, at this magnified scale, you can more clearly see the individual cell bodies that are activating, as well as some of the hemodynamics that Elizabeth was talking about.
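
For listeners new to the jargon, delta F over F is just the fluorescence change of a pixel or cell relative to a baseline estimate. A common recipe is sketched below; the rolling-percentile baseline, window, and percentile are conventional illustrative choices, not the specifics of Mark's pipeline.

```python
import numpy as np

def delta_f_over_f(F, fs, win_s=30.0, pct=10):
    """dF/F for a fluorescence trace F (one value per frame).

    The baseline F0 is a rolling low percentile, which follows slow drift
    (photobleaching, hemodynamics) while ignoring brief transients.
    """
    half = int(win_s * fs / 2)
    F0 = np.array([np.percentile(F[max(0, i - half): i + half + 1], pct)
                   for i in range(len(F))])
    return (F - F0) / F0

# Synthetic example: slow bleaching drift plus two calcium-like transients
fs = 20.0                                   # frames per second
t = np.arange(0.0, 120.0, 1.0 / fs)
F = 100.0 * np.exp(-t / 600.0)              # drifting baseline
for t0 in (30.0, 75.0):                     # transient onset times, seconds
    F += 20.0 * np.exp(-(t - t0)) * (t >= t0)
dff = delta_f_over_f(F, fs)
print(f"peak dF/F: {dff.max():.2f}")        # transients stand out from the drift
```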

Matt Angle:

And roughly, what are the numbers there?

Mark Schnitzer:

In terms of the number of neurons you can get?

Matt Angle:

Yeah, what's the size of the window, the number of neurons, the sort of speed at which we're looking at?

Mark Schnitzer:

Right. So it's roughly what I said before. The present limitation with the crystal skull actually has to do with the substantial curvature of this preparation. So you and our viewers may know that most people do cranial window implantation with a flat optical window. And so, as one attempts to image larger numbers of brain areas and more brain tissue, this means, generally speaking, that one is pressing down on the brain tissue in an attempt to get that tissue essentially to adhere to the glass window, right? But over the scale of the entire dorsal surface of the mouse cranium, the mechanical duress that this would place the brain under is probably not prudent. And so this is why we developed a curved glass window. But that means that the axial depth actually drops about 200 microns from the center of the brain to the periphery.

And it's that which actually prevents us from keeping the entire surface in focus at the same time, without badly sacrificing numerical aperture. So that may have been a little technical, but basically it's a long-winded way of saying we can't keep the whole thing in focus at once, at least for the moment, without sacrificing a lot of photons.
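
The trade-off Mark describes falls out of a standard scalar estimate of the focal volume: axial depth of field is roughly n times lambda divided by NA squared, and photon collection scales roughly as NA squared. Asking one focal surface to span a 200 micron height difference therefore forces the NA, and the photon budget, way down. A rough numbers-check, where the wavelength and comparison NA are assumptions for illustration:

```python
import numpy as np

wavelength_um = 0.52     # green emission, roughly GCaMP-like (assumed)
n_medium = 1.0           # imaging through air, for simplicity

def na_for_dof(dof_um):
    """NA whose depth of field ~ n * lambda / NA**2 equals dof_um."""
    return np.sqrt(n_medium * wavelength_um / dof_um)

na_flat = 0.5                      # a plausible mesoscope NA (assumed)
na_curved = na_for_dof(200.0)      # NA needed to hold a 200 um drop in focus
print(f"NA to keep 200 um in focus: {na_curved:.3f}")
print(f"relative photon collection (~NA^2): {(na_curved / na_flat) ** 2:.1%}")
# -> NA ~ 0.05, i.e. about 1% of the photons collected at NA 0.5
```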

Matt Angle:

That's really helpful. I mean, probably from my perspective, and I think for a lot of the people listening, it's hard to keep track of what's going on on the optics side of experimental neuroscience. Because recently there was a paper that came out on bioRxiv from the Vaziri group. It was looking at light beads microscopy and imaging of, it claims, 200,000 neurons at five Hertz. And you're like, okay, light beads microscopy. That's the way to go. But then we have these other great examples of widefield. Still, a lot of people are thinking, well, if you're in the brain, you've got to be doing two photon. It has to be multiphoton. And it's hard to keep track: with so many different modalities, where is the state of the field right now? And if you're interested just in population imaging, what should you build? That's what I'm trying to get a sense of from this group, where the field is now. I know it's a lot to unpack, but can we make some generalizations about, in this situation these tools are good, in those situations those tools are good?

Mark Schnitzer:

Before even trying to address the question, I think you raise a really good point, which is that there are a lot of options. There are a lot of claims out there, many of which have not been independently verified, right? So I also think the light beads technology is very interesting for getting to the million neuron level. There is a loss of spatial resolution, and it has not been independently verified at this point by other groups, to my knowledge, whether that actually preserves cellular resolution, and perhaps in more detail, which facets of the activity traces of those individual cells might or might not be compromised.

So one of the things that I think is coming to the fore is that it's not just about cellular resolution. We also have to talk about the fidelity of the activity traces, and to what extent a scanning method, for example, may limit photon counts in ways that compromise the resolution of various types of spikes. That's one facet of it. And then there have also been some recent papers looking at the fidelity of the transduction of spike trains into calcium signals, if you're using calcium, for example.

Matt Angle:

Yeah. I was going to ask about that paper too, specifically the one coming out of the Allen Brain Institute looking at GCaMP6, but I was going to wait and kind of throw that at you in a few minutes.

Mark Schnitzer:

But the larger conundrum, I think, that you're raising is very much true. There's a whole bunch of new techniques, and it's not easy, necessarily, for people who've not been trained in optics to decide what's the right match between the question or set of questions that I'm interested in and the optical techniques that I might consider, maybe none of which are commercially available at the moment, right? You know, in electrophysiology, there's obviously a range of tools: intracellular recording approaches, whole-cell, extracellular. For some reason, people seem a little bit more comfortable deciding which electrophysiological techniques they might use. Not always, there obviously are some subtleties there. But particularly in the optics domain, there's a limited number of groups that build their own setups, and many more groups that may be in the position that you described, trying to figure out, well, which direction do I go?

Jacob Robinson:

So Matt, I'm going to push back on you a little bit, because this is a pub thing, right? So I'm going to pretend like we're actually at a pub and you're not recording this. I, to some extent, object to this line of questioning, in the sense that I think counting the number of neurons is... I'm going to speak out of my depth. I just think it's the wrong way to think about imaging, and maybe historically it comes from the world of electrophysiology. Because that's what you could do, right? You could measure spikes, and I can make a probe that can measure from 10 neurons. I can make a bigger probe that can measure from 12 neurons. But the thing is, we're actually not talking about what we're measuring. Those extracellular electrodes measure action potentials, and so that's what we count.

We're like, oh, I can measure action potentials from a hundred different neurons. But what we're talking about here is imaging. We're not even measuring action potentials, for the most part. We're measuring calcium, which we believe to be correlated to the average number of action potentials over a period of time. And at what frame rate, right? So these aren't even one-to-one comparisons. And I think we can speculate as to why that is, but I think it's very hard to compare the raw numbers of neurons that you're getting from an electrical measurement to the raw numbers of neurons that you're getting from an imaging experiment and try to make comparisons between the two.
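
Jacob's point, that calcium imaging reports a slow, frame-rate-limited proxy for spiking, can be sketched in a few lines: convolve a spike train with an indicator-like kernel and sample at the camera frame rate. The rise and decay constants are ballpark GCaMP-ish guesses, not values from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 10.0                          # 1 ms grid, 10 s of activity
t = np.arange(0.0, T, dt)
spikes = rng.random(t.size) < 5.0 * dt       # ~5 Hz Poisson spike train

# Indicator kernel: fast rise, slow decay (ballpark GCaMP-like constants)
tau_rise, tau_decay = 0.05, 0.6              # seconds
k_t = np.arange(0.0, 3.0, dt)
kernel = np.exp(-k_t / tau_decay) - np.exp(-k_t / tau_rise)
calcium = np.convolve(spikes.astype(float), kernel)[: t.size]

frame_rate = 5.0                             # Hz, e.g. a slow volumetric scan
step = int(1.0 / (frame_rate * dt))
frames = calcium[::step]                     # what the camera actually samples

print(f"{int(spikes.sum())} spikes -> {frames.size} imaging samples")
# Spikes landing within one frame merge; only the smoothed rate survives.
```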

Matt Angle:

I think even someone who's relatively nuanced and sophisticated about this is still saying, okay, I want to understand population dynamics in motor cortex. I'm willing to make trade-offs. Like, I would potentially be willing to just look at single neurons with hundred-millisecond bins, sparsely sample, get a couple tens of them. I'd be happy if I had an optical approach that averaged over signals from potentially hundreds of nearby neurons, if I understood exactly what I was getting; I could build a model around that. But I think that many people don't even know what they could get from optics. They just fundamentally don't even know what's available.

Elizabeth Hillman:

I'll jump in here. Okay. So firstly, there are lots of courses. I'm part of AQLM. I'm part of the Cold Spring Harbor Imaging Structure and Function in the Nervous System course. There's lots of places where people can go and learn some of these fundamental concepts. I think the first neuroscientists came from electrical engineering, because they had to figure out how to do electrophysiology. So I don't think it's okay to just sort of say, well, it's not fair because biologists can't understand the optics. I mean, there are some fundamental principles here that are perfectly understandable. I think on the flip side as well, there's a lot of math and a lot of modeling and a lot of things that you really truly have to understand, because you can get yourself into a right mess even if you get all of this imaging data.

In terms of the imaging technologies, I haven't been blowing my horn here, but like we have a technology that is really good at doing exactly what you said. My approach has always been that actually we have to have a technology that is simple enough for people to be able to actually use. And I think one thing that tends to happen is you get technologies that get more and more and more and more and more complex to become more and more and more trying to do more and more and more things. But then in practice to actually use them to implement them, they're very, very expensive. They're very, very difficult to actually get to work. And I think all of that is driven by this mindset that you're coming to this with, which is saying tell me what to build.

And that's the other thing. And I'd like to bring Mark in on this, because you said, tell me what to build. And Mark and I have had this conversation before, and this is really hard for us as technology developers. Because yes, Bell Labs built the first two-photon microscope, two photon was licensed so nobody could actually use a viable system, so everybody started to build their own two photons. And I would tell you that building a point-scanning two photon is easy. I've had undergrads build it. We build them in a week on these courses. It's not that hard. But this expectation that people have in neuroscience that they can just build it, I think, is also one of the really big problems. So I want to raise all of those points and see what my friends have to say.

Matt Angle:

And even electrophysiology is moving in the direction where you can't just build it. You can't just build your own Neuropixels probe anymore.

Mark Schnitzer:

Right. If I could just jump in and maybe support both what Jacob and Elizabeth are saying. So to Jacob's point, and I think we've sort of danced around this a little bit, but let's just bring it to the floor: we're in the age now where we have a great deal more knowledge about cell types. This is undoubtedly one of the major achievements of the Brain Initiative, the elucidation, at least from an expression perspective, of the different cell types. And so if we want to be able to understand what the new plethora of cell types are doing, we have to have recording technologies that are up to that job. And one of the beautiful things about imaging is that we can genetically target, sometimes in multiple colors or with other tags, the cells that we are interested in, to actually dissect what different cells are doing. And it can also be done by projection pattern, right?

So I think one of the stunning things, if you just visit the Allen Brain Atlas, is all the different projection patterns that you can just browse through. And one of the real drawbacks of electrophysiology is that if I just insert, let's say, Neuropixels or other electrodes into the tissue, I don't really know the projection patterns of the cells that I'm recording from. And it's much harder to build models if I don't know the types of the cells and I don't know where they're connecting to. So that's a major strength of imaging. Now, to sort of reiterate what Elizabeth was saying, there is something of a conundrum nowadays. There is the realization that we do need to support neuroscience students and users with courses that will describe the principles of optics.

And you can build a conventional, say, laser scanning microscope with some training, but that is distinct from being able to engineer and innovate a new generation of systems that actually pushes the cutting edge of the technology forward, and to build that in your own lab. And so standard technologies, and helping people understand the basic principles, we can address with courses, but it's not so easy to go from that stage to pushing the entire state of the field forward. So, for example, you can teach people the basic principles of wireless communication. That in and of itself does not equip somebody to design the next cell phone, right? And so there has to be a distinction between the state of the art and getting people to a certain level of literacy and competency.

Elizabeth Hillman:

Well, and so what we do in my lab is that we have all of these collaborations; we've been helping a lot of people around the world to build SCAPE systems. And then for the systems that we're not ready to help people build, people come to our lab and we work with them, and we spend vast amounts of time really working with people, helping them to figure out how to use the system. We have lots of conversations. What are you trying to measure? Well, this is what's easy to measure. Trying to find those middle grounds. And I'm an engineer embedded in a neuroscience institute, and that's, I think, what you need. We need neuroscientists to have the education to be able to interpret these papers. To understand what it's going to take to build one of those systems.

And to know that's beyond maybe what they should attempt to do in their new lab as an assistant professor. But maybe they should approach someone like Mark or me and say, I'd like to collaborate with you. I'd like to use your new microscope, because we need those collaborations. We need those connections. I very much appreciate hearing from people what their heart desires and working back and forth with them about, well, what sorts of things could we do? What are things that are easy for us to do, like add more colors, versus what are things that are harder to do, like an internship on campus. So we can find those good matches between technologies and questions, and everybody can sort of work together. I mean, maybe we could talk about the culture in neuroscience that doesn't have a tendency to fully embrace the engineer and physics types as their–

Jacob Robinson:

Intellectual equals?

Elizabeth Hillman:

Yeah. It's weird to say, well, just tell me what I should build. Right? If you think about how much work we put into building the things we build.

Matt Angle:

Even at Paradromics, we have a lot of engineers and scientists working together, and I think it's difficult if the basis for the collaboration is going to be, well, you're going to have to learn my ship for like a three-month course and then we can talk. You have to be able to communicate at a little bit of a higher level in order to have meaningful interactions. So it's probably the case that a lot of the best neuroscientists who are studying population dynamics won't buckle down and learn non-linear optics. But it might be that you could have a dialogue about what kinds of signals would be necessary to feed into a model. I'm thinking particularly, and I've referenced this paper too many times on the podcast, but I'm a huge fan of the work that Surya Ganguli is doing at Stanford. And Mark, you're probably familiar with it.

Mark Schnitzer:

Yeah. We collaborate with him a lot, and actually we collaborate with a lot of biologists. So I fully endorse the mode of collaboration that she was describing.

Matt Angle:

But I was thinking, part of what he's very interested in is dimensional reduction for populations of neurons and looking at their dynamics. And one of the things that comes out of his work is a suggestion that you don't need single neuron resolution. And that if you collapse a lot of neurons onto a single channel, it's kind of like doing random projection, like a particular type of, Mark, stop me if I'm saying it wrong, but a particular type of dimensional reduction. And so there's definitely some applicability there too, when you start relaxing some of the requirements for your microscope and you start effectively combining signals from multiple neurons onto a channel, so to speak. I'm probably using loose language, but you could probably have conversations that were really meaningful without necessarily the computational neuroscientist needing to understand optics, or the microscope designer needing to understand the math.
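
A toy version of the random-projection idea, under the assumption (made up for illustration) that population activity traces out low-dimensional dynamics: a few channels, each summing thousands of neurons with random weights, roughly preserve the geometry of the population trajectory. This is the Johnson-Lindenstrauss flavor of the argument Matt is gesturing at; all sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_time, latent_dim, n_channels = 5000, 500, 3, 40

# Low-dimensional latent dynamics embedded in a large population (assumed)
latents = np.cumsum(rng.standard_normal((n_time, latent_dim)), axis=0)
mixing = rng.standard_normal((latent_dim, n_neurons))
activity = latents @ mixing                       # (time, neurons)

# Each channel sums many neurons with random weights: a random projection
P = rng.standard_normal((n_neurons, n_channels)) / np.sqrt(n_channels)
channels = activity @ P                           # (time, channels)

# Pairwise distances between population states are roughly preserved
i = rng.integers(0, n_time, 200)
j = rng.integers(0, n_time, 200)
d_full = np.linalg.norm(activity[i] - activity[j], axis=1)
d_proj = np.linalg.norm(channels[i] - channels[j], axis=1)
ratio = d_proj / np.maximum(d_full, 1e-9)
print(f"projected/full distance ratio: {ratio.mean():.2f} +/- {ratio.std():.2f}")
# Ratios cluster around a constant: the low-dimensional geometry survives
# even though no channel resolves any single neuron.
```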

Mark Schnitzer:

Yes. I mean, we collaborate with Surya a lot, and I'm struggling to recall the last time he asked me an optics question, but we have a lot of common ground with regard to the neurodynamics. I think, so jump in if I'm misinterpreting what you were saying, but I feel like you were raising also the larger issue of how the field is structured, how physicists and engineers interact with neuroscientists. And in that domain, there are many things that I would suggest we as a field reconsider, and that we could probably do better as technologists, as the whole paradigm for how we interact amongst each other as a field evolves. One of the things that I wish were discussed more frequently is how we figure out which technologies are best disseminated in which ways.

So there's a tendency, I think, in the field to assume that every technology should be sort of freely disseminated across labs. Maybe one of the best examples of how we do this today is plasmids, right? If I come up with a new calcium indicator, I can distribute the plasmid, and there's sort of an easy route for many, many labs to start using this. And that's been very successful, for optogenetic actuators, for activity indicators, and so on and so forth. When we get into technologies that are more sophisticated, just making everything open access doesn't necessarily work. And you can see this even a little bit with some of the difficulties that users have had with Neuropixels, actually, to handle them properly with live animals and so on and so forth.

When you get into other technologies, like some of the complicated optical instruments, it doesn't necessarily work at all. And I think there should be, I would hope, a more nuanced discussion about what's the best way to make technologies available. And one of the ways that I think the neuroscience community needs to consider more is the centralized resource. So in structural biology, everybody's perfectly comfortable with having a synchrotron and the users come to the synchrotron. And the reason for that is that it's recognized that not everyone is going to have a synchrotron in their lab or their department or the university, and they're very complicated to run properly and to maintain. And that's an accepted fact that everyone's comfortable with. And we're getting to the point where, for example, we could easily have optical facilities that might involve several million dollars, $10 million worth of equipment, not the right thing for individual labs and departments to have, but as a nation of neuroscientists, we could easily build a very sophisticated setup and people could come visit and do experiments that would be otherwise out of reach.

And so there's some new paradigms for disseminating technology that I think we need to consider. And part of this is the issue regarding, let's say recognition and career progression, for people from an engineering and physics background. How do we get the best and the brightest into the field and recognize and reward their work without necessarily saying you have to get, say tenure, in a traditional neurobiology department.

Matt Angle:

Jacob, I think that would be a really good time for you to introduce the audience to the idea of computational imaging. On the topic of people who do imaging who know math. That's a little bit different paradigm. Could you tell people about that?

Jacob Robinson:

I think at the beginning, you kind of asked, what's the magic, the black magic, related to imaging. And I tried to argue that part of the black magic is what we do after we get the data, as opposed to electrophysiology. And I think that leads into the idea that we might be able to acquire information from an optical wavefront in ways that are a little bit different than what you would get from a traditional microscope, where you project an image onto your sensor. So if you could collect amplitude and phase, basically, then there's more that you can do to extract information. And that's the bottom line: I think there are clever things that we can do to extract information from the signal that are different than necessarily trying to form an image directly.

And what that allows you to do is build imaging systems that don't necessarily look like microscopes, that capture complex sensor data and then, from that, reconstruct images and three-dimensional volumes from a single shot. So I would say that's kind of the big idea: there are clever things you can do, and Mark's done some of this stuff too, where you can capture volumetric information, et cetera. I guess that's my long way around. I'm not really sure of the question you're asking.

Matt Angle:

Well, I mean, I'm by no means an expert, but one of the things I found very interesting about computational imaging is that you could use a relatively simple or shitty microscope. I think of Laura Waller's work at Berkeley: she's imaging through a piece of Scotch tape as a diffuser. It's relatively low-tech, a low investment on the hardware end. And that kind of gets to maybe a little bit of what Elizabeth was saying, that it's a different paradigm. Instead of just trying to build the best lens with the best image sensor and the best signal to noise and the best field of view, there may be ways to get the information you want through simpler devices.

Elizabeth Hillman:

Well, I would add though that there's a lot of caveats to the computational imaging side of it, right? Which is that you need to actually have an inverse problem that's going to solve for what you need to measure. So you need to, again, very carefully match that type of imaging to the question that you're trying to ask in the specific sample you're trying to do it in and you need a reasonably good computational side to it for it to work, in my experience.

Matt Angle:

Many people listening to the podcast won't know about inverse problems or condition numbers. Is there anyone who could try to give a brief explanation of what an inverse problem is and how you tell if you have an ill-conditioned problem or one that you can solve with your system?

Elizabeth Hillman:

In broad strokes, I'd be happy to. I don't know what pains Jacob has with the computational imaging. I did my PhD where we were doing a lot of very ill-posed inverse problems, trying to predict structures from measurements. I mean, the general idea is you say, well, I know that if I can gather enough information about this system, that the information is then there, and I can computationally pull it back if I have knowledge about where that measurement came from. So like back projection in CT is a good example of it where you say, "Well, I have a three dimensional object and I'm going to make projections through it from one direction and another direction and another direction. And from those different projections, I can intuit and infer what's present inside of my sample, even though my measurements don't look like a cross-section of that image."

But then mathematically, you have to look at the information content of each of the pieces of information that you collect, and you have to know the geometry in which they were actually collected. And if you don't know either of those things, then you can't get your image back.

Now, a lot of stuff has come along; machine learning, AI, and things like that have made it a lot easier. It's not so fundamental now; it's not just a linear mathematical equation that's going to give you back your image. You can do more things now to fill in some of the information that you may not be getting. But yeah, it's just... Was that even clear? I don't know.
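
To make the ill-posedness concrete: write the measurement as y = A x, where x is the scene and A encodes the physics and geometry Elizabeth mentions. The condition number of A tells you how much small noise in y can blow up in a naive inversion. A minimal sketch with an assumed Gaussian-blur forward model; the blur width, noise level, and regularization weight are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x_true = np.zeros(n)
x_true[[20, 50, 55]] = 1.0                        # a sparse 1-D "scene"

# Forward model A: a Gaussian blur, so each measurement mixes neighbors
idx = np.arange(n)
A = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2.0 * 2.0 ** 2))
A /= A.sum(axis=1, keepdims=True)

y = A @ x_true + 1e-3 * rng.standard_normal(n)    # measurement + tiny noise

print(f"condition number of A: {np.linalg.cond(A):.1e}")
x_naive = np.linalg.solve(A, y)                   # naive inversion amplifies noise
lam = 1e-2                                        # Tikhonov regularization weight
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(f"naive inverse error:       {np.linalg.norm(x_naive - x_true):.1f}")
print(f"regularized inverse error: {np.linalg.norm(x_reg - x_true):.2f}")
# The regularizer trades a little bias for a huge reduction in noise blow-up.
```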

Matt Angle:

I think we could get even more fundamental for people who are trying to understand the idea of reconstruction. I think it's difficult for a lot of people who aren't in this to understand the difference between being model-limited or being signal-to-noise limited. Again, coming at this from the side of a neuroscientist who really wants to wrap their head around tools and optics, a basic intuition, I think.

Jacob Robinson:

The way that I tend to think about this is more on a practical level. So if we want to imagine what a computational microscope would look like, it's simply a sensor. Well, there are many implementations; let's pick a very simple one, right? You have a sensor; it measures junk. You have a phase mask or an amplitude mask that causes some transformation on that incident wavefront. So you capture something, and it doesn't look like an image. And then you have the inverse problem that Elizabeth was describing. The way that we–

Matt Angle:

And that's to reconstruct the source based on what you've recorded.

Jacob Robinson:

Exactly. So I measure a bunch of junk, and I know it came from something. And presumably there's information in what I measured, and that information should be enough for me to estimate what that source looks like. But this is a really hard problem. It's often ill-conditioned, as you mentioned, meaning that you can't be sure that your solution actually looks like the thing that produced that image. There could be many things that produce the same pattern that you captured on your sensor. And so what we do computationally to estimate what that scene looks like is we take a guess. We have a forward model, because we know how light would propagate through our mask onto our sensor. We compare that to what we measured. If it doesn't look good, we tweak our estimate. And eventually we converge, so that the pattern that we estimate matches the thing that we actually captured. And we say, "Okay, that is what we think our scene looks like."

Now, this would be an extremely difficult problem unless we know something about the scene, like Elizabeth said. And oftentimes what we assume is that there are statistical properties of the scene that are true for a biological sample, and it's usually things like sparsity. So we add a regularizer. A sparse regularizer will help that converge onto a picture that looks a lot like our ground truth.

And for the most part it's successful. There's always a trade-off though. You always lose something in this and it's usually signal quality. So you get noise amplification during this reconstruction process.

So these computational microscopes usually look shittier than an optical microscope, but you gain something, and it's usually in terms of form factor. I can make it really small and lightweight. Maybe I can cover a large field of view that would be hard to reach with a conventional lens microscope. But there's always these trade-offs. The interesting thing is that these trade-offs are computationally or mathematically defined, so it's not a fundamental trade-off like in ray optics or Gaussian optics; these are mathematical assumptions. So we can improve our algorithms over time, and we don't know what the limits of these kinds of approaches will look like. And that's exciting.
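
A minimal sketch of the guess-compare-tweak loop Jacob describes, under stated assumptions: a random matrix stands in for the forward model of the mask, the scene is sparse, and a sparsity (l1) regularizer drives plain ISTA iterations. The sizes and parameters are invented for illustration; this is not the speakers' actual reconstruction code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward model: we know how light propagates through the mask onto the
# sensor; a random matrix A stands in for that optics. The measurement y
# has fewer entries than the scene x has unknowns, so the problem is
# ill-posed without a prior.
n_scene, n_sensor = 256, 128
A = rng.standard_normal((n_sensor, n_scene)) / np.sqrt(n_sensor)
x_true = np.zeros(n_scene)
x_true[rng.choice(n_scene, 8, replace=False)] = rng.uniform(1.0, 3.0, 8)
y = A @ x_true + 0.01 * rng.standard_normal(n_sensor)  # the "junk" we capture

# ISTA: push the current guess through the forward model, compare to the
# measurement, tweak, then shrink toward sparsity; repeat to convergence.
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n_scene)
for _ in range(500):
    x = x - step * (A.T @ (A @ x - y))                        # compare & tweak
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # sparse shrinkage

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")
```

Dropping the shrinkage step (the regularizer) leaves an under-determined problem with many equally good solutions, which is the ill-conditioning Jacob mentions.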

Mark Schnitzer:

Matt, if I can jump in, I like your question a lot, because I think it touches on an almost philosophical issue: what is an image in science? What is an image? And I think that notion has actually evolved as science has progressed. I think early on, an image was a pictorial representation, analog, not digital. Later on, it became digital in the sciences.

And then there's sort of a third class of image, which is reconstructed as Jacob and Elizabeth were saying, and allows us at the end of the reconstruction to come up with a hypothesis for the events that actually may have taken place at the sample, to generate the data that we did collect. And that data doesn't have to look anything like a representation of the specimen as it would have in the first two classes of images, analog and digital representations, right?

And so this issue of being ill-posed is really a question of, "Well, how many scenarios, in a sense, can I reasonably reconstruct?" I think many of the listeners of the podcast may be familiar with this issue just from regression and multivariate regression. Sometimes the question is not well-posed; there are multiple scenarios. And then we can add regularizers, as Jacob was saying, to try to winnow down the space of possibilities and to come up with a reasonable proposition for what was actually at the specimen.

And neuroscientists, I think, are often intuitively familiar with this issue when they compare EEG versus MEG. So I think it's well appreciated in the neuroscience community that EEG does not allow one to come up with a solution for what the generators may have been in deep brain tissue, but MEG is much better for that, and that has to do with the differences in the inverse problems.

Elizabeth Hillman:

This brings us back a little bit to what we were talking about earlier on, which is this sort of abstraction. You can rely on these kinds of techniques when you have a pretty good idea of the system you're looking for, and of what things you need to hold onto and what things you don't. So you can get to a point where you say, "Well, look, I know I have a thousand neurons here, and I know that they're reasonably sparse. And all I really need is the signal coming from each of them." If you're in that position, as an engineer, you look at that and go, "Then you don't need 3D imaging, right?" Find a way where you can say, "Okay, I'll use the a priori knowledge of these things that are fixed to get away with not having to do all of those extra pieces. And so then I'll reconstruct; I'll pull out the information from this data set that doesn't look like what I was expecting, but it gets me the information that I need."

The risk of doing that is that you only find the information you were looking for; you're not necessarily going to find what you weren't looking for. An analogous technique here is random-access two-photon, another approach people have been using to try to measure lots of different cells all at once. But to do that, you first have to decide which cells you're interested in. The approach we take with SCAPE is to image everything. When we image everything, there's a heck of a lot of stuff to look at and to see and to analyze and to interpret and to figure out what we're seeing. But because we are imaging all of it, we can see what we would be missing if we went to a subset. That's a good first step: you say, "I'll look at everything," and then afterwards you can say, "Well, okay, now that I know what things I really need to focus on here, oftentimes I can go to some level of abstraction and say, 'Okay, I don't need the fully 3D projected head of this C. elegans worm. I'll sacrifice that because I know what I'm going to get, but now I can image 100 times faster using computational imaging, and I can get those little transients that I'm really interested in.'"

Matt Angle:

Elizabeth, can you explain how SCAPE works, to the audience?

Elizabeth Hillman:

SCAPE stands for Swept Confocally-Aligned Planar Excitation microscopy. It's a cross between light sheet microscopy and confocal microscopy. Light sheet has traditionally been used for imaging things like cleared samples, where you bring a sheet of light in from the side and then focus your camera onto that sheet of light. That allows you to get optically sectioned images, just like a point-scanning confocal or two-photon would, but it gives you this kind of awkward geometry, where you've got to somehow bring your sheet in from one place and your detection in from another. So our design brings the light sheet in from the primary objective lens on an angle, and then images that sheet back through the same lens. It means that you can collect sheet-like images in a standard microscope-type geometry.

The really important thing about the method is what happens when you do point scanning. Here's my little physics lesson for those people who don't know about two-photon: when you point-scan a two-photon microscope, you're visiting one point at a time, one after the other, and you get very, very little time to spend at each individual point, and your scanner has to move really, really fast in order to be able to sample a volume. And so you reach this limit of just not getting enough signal, of not being able to capture the whole volume.

When you illuminate a whole plane all at once, you're actually imaging all the pixels in that plane at the same time. And you can project that onto a camera, which can integrate signal for the entire time that the plane is illuminated. So we're really, really light-efficient. We get way more signal for the photons that we put in than you would get from point scanning, and you don't have to move everything around really fast.
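
To put rough numbers on the dwell-time argument, here is a back-of-envelope sketch. All values are hypothetical, chosen only to show the scaling, not SCAPE's actual specifications.

```python
# Hypothetical acquisition: a 512 x 512 x 100-voxel volume at 10 volumes/s.
voxels_per_plane = 512 * 512
planes_per_volume = 100
volumes_per_second = 10

# Point scanning visits one voxel at a time, so each voxel gets a sliver
# of the total acquisition time.
voxel_rate = voxels_per_plane * planes_per_volume * volumes_per_second
dwell_point = 1.0 / voxel_rate

# Plane illumination exposes every pixel in a plane simultaneously, so each
# pixel integrates for the full plane exposure.
dwell_plane = 1.0 / (planes_per_volume * volumes_per_second)

print(f"point scanning: {dwell_point * 1e9:.1f} ns per voxel")
print(f"plane illumination: {dwell_plane * 1e3:.1f} ms per pixel")
print(f"ratio: {dwell_plane / dwell_point:,.0f}x more integration time")
```

Under these made-up numbers, each pixel in the plane gets roughly 260,000 times more integration time than a point-scanned voxel, which is the light-efficiency argument in a nutshell.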

The other advantage of the system is that we have a way of scanning the sheet from side to side that de-scans that movement onto the camera. So there's only one moving part, a single mirror. It scans the sheet from side to side and allows you to collect a three-dimensional volume really quickly.

So we can image up to 300 volumes per second, but still with integration times and photobleaching rates that are far less than you would get from a standard confocal. And we've applied it to everything we can think of, from small organisms like worms and fish and flies and fruit fly larvae to imaging the mouse brain. And we have a two-photon SCAPE right now that's working to do exactly the stuff you were talking about earlier in the mouse brain.

It's quite a simple technology. It's something we've worked really hard to disseminate by helping a lot of people build it, including Jacob. And we've also licensed it to Leica, and have been working with them for quite some time to hopefully get it launched as a commercial product.

Matt Angle:

A lot of the audience is interested in brain computer interface technology, and in therapeutic applications or transhumanist applications of BCI, whatever their particular camp is. What do you think we'll see in terms of translating some of these powerful optical systems into implantable brain computer interfaces? Are there ones that are very promising? Are there fundamental challenges, or is it more a matter of hard engineering work? What will we see over the next few years?

Mark Schnitzer:

Well, many of the techniques available today use an indicator that's been expressed in a genetic manner. So there's a large community of people working on essentially gene therapy methods for the human brain. And people have been trying to do this for a whole variety of reasons: cancer therapies, therapies for a wide set of diseases, and even optogenetics in neuroscience. And so if we were able to achieve safe and reliable techniques for expressing an exogenous actuator or indicator in the human brain, it would really be a game changer for optical BCI. That would really propel things forward.

And I do think that if we could achieve that, and of course people in the gene therapy field are working hard on the necessary technologies, optical BCI could potentially have some key advantages over electrical BCI, namely that we could potentially target cells, as I was saying earlier, of known types, of known projection patterns. And we could interface with them stably over long periods of time, essentially simply by registration, or re-registration, of the data in an analogous fashion. And some of the hurdles that have impeded electrical BCI, and, Matt, here I'll defer to you, such as gliosis around the probes and so on and so forth, may not be as much of a concern in optical BCI. So if it were possible to express indicators, that would be a big step forward.

Now, there are of course optical approaches that don't rely on indicator expression and that use other kinds of intrinsic signals, but they may or may not have the specificity to make them competitive with electrical BCI. That's an open question.

Matt Angle:

Let's imagine that there's a GCaMP, similar to GCaMP6, that is safely expressed in humans and that regulators are comfortable with. What could we see? What approaches do you think would lend themselves well to optical BCI?

Mark Schnitzer:

Well, I mean, Matt, you and I were part of a DARPA program where there were multiple--and Jacob, maybe you were part of the same program? I don't remember. But one of the advantages, if we had those safe gene therapy approaches available, is that we could in principle use them for both optogenetic actuators and activity indicators, and we could potentially sample or read information, record from cells, and also manipulate the very same cells, at least in principle. It would potentially allow a single-cell specificity that might be very valuable.

And I should say, we've been talking a lot about the virtues of ensembles and population averaging here. I must say that one of the things I'm always very impressed by in the brain is actually the cellular precision. I'll give you an example of this. In the amygdala, we find a subset of cells in the basolateral amygdala, about 10%, that are involved in the affective experience of pain. And by silencing just these cells in the mouse, it seems that you can essentially cure the animal of the symptoms, at least, of chronic pain-related behaviors. These cells are interspersed amongst the cell population in the BLA, and you have to be able to address them with that kind of specificity. You can't shut down the entire BLA; it wouldn't be a viable therapy.

So in many cases, just being able to access a small subset of cells can actually be incredibly powerful. And I think it's not hard to imagine in many brain areas how the single cell precision of reading information and writing information would be incredibly useful.

Matt Angle:

It would be extremely hard to do something like that with electrical stimulation.

Elizabeth Hillman:

Well, I would just say, and this has been the story of my life, between work I've done on cancer as well as now work in neuroscience, that we always get this question. They're like, "When is this going to work in humans?" And I have to say my piece, which is that we have these incredible tools in animals. We do a lot of work, as I mentioned, trying to straddle neural activity, cellular activity, vessels, and hemodynamics, because we have functional MRI, which is a viable human imaging method. But the problem with it is we don't really understand what the heck it's trying to tell us.

And what Mark just said, that he's identified this small population of cells that seems to have this incredible effect: well, we wouldn't have discovered that if it weren't for these optical technologies. I just want to underscore that. I've had many neurosurgeons come to me and beg me to do this, and we have done it; we have a paper on intrasurgical imaging, and they keep coming back and going, "Oh, can I cut where you drew that line?" And I'm like, "No." Optics is just incredible for working in these animal models, and this gift of GFP and optogenetics and indicators is just so versatile. So I really see our role as doing everything we've been saying this whole time: coming up with these mechanisms, coming up with these governing principles, coming up with these understandings of why deep brain stimulation works, what kinds of codes we should be playing into the cortex or the retina to have someone perceive something, and doing that in animal models, using this rich range of tools that we have. I could conceive of ways in which you could then take that knowledge and implement it in humans, but not necessarily needing to use the same technologies.

But without this knowledge... I mean, so much of clinical medicine, as far as I've seen, is just "try it; this seems to work; we should do this." And I see our role as trying to figure out why those things work, why they don't work, and refining the approaches in humans based on what we can learn using these tools in accessible models.

Jacob Robinson:

I agree with Elizabeth almost all of the time, but this is one of the areas where I'm on the other side of the issue. As Mark mentioned, I'm also on a DARPA program looking at brain computer interfaces. And I'll backtrack a little bit: I agree completely with what Elizabeth said in terms of the use of optics for animal models, but I also see a path forward toward human applications. And that's my... I'm an academic and have the best job in the world. I can think like 30 years in the future. I don't have to build a product today.

And I can imagine a scenario where we have these indicators in humans, at which point building an optical BCI, I think, would be a realistic goal. And I think this maybe ties back into some of the conversations earlier: if we knew what we should be measuring to build an optical BCI, maybe we don't need cellular resolution. I think it's highly likely we don't need cellular resolution. So we don't need to build a SCAPE microscope for someone's brain. We need to build a shitty microscope. That's our approach: build a shitty microscope, but make it really small and thin, put it on top of the brain, and you can measure ensemble activity. And that might be enough for a BCI, with a lot of advantages, like the cell-type specificity Mark mentioned.

The other thing that I really like about a potential optical BCI, and Mark, I think you alluded to this, is that you obviate a lot of the challenges of electrophysiology. One of them is explantation; another is gliosis; lots of things that are really, really hard for chronic brain computer interfaces. They will also be hard for optical brain computer interfaces, but it's a different engineering challenge. Maybe it's an easier one. So I get really excited when I think about this, but again, it's really limited by when we get indicators in humans.

Matt Angle:

To what extent–

Mark Schnitzer:

Yeah--

Matt Angle:

... do you even have to do the source reconstruction if your goal is just decoding? Could you just plug in the raw signal?

Jacob Robinson:

Yeah, I think so. I think the information is there. And this is, again, about what you're trying to measure. If you're just trying to extract information, you don't even need to do reconstruction. I think it's helpful to do reconstruction early on, to know what you're looking at, and in a troubleshooting phase. But long term, if you just measure changes in fluorescence dynamics, that might be enough to build decoders.
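
As a minimal sketch of that idea, here is a decoder trained directly on raw, unreconstructed measurements, with no imaging model inverted anywhere. The data are synthetic and the decoder is plain ridge regression; none of this is from anyone's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: T time points of raw sensor pixels X (never turned
# into an image) and a slow 1-D behavioral variable y to decode.
T, P = 2000, 500
intent = np.cumsum(rng.standard_normal(T))      # slow latent "intent" signal
mixing = rng.standard_normal(P)                 # how intent shows up in pixels
X = np.outer(intent, mixing) + 5.0 * rng.standard_normal((T, P))
y = intent

# Ridge regression, w = (X'X + lam*I)^-1 X'y, fit on the first 75%
# of the data and evaluated on the held-out remainder.
split = int(0.75 * T)
lam = 10.0
w = np.linalg.solve(X[:split].T @ X[:split] + lam * np.eye(P),
                    X[:split].T @ y[:split])

r = np.corrcoef(y[split:], X[split:] @ w)[0, 1]
print(f"held-out decoding correlation: {r:.2f}")
```

The decoder never sees a picture; if the information survives the optics, a linear readout (or something fancier) can often find it in the raw pixels.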

Mark Schnitzer:

Yeah, I think one of the potentially attractive things about the BCI application is that in many cases it's well defined. And we know that for fulfilling that application, we wouldn't necessarily have to improve our understanding of the brain. We could basically train a device, for example using machine learning approaches, to accomplish what we wanted without necessarily providing fundamental insights into what the brain was doing. Gaining some insights might be useful, but they're not strictly required for the success of the application. And so you can measure signals and use them as you will, as Jacob was saying.

Jacob Robinson:

And this kind of ties into what I was talking about earlier in terms of computational imaging: we can sacrifice a lot of the high-quality optics. I mean, there are other approaches. Of course, many people will want high-resolution images, but in some cases you might be willing to sacrifice that for these small form factors and larger fields of view; you can imagine having lots of these things to cover larger areas.

Matt Angle:

Aside from the regulatory approval and the knowledge that your gene delivery and gene expression are safe, what would be the technical things that you think would push this forward? Would it be fluorescent proteins with better dF/F? Would it be better sensors? Would it be better packaging? What do you think could happen that would make optical BCI a slam dunk in a short period of time?

Mark Schnitzer:

Well, it depends on what you define as a short period of time, but we've been talking about some of the challenges for BCI, and safety, obviously, is a paramount concern, and there are many dimensions in which one has to think about safety. I think most of us are probably familiar with why gene therapy has not yet been considered routinely safe, right? There have been patients who have died in experimental trials, there are concerns about potentially causing cancer if you're disrupting the genome, et cetera, et cetera, right?

In the specific domain of thinking about optical BCI, there's not only the gene therapy aspect, but there's also the question of, well, what actually happens to my neurons if I am expressing GCaMP over years? How safe is that? And we know that there are some concerns in our experimental animals over these kinds of very long time periods.

And we don't really have proof, in a systematic way, that GCaMP would be safe to express, for example, over a decade. Now, there's an argument to be had about the pros and cons of such a strategy, right? Meaning, look, if somebody's not using their visual cortex, there is an argument for giving it a try, because those neurons aren't necessarily being put to good use at the moment anyway, right?

So we can have those kinds of arguments, but, obviously, safety is going to be one very important concern of the FDA. Now, the virtue of using an intrinsic optical signal is potentially that we can avoid the gene therapy discussion and those safety discussions. Nevertheless, there are still going to be enormous safety concerns to clear.

Matt Angle:

Is there really enough intrinsic contrast associated with neural activity?

Elizabeth Hillman:

That's the point of the hemodynamics, the neurovascular coupling that I've mentioned. That's the signal that's been fueling the fMRI world for 30 years now. It's there. It's quite useful. It's quite helpful. In a normal brain, certainly, you can definitely do a lot with that signal.

Matt Angle:

For a real-time decoding application, though, the time constant is probably on the order of a second or seconds.

Elizabeth Hillman:

The hemodynamic response to a neural impulse starts within a few hundred milliseconds, but doesn't peak for two or three seconds, and then comes down to a baseline. I don't know. This is, again, where we need this fundamental research to actually understand what information we need to do what we hope to achieve.

Mark Schnitzer:

There have actually been a few DARPA programs focused on BCI. I think, Matt, the one you and I were in focused on cortical prosthetics, but there have been other programs focused on other applications, and I believe that one of them was focused more on issues of mood. Sometimes a second is not so bad, so maybe I should hand it back to Elizabeth and…

Elizabeth Hillman:

Well, rhythms, circadian rhythms, depression, modulating OCD, things like that. The fundamental dynamics of it get back to compressed sensing. If the hemodynamic response is convolved with a neural response, there is some high-frequency information in there. The question is just how you harness it. How do you actually build a model that's going to give you some sort of predictive value? How do you have to pre-condition those signals?

Maybe measurements from multiple adjacent regions give you something that's useful and helpful. But again, it all depends on what you're actually trying to do with it. Every student I've ever had has wanted to build a robot arm controlled by somebody's brain. I want us to think in a much more sophisticated way about what we want to do. What Mark said about pain is so important right now; controlling pain, getting everybody off opioids, is so important. Those kinds of interfaces don't necessarily need to allow you to pick up an egg and not crush it, right? And I think we need to be sensible about that.

Matt Angle:

You mentioned a fast signal that's convolved with the slow hemodynamic response. Can anyone talk about that? There are some fast optical signatures associated with neural activity that don't have to do with the hemodynamic response, but my understanding is that they're not well understood mechanistically. I'm also not an expert in this area, so I don't know if anyone knows more about that.

Mark Schnitzer:

That's true. People have been aware for a long time that these signals exist. For decades, they've been ascribed to subtle changes in the index of refraction, for example due to osmolarity and the exchange of ions that may accompany activity in both neurons and glia. They tend to be rather small in magnitude, and so questions of signal-to-noise are paramount.

Elizabeth Hillman:

And they can be ambiguous to measure. It's not easy to measure scattering changes. It's been around; people did it in the giant squid axon, right? I think this was some of Larry Cohen's fundamental work, among others. You have a big fat nerve and you stimulate it, and it changes; it looks a little more pearly white or so.

I was involved in some of this very early on, trying to figure out how to measure that signal, and what to make of it is still a bit of a dark art. Measuring it noninvasively in humans has not been very successful. But what I was actually saying is, to get back to the math, the hemodynamic response is a gamma function, which means it actually has a sharp rising edge. So, although when you convolve with it things look more like a Gaussian, and it smooths things out, there is a little bit of fast information in there. That's what I was referring to.
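
To illustrate the gamma-function point, here is a small sketch with an invented gamma-variate kernel (the shape and scale values are made up, not a fitted HRF): the kernel peaks seconds after an impulse, yet its rising edge starts within a few hundred milliseconds, so some fast timing information survives the convolution.

```python
import numpy as np

dt = 0.05                                   # 50 ms sampling
t = np.arange(0, 15, dt)

# Illustrative gamma-variate hemodynamic response function (HRF).
k, theta = 4.0, 0.9                         # shape, scale (seconds)
hrf = t ** (k - 1) * np.exp(-t / theta)
hrf /= hrf.sum()

# Sparse neural impulses convolved with the HRF give the smoothed
# hemodynamic signal a sensor would actually report.
T = np.arange(0, 60, dt)
neural = np.zeros_like(T)
rng = np.random.default_rng(1)
neural[rng.choice(T.size, 12, replace=False)] = 1.0
hemo = np.convolve(neural, hrf)[: T.size]

onset = t[hrf > 0.01 * hrf.max()][0]        # when the rising edge kicks in
peak = t[np.argmax(hrf)]
print(f"HRF reaches 1% of peak at {onset:.2f} s, peaks at {peak:.2f} s")
print(f"smoothed signal spans {hemo.min():.3f} to {hemo.max():.3f}")
```

With these parameters the kernel peaks at (k-1)*theta = 2.7 s, but the onset lag is a small fraction of that, which is the "fast information" Elizabeth is pointing to.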

And again, it comes down to information content, whether there are measurements that you can make that would be sufficient. But it's always horses for courses, particularly with optics. I don't know if that's an expression people know; that's my Britishness coming out, but this was in my thesis. Optics is not like MRI. It's not just one modality. There's a lot of diversity to MRI, but you can put someone in an MRI scanner and pretty much look at any part of their body and figure out what's wrong with it. Optics comes in so many different forms, and it's really, really important to make sure that you match what you're trying to do to how you try to do it, in every case.

Mark Schnitzer:

Well, to riff on that a little bit, there also are frequency-dependent effects that may not have been fully exploited yet for BCI purposes, right? If you go back to some of those classic papers, and recent papers, on intrinsic optical effects, there are a number of physiological processes that people are looking at: blood volume, changes in the oxygenated-to-deoxygenated hemoglobin ratio, flavoprotein events and NADH. And maybe I've missed it, but I don't recall having seen anyone really attempting a very concerted, multi-wavelength approach to BCI that might capture, across the frequency spectrum, all the different types of information that might be in there.

Elizabeth Hillman:

You mean wavelength?

Mark Schnitzer:

For example…

Elizabeth Hillman:

We've done studies using flavoprotein signals as well as hemodynamics, and because we can simultaneously measure neural activity and blood flow in these wide-field experiments, we always flip back and forth between what we're learning from the neural signals and what we can then extract from the hemodynamic signals that we've simultaneously collected.

When I started out 20 years ago, every experiment we ever did in neuroscience was an average of 50 trials, right? We stuck electrodes into the paw of the animal and we went zap, zap, zap, zap, zap, and then we averaged them. Even now, for motor experiments, it's train your animal to do a task so it can do the same task 50 times so you can average it. This is why we are so obsessed right now with the moment-to-moment question: how do you interpret the signal that you're seeing right now, maybe in the context of what you saw for the last five seconds?

There's so much more in there that I don't think we really have a good handle on, and that I think will play into this: you'll develop models that will be predictive, that will be able to interpret the measurements you're making now in terms of what's happening next, or what the person is wanting to do. And that's not going to come naturally from the results we've had in the past, which tell you what the brain would do if it were a robot that did the same thing every single time you did something to it. Right?

But what we now need to understand is why is it that sometimes when you flick an animal's whisker, it runs? And sometimes you flick its whisker, it doesn't. Those are the kinds of questions we're trying to get at by trying to piece all of these pieces of information together.

Mark Schnitzer:

I should add that these kinds of approaches wouldn't necessarily have to use just multiple excitation wavelengths, right? I think there are some spectroscopic analyses that could be potentially very useful. And also, pump-probe approaches, I think, have not been fully exploited. Maybe people are exploring these for BCI, but I'm not fully aware of it.

Matt Angle:

Can you explain to the audience what that is? A pump-probe approach.

Mark Schnitzer:

Pump-probe? Yeah. So, this is quite common in, say, spectroscopy in chemistry departments, right? Physical chemistry. One can induce a particular optical excitation with one pulse of light, called the pump, and then probe the extent to which that excitation has been successful, or has evolved into a different state, using a probe beam that follows the pump with a defined time interval. You can then vary that interval, and in some cases you can get more chemical specificity if you're imaging using a pump-probe method, because you can try to isolate one particular process.
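
As a toy sketch of that scheme, assume a single excited state that decays exponentially with some lifetime (the lifetime, delays, and noise level below are all invented). Sweeping the pump-probe delay maps out the decay, and a simple fit recovers the lifetime, which is the kind of process-specific information Mark is pointing at.

```python
import numpy as np

tau_true = 3.2                          # excited-state lifetime (ns), invented
delays = np.linspace(0.1, 15.0, 40)     # pump-probe delay sweep (ns)

# Probe signal is proportional to the surviving excited-state population
# at each delay, with a little multiplicative measurement noise.
rng = np.random.default_rng(2)
signal = np.exp(-delays / tau_true) * (1 + 0.03 * rng.standard_normal(delays.size))

# A log-linear fit to the delay scan recovers the lifetime.
slope, _ = np.polyfit(delays, np.log(signal), 1)
print(f"estimated lifetime: {-1.0 / slope:.2f} ns (true: {tau_true} ns)")
```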

Elizabeth Hillman:

Those setups tend to occupy large portions of optical tables. Do you think you could build something like that in a more compact space?

Mark Schnitzer:

Well, it's a very good question. So, the relevant time scales tend to be on the nanosecond scale, right? And it should be mentioned that there are analogs of these approaches in the frequency domain. And so when you start thinking about it from that perspective, thinking about phase shifts and all the progress that's been made in semiconductor light sources, I don't think it's out of the question.

Matt Angle:

Interesting.

Elizabeth Hillman:

Right. So, Stimulated Raman Spectroscopy or... I'm not even going to try and…

Mark Schnitzer:

SRS.

Elizabeth Hillman:

SRS is Stimulated--I always get muddled up!

Mark Schnitzer:

Anti-Stokes Raman.

Elizabeth Hillman:

Coherent Anti-Stokes Raman, right. The Raman spectrum is a vibrational spectrum of a molecule, as opposed to a fluorescence spectrum of a molecule. And so, if you can tap into that, you can potentially look at things like concentrations of neurotransmitters. There's some really beautiful work by one of my colleagues, Wei Min, who has a way of altering glucose slightly by adding a little group that makes it slightly heavier, so that you can create a resonance in an empty region of the Raman spectrum and then pick up this glucose, which is basically exactly like glucose, just a tiny bit different. They call them label-free approaches.

And there are definitely other ways to, sort of, make molecules sing and listen to them. There is some development of SRS microscopy for neurosurgical guidance, so they are working on miniaturized versions, in a very similar way to how you might expect two-photon to head towards clinical use. It's not outside the realm of possibility that those kinds of contrast mechanisms could be picked up on.

Matt Angle:

I feel like I've asked a lot of the wrong questions today. What do you think we should have talked about? For young neurotechnologists who are interested in knowing the state of optical readout of neural activity, what would you like to tell them about it in the remaining time that we have? Or what do you think we should discuss?

Elizabeth Hillman:

Just imagining a bunch of young neurotechnologists is a cute idea.

Mark Schnitzer:

I can sort of relay what I tell people in my own lab, which is: there are so many different possible instruments that one could build that it is really helpful to get as good a grasp as one can on all the different facets of the experiment. Right? So try to become acquainted with the various aspects of animal behavior and the challenges there. The labeling technologies and how they work. The data analysis. What's the state of the art in terms of processing the data, maybe after you've extracted the neurons, right? What's going to happen that's actually going to allow us to adjudicate the hypothesis, right?

And so, I think the more one can become familiar with the entire workflow of neuroscience experimentation, and the questions that are being asked, developing your own interests in some of those questions and pursuing them, the more helpful that will be as a guide to determining the kinds of technologies that you'd like to develop.

Elizabeth Hillman:

So I would go further than that. I've seen a lot of faculty searches in neuroscience fall over themselves to hire people who had the computer science background or the physics background or the engineering background. And I have written training grants over the years to do what I said earlier on, which is to not even see a divide between "Oh, you're a technologist and you want to work on neuroscience problems" and "You're a neuroscientist and you want to know about technologies." I think to be a modern neuroscientist in the age of the Brain Initiative, you need to have this diversity of training, and you need to have that crossover so you can understand all of the pieces of it. I think the thing that's holding neuroscience back right now is this divide of "Well, I'm an experimentalist, and you develop instruments, and you do the analysis, and we don't need to understand each other." Right?

And I've always joked that having divorced parents was great experience for me in doing interdisciplinary science because I got very used to having to explain to one group, "It's okay, it's okay. They didn't mean that." Trying to explain it there and then go back and explain it on the other side. So I've always sat in the middle, and I love the fact that I'm always learning, right? I'm always trying to understand what is the biological question that this person's trying to tell me that they want to understand? And then I get into it and then I'm actually wanting to study that myself.

So, I think we should look to a future where you don't have someone who identifies as "I'm a neurotechnologist" or "I'm a neuroscientist," but where we just train the next generation of people who are trying to understand the brain in all the kinds of things you need to know and learn to be able to leverage these technologies, whether that's to improve them or to use them.

The other thing I would say is we need to fix the culture. I don't know, again, if Mark and Jacob fully agree with me, but I still think that there's a little bit of a disconnect in appreciating the complexity and the value of the technology piece compared to the neuroscience piece, and we need to level that playing field so that you really can get these collaborations. It shouldn't be one lab saying, "Just build my microscope," and then scurrying off to do some experiments; we should have equal numbers of people with these different, diverse skills working together in groups, with a real focus on the results that we want to get.

Mark Schnitzer:

Well, I certainly agree with what you were saying about the collaborative aspect of things, but there's one thing you said that made me a little nervous, and maybe I was misinterpreting it: that people should maybe think less about specializing, about calling themselves, say, theorists or experimentalists or technologists. And the reason that makes me a little nervous is that I look at what has worked in traditional physics disciplines, like particle physics, where it was actually very helpful to have different groups of people who were specialized and focused on different things, and they were collaborative. And so I really like that aspect of it. And they were sort of on equal social footing, if you will.

But having domains of expertise, where there really were people who were allowed to specialize and to gain the proficiency that specialization permits, was really critical for the advancement of particle physics. And I do think we need a culture change, in the sense that we need to honor people who are specializing, provide them routes for career progression, and allow them to be on equal terms within a collaboration. But I do think that, as we go forward and technology becomes more advanced, specialization is going to remain important.

Elizabeth Hillman:

I agree. And if I look around at my lab, I always explain this to people. I say, I want you to come out knowing how everything around here works, right? If you tell me, "I can't, I'm a biologist, I'm not changing that hard drive," no way; you've got to be able to do all of these things. But everybody has a natural talent, or they come from a background with some level of specialization, or some people just like doing some things more than others. And I fully embrace that. That's no problem.

It's just the silo. It's just the like, "That's not my business. I don't have to learn that. Or I don't want to learn that." I think people need to understand that to be able to do this stuff, you have to... All three of you know how upsetting it is when somebody's done some analysis on their data and you have to tell them that it's just not right. It's just noise. That they've misunderstood. Right?

We just need to get past that, because otherwise somebody's going to try to do some computational imaging after this webinar by putting a piece of tape on the front of their camera, collecting a bunch of data, and being like, "Yay, Jacob told me this is going to work." That's not going to work, right? So we've got to embrace that training, I think. Jacob, it's your turn.

Jacob Robinson:

Thank you. I kind of agree with both of what you said, Elizabeth and Mark. And I take a slightly different perspective, because I think my field more closely aligns with neurotechnology than neuroscience, but if you were to swap the words, I think it's very similar. What I heard from you, Elizabeth, a little bit, is that it's important, I think, for the technologists to really deeply understand the problems that they're working on, and vice versa, to some extent, right? The people who are designing the next experiments have to have some appreciation for what's possible, or what might be possible, with the technologies that are available. So the interdisciplinary training, obviously, is important. And I guess maybe what has worked reasonably well for us is for the technologists to work side by side with the people who are trying... basically, your user, right?

So however you define your user, whether that's a neurobiologist or a clinician, it's incredibly important that you work with your user. I think the danger we run into is when engineering gets divorced from either the problem or the user base, and then we just start making stuff. Like, I can make something that meets some performance metrics, but if that's not important for your user, I'm not sure that's a good use of our time. And maybe the trade-offs that you made in building your next widget mean that it's not actually very practical. So, if you have the opportunity, young neurotechnologists out there, find places where you can work with the neurobiologists and the clinicians, so that you're tailoring your work toward solving a problem.

Elizabeth Hillman:

And teach them and learn from them; get that exchange of knowledge. I would agree. The things that have been effective in our lab have been when you've had really equal buy-in from both groups, and you've really had people side by side, working together, seeking to understand.

Matt Angle:

Jacob, Mark, Elizabeth, thank you so much for spending the time with us.

Elizabeth Hillman:

All right. Thanks so much.

Jacob Robinson:

Thank you both.

Elizabeth Hillman:

Thanks guys. Take care.

Mark Schnitzer:

Thanks, Matt. Thanks everyone.
