Optical control of neural circuits, the wave, and knowing whom to tickle

Episode 12: Mind Control with Lasers

May 12, 2022

Welcome back to Neurotech Pub! This episode is part two of a two-part series on optical methods for recording and stimulating neural activity. Last time we talked about optical recording methods; in this episode we focus on optical stimulation methods.

Guests on this episode are Ian Oldenburg, PhD, Assistant Professor at Rutgers University; Adam Packer, PhD, Professor at the University of Oxford; and Matt Kaufman, PhD, Assistant Professor at the University of Chicago.

Our guests come from very different backgrounds, but in the interdisciplinary spirit of this series, they do a fantastic job explaining the principles of optical control of neural circuits from diverse perspectives.

Cheers!

Show Notes

00:00 | Intro

1:37 | Aspirational Papers

1:56 | Packer Lab 

2:10 | What is the claustrum?

2:30 | Ian's paper (but only part of it!)

3:02 | Two-Photon Bidirectional Control and Imaging In Vivo

3:29 | Inferring Spikes from Calcium Imaging

5:45 | Neuropixels are now in humans

7:12 | Paper by Pachitariu et al 

7:55 | Ian Oldenburg

10:02 | Kaufman Lab

11:21 | Cortical activity in the null space: permitting preparation without movement

12:08 | Motor cortical dynamics shaped by multiple distinct subspaces during naturalistic behavior

12:33 | Tickling Cells with Light

14:41 | Light-activated ion channels for remote control of neuronal firing

14:50 | Remote Control of Behavior through Genetically Targeted Photostimulation of Neurons

15:20 | Millisecond-timescale, genetically targeted optical control of neural activity

16:03 | Red-shifted Opsins

16:52 | eNpHR: a Natronomonas halorhodopsin enhanced for optogenetic applications

17:26 | Genetically Targeted Optical Control of an Endogenous G Protein-Coupled Receptor

18:16 | Neural Dust

18:41 | Wireless magnetothermal deep brain stimulation

19:05 | Neural Stimulation Through Ultrasound

19:20 | Methods and Modalities: Sculpting Light

21:35 | Recent advances in patterned photostimulation for optogenetics

22:50 | Two-photon microscopy is now over 30 years old (Denk 1990)

25:22 | Optical Recording State of the Art

27:06 | Challenges of Deep Tissue 2-Photon Imaging

28:21 | Deisseroth Lab

28:29 | Temporal Precision of Optical Stimulation

29:09 | Simultaneous all-optical manipulation and recording 

30:40 | Targeted Ablation in Somatosensory Cortex 

33:29 | Commercially Available Fast Opsins

34:41 | Recent paper from Deisseroth Lab

41:17 | Cortical layer–specific critical dynamics triggering perception

42:21 | The Utah Array from Blackrock Neurotech

44:52 | Principles of Corticocortical Communication

50:43 | The Cost of Cortical Computation

51:27 | Behaviour-dependent recruitment of long-range projection neurons in somatosensory cortex (2013) | Spatiotemporal convergence and divergence in the rat S1 "barrel" cortex (1987) | Diverse tuning underlies sparse activity in layer 2/3 vibrissal cortex of awake mice (2019) 

52:56 | Gollisch and Meister 2008

53:22 | Spike Timing-Dependent Plasticity (STDP)

1:05:09 | Neurotech Pub Episode 11 -  Let There Be Light

1:05:20 | Forecasting the Future

1:05:41 | Temporally precise single-cell-resolution optogenetics

1:06:16 | Large Scale Ca++ Recordings from Vaziri Lab

1:07:11 | Cohen Lab

1:07:19 | All Optical Electrophysiology 

1:14:19 | Emiliani et al 2015

1:16:33 | All-Optical Interrogation of Neural Circuits

1:16:53 | Mouse Strains @ Jackson Lab

1:17:00 | The Allen Institute

1:18:39 | Nicolas Pegard

1:18:47 | Adesnik Lab

1:20:39 | Neuroscience and Engineering Collaborations

1:24:41 | Shenoy, Sahani, and Churchland 2013

1:24:52 | Dimensionality reduction for large-scale neural recordings

1:25:17 | Matlab: Understanding Kalman Filters

1:25:58 | Two-photon excitation microscopy

1:26:37 | Emiliani Lab Holography course

1:26:57 | Optics by Eugene Hecht

1:28:05 | Intro to Optics Course

1:29:41 | What the Heck Is a Claustrum?

1:33:53 | Cortical activity in the null space: permitting preparation without movement

1:34:33 | Neural Manifolds and Learning

1:35:19 | Locked-in Syndrome

1:36:58 | Sabatini Lab

1:37:07 | Probing and regulating dysfunctional circuits using DBS

1:39:36 | Sliman Bensmaia | Nicho Hatsopoulos

1:39:43 | The science and engineering behind sensitized brain-controlled bionic hands

1:41:20 | Michael Long's singing rodents

1:42:12 | Engram

1:43:06 | Chang Lab

1:43:19 | Tim Gardner | Michale Fee

Want more? 

Follow Paradromics & Neurotech Pub on Twitter  

Follow Matt A, Ian, Adam, & Matt K on Twitter

Listen and Subscribe

<iframe style="border-radius:12px" src="https://open.spotify.com/embed/episode/6M3E7hXIE2TLmX9MPCspYf?utm_source=generator" width="100%" height="232" frameBorder="0" allowfullscreen="" allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture"></iframe>

Read The Transcript

Matt Angle: 

Welcome back to Neurotech Pub. This episode is part two of a series about optical methods for interfacing with the brain. In our last episode we focused on optical methods for recording neural activity. In this episode we’re going to talk about optical methods for stimulating activity. 

Our guests today are Ian Oldenburg, an Assistant Professor at Rutgers University, Adam Packer, a Professor at University of Oxford, and Matt Kaufman, an Assistant Professor at the University of Chicago. In this episode we nerd out a lot about the science of optical control of neural interfaces. We talk about things ranging from design of opsins, to nonlinear optics, to control theory. But I think, again, like the last episode, one of the most interesting things we talk about is how do you collaborate with people who have different skills? And especially when you’re early on in your career and can’t do everything, how do you pair up with people to accomplish more together than you could have done separately? 

Another thing that amazed me about this podcast is that Ian and I were roommates freshman year at Carnegie Mellon, and neither of us said anything during this podcast that needed to be cut out during editing.

Be sure to visit paradromics.com/neurotechpub for detailed show notes and references from our discussion. And if you’re listening on Spotify or Apple podcasts give us a follow, so you never miss the latest episode. Cheers!

Matt Angle:

Maybe to kick things off, can everyone go around and introduce yourself, and say where you're from, kind of what you're doing, and then just as a little bit of an icebreaker, tell us about one scientific paper that you wish that you'd written.

Adam Packer: 

Just to be clear, is that an aspirational paper, or is it a paper where we actually did the work and just never wrote it up? Because both exist.

Matt Angle:

I think you can answer it either way, because I was actually thinking you read a paper someone else wrote and you say, "oh shit, that is beautiful. I wish I had done that work." But you can answer it any way you want.

Adam Packer:

Can we do two? I've got two.

Matt Angle:

Absolutely. Adam, you want to start?

Adam Packer:

Sure. So my first one is Ian's paper.

Matt Angle:

But you have to start with your introduction.

Adam Packer:

Oh right.

Matt Angle:

We always jump right into the science.

Adam Packer:

You got me excited. Right. My name is Adam Packer, I'm at the University of Oxford, and we do two things in my lab. One of them is we use all-optical interrogation, which I think we're going to talk a lot about today, to understand neural circuits. And the other thing we do is try to understand the neural activity in, and function of, the claustrum, which is this funky brain area that there's not a lot known about. So that's who I am and what I do, and the papers I wish I had written: there are two of them. One is part of Ian's paper, actually, which is using two-photon optogenetic inhibition with a particular channel. And I remember when these channels, the GtACRs I'm talking about, when they came out, I wrote to a bunch of people I was working on all-optical interrogation with and I said, "Guys, this is going to be the one that works."

All we need to do is characterize the wavelength dependence, power dependence, how strong it is, how much of it we need, whether neurons can tolerate it. It's clearly going to be a game changer for two-photon optogenetic inhibition. Then, in 2017, Ian's paper came out, along with another great paper from Tommaso Fellin's lab, characterizing these in exactly this way. And you read it and you're just like, this is a great paper. I wish we had done this, and you know what, actually, I think in many respects they did it better than we would've done it. They did a lot more controls. It was really nice to see. The second paper I wish I had written is much more of a technical kind of characterization. I'm talking about these recent papers from the Allen Institute where they characterize how well you can infer spikes from calcium imaging data.

And I had this on my list to do for a long time, because everybody knows there are different conditions under which it works differentially well. It's kind of like spike sorting in electrophysiology. We all know it's problematic, but the key is to quantify exactly when, and how, and why, right? And so I kind of wish I had done this one, because I felt like they gave us a lot of really good information. They gave us a lot of answers. But one thing that we still don't know is whether you can take a particular neuron in your recording and know how well you're getting spikes from that particular neuron. And I'm sure they did a deep dive in there, and I can't wait to meet them at a conference when this pandemic is over and ask them what didn't end up in the paper. Because I think there's a lot still left to do, but yeah, those are the two papers I wish I had written.

Matt Angle:

And actually, I saw that paper also and was very impressed with it. Could you unpack a little bit more for the audience? Just because some people are coming in without a lot of experience in calcium imaging and–

Adam Packer:

Sure.

Matt Angle:

And what would be the ramifications of having this sort of worse than expected coupling between spikes, and calcium signal? In terms of understanding the brain, or in terms of building decoders for BCI?

Adam Packer:

Sure. So I think one thing we'd like to do in neuroscience is have a record of the electrical activity in populations of neurons. We'd like to know, for tens, hundreds, maybe even thousands of neurons, how many spikes were in each one of those neurons. And there are a lot of different ways to get that sort of information. The gold standard for a long time was electrophysiology, where you can basically stick a wire near a cell and record those spikes. And in electrophysiology you have one major advantage, which is that you have really good timing, so you know exactly when those spikes occurred. There is a disadvantage: you don't know exactly where the neurons are that those spikes came from. So with electrophysiology you have a spike sorting problem, in terms of figuring out which spikes came from which neurons, because you're literally blind to it.

You have a wire in the brain and you're recording a signal. Now of course, you can stick lots and lots of wires in the brain. You can get arrays, you can get Neuropixels, shanks with arrays of recording sites, but you still fundamentally have this problem. So there's obviously been an optical revolution in the last 10 or 20 years, where we can use calcium imaging as a basically indirect readout of the electrical activity. So when a spike happens in a neuron, a lot of calcium ions rush into the cell, and because it's floating around the cytoplasm, it's a big signal. It's a big change in concentration. We now have very good indicators of that calcium. And therefore we can record, with imaging methods, the calcium in a neuron as an indirect readout of that electrical activity.

So now you can imagine, you're literally making a movie of a field of view of many, many neurons, tens, hundreds of neurons, in order to get a record of the spiking activity in that population of neurons. And now you have a different problem. You actually have two problems. One, you don't really have good timing precision, because calcium is a bit slow and the indicators are a bit slow. The second problem is there's not a direct relationship. Well, there is a direct relationship between the calcium and the spikes, but it depends on the amount of indicator in the cell, it depends on the biophysics of the cell, and it depends on how well you're sampling that cell. So if you want to image more and more neurons, you've got to go over a larger and larger field of view, and you get less good information about each cell.

That's what they quantified pretty well in this Allen Institute paper. They quantified, as you zoom out and you get more neurons, what effect that has on how well you can read out the spikes. And to try to wrap up here, because I know that was a bit of an extended answer: we want to decode the activity of neural circuits. It doesn't matter if you're doing electrophysiology or calcium imaging, you want to know precisely how many spikes occurred in all those neurons. So you can imagine, as you want to get more and more of the population, as you want to understand more and more of the neural circuit, you might have to pay a cost of not understanding each neuron as well. And that's what they quantified well: that trade-off between more information, but maybe at a bit less quality.
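The indirect relationship between spikes and calcium described above can be sketched as a toy forward model: a spike train convolved with a slow, exponentially decaying indicator kernel, plus noise. The time constant, amplitude, and noise level here are illustrative assumptions, not measured indicator properties.

```python
import numpy as np

def calcium_from_spikes(spikes, decay_tau=1.0, amplitude=1.0,
                        dt=0.033, noise_sd=0.1, seed=0):
    """Toy forward model: convolve a spike train with an exponentially
    decaying 'indicator' kernel and add noise. All parameters are
    illustrative choices, not measured indicator properties."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, 5.0 * decay_tau, dt)
    kernel = amplitude * np.exp(-t / decay_tau)
    trace = np.convolve(spikes, kernel)[: len(spikes)]
    return trace + rng.normal(0.0, noise_sd, size=len(spikes))

# A two-spike burst plus an isolated spike: the slow kernel smears the
# burst into one large transient, which is one reason recovering exact
# spike counts and times from the trace is a hard inference problem.
spikes = np.zeros(300)
spikes[[50, 52, 200]] = 1.0
trace = calcium_from_spikes(spikes)
```

Running an inference algorithm over `trace` to recover `spikes` is exactly the problem the Allen Institute benchmarks quantify; the harder the imaging conditions (more noise, slower kernel), the worse the recovery.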

Matt Angle:

That was awesome. Thank you.

Ian Oldenburg:

Do we want to have the debate of why do we care about spikes versus...

Matt Angle:

We should have that discussion, but I think first, Ian, you should introduce yourself and tell us about a paper that you wish that you wrote.

Ian Oldenburg:

Sure. So I'm Ian, Ian Oldenburg. I'm a postdoc in the Adesnik lab, and I've been working on developing multi-photon optogenetic tools throughout my postdoc. I helped out and participated in making a new optical strategy to get light to parts of the brain better than what existed before. And as Adam mentioned, I've worked on developing new opsins. And lately I've been developing new strategies to be able to write better, more precise patterns into the brain, so that we can really start to tickle the brain in more realistic ways. I was thinking about what paper I should call out as, sort of, the most interesting, and there are a lot. Certainly, I'm definitely a big fan of Adam's papers, where he's really pioneering the field of multi-photon optogenetics.

He did a lot of the most solid work to actually show that it's possible, show that it's doable, and use it in real ways. And then also, I kind of wanted to call out some of Matt's papers too, because I think one of the directions I'm most interested in is how we can start using the observed activity and writing it back in interesting ways. And I think Matt Kaufman here has done some of the best work at really analyzing what patterns of activity matter, what's meaningful and what might not be, what might be causally evoking changes in activity and what might not. And that's the direction I certainly want the field to move in: to look at codes very precisely, and write them back in interesting ways. Now, I say I'm envious of these papers because I don't think I could have done either of them. I don't have the technical computational background to get anywhere close to what Matt did, and when Adam was writing his multi-photon optogenetics papers, it wasn't even on my radar. So I'm very envious of both of them.

Matt Kaufman:

Wow. Thanks Ian.

Matt Angle:

That was very diplomatic of you, Ian.

Ian Oldenburg:

Well, you know. All true too.

Matt Kaufman:

All right, I guess I'm up. I'm Matt Kaufman, I'm an assistant professor at the University of Chicago and I'm really interested in how you get populations of neurons to work together in order to compute stuff. And by stuff, I mean the signals, the control signals that animals need in order to behave. We work with mice, we do a lot of calcium imaging. Hopefully eventually, we'll expand into these wonderful techniques that folks like Ian have pioneered. So we would be able to write activity back in. But we're interested in particular, in simple decision making, and even more in motor control, because these are the places where the brain really has to generate a complex signal. And we're fundamentally still trying to figure out even some real basics of how that works.

So my approach has been to take these somewhat abstract mathematical approaches, where we're trying to figure out the rules that govern the pattern generator, in terms of dynamical systems. And the reason I'm in mouse now is so that hopefully we can tie this to the biology because with calcium imaging, as Adam pointed out, you can see the neurons. You can tag them so you can see their cell types. You can tag them so you can see where they project. And the paper I think Ian was probably referring to, has to do with understanding how information flows between areas. And this is really critical, because we know that it's not just neurons acting as a network within an area in order to compute stuff, it's also a bunch of areas working together, and information gets passed back and forth.

Some of it's feedforward, some of it's feedback, and somehow you end up with sensory-guided, intelligent pattern generation that produces exactly the control signals the muscles need in order to move the animal around. So, with that in mind, I think the paper I wish I'd written was one that was deeply related to work I did in 2014, but took it a really big step further. And this is work by, inconveniently, yet another Matt, Matt Parrick, where he was looking at how sensory feedback, in particular in the context of errors, comes into the motor system and pushes on it to alter those dynamics, alter what's going to happen, so it's not an autonomous system, so that the animal can make the correct movement to correct that error.

Matt Angle:

Thank you. Today we're going to be talking about optical control of neural activity. And I wanted, if someone could start us off at the really basic level, and just summarize for us, what is an action potential, and what causes it to occur in the neuron? Just the kind of 30 to 45 second, what is an action potential?

Adam Packer:

I already went on a tirade, so I'm going to bow out here. I can avoid this one, right?

Matt Kaufman:

Sure, I'll take it. So, the fundamental unit of communication of a neuron is this discrete event called an action potential. A neuron gets a bunch of inputs. Some of those inputs excite the neuron, some of those inputs inhibit the neuron, and when those inputs end up passing some threshold, and that threshold is going to be a little bit unique to each neuron, how many inputs it needs to integrate, that neuron says, okay, that's my jam. I'm going to fire an action potential. And that action potential is basically the same every time. It just says, "Hey, I got triggered." And that neuron, because it connects to hundreds, or thousands, or tens of thousands of other neurons, informs all of those neurons that the inputs it likes were what it just saw.
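The integrate-to-threshold picture described here is often caricatured as a leaky integrate-and-fire model. The sketch below is that caricature; the threshold, time constant, and input values are arbitrary illustrative choices, not fit to any real neuron.

```python
import numpy as np

def lif_spike_times(input_current, threshold=1.0, tau=20.0, dt=1.0):
    """Leaky integrate-and-fire cartoon: the membrane potential v
    integrates excitatory/inhibitory input, leaks back toward rest, and
    emits an identical all-or-nothing 'spike' (then resets) whenever it
    crosses threshold. Parameters are arbitrary illustrative values."""
    v, spike_times = 0.0, []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leak toward rest + summed input
        if v >= threshold:            # inputs passed the threshold...
            spike_times.append(t)     # ...so fire a stereotyped event
            v = 0.0                   # and reset
    return spike_times

# Net-excitatory input drives repeated, stereotyped spiking; weaker
# input never reaches threshold, so the neuron stays silent.
strong = lif_spike_times(np.full(200, 0.1))
weak = lif_spike_times(np.full(200, 0.01))
```

The key feature matching the description above is that the output events are identical regardless of which inputs caused them; only their timing carries information.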

Matt Angle:

By the way, don't be afraid to be a little bit more technical in terms of membrane potential, and this will be a scientific audience, but I always like to start at a basic level and kind of work up to the maximum level of detail. So the next thing I was going to ask, is simply how do researchers artificially induce this process, the firing of an action potential using light? What's the transduction mechanism to cause a neuron to fire an action potential? What are the modalities that you can use?

Ian Oldenburg:

So, there are several different ways to get a cell to fire an action potential. The oldest ways of causing cells to spike artificially are to use an electrical current, where you stick in a couple of wires, originally, or some fancy-shaped stimulating electrode, to electrically cause a cell to fire. You can do fancier methods, like create an internal connection to a cell to cause it to spike, change the electrical situation inside. But your question's really about how to use light to do it. So the first light-based approaches to spiking, at least the most popular ones, used channelrhodopsin. There were some purely light-based, non-opsin-mediated strategies that were employed a little bit before the opsin-based ones.

Matt Angle:

This is like Dirk Trauner and Ehud Isacoff.

Ian Oldenburg:

And Miesenböck, and a few others, were trying purely light-based approaches. And the light-based ones are, effectively, you focus a spot of light on a part of a cell, and you rupture the membrane just a little bit, so that ions from your extracellular media flow in, and the cell thinks it's a synaptic input, as Matt just described, and that causes it to spike. Now, the better approaches that have really replaced it are these optogenetic ones, starting first with channelrhodopsin, which was a widely popular excitatory opsin. It's a cation opsin, so that means it lets positively charged ions into the cell. And when you shine, in the case of channelrhodopsin, blue light onto those cells, they'll depolarize, much like in the case of a synapse, and that will hit the action potential threshold and cause the cell to spike.

Since then, many more opsins of different varieties have been created. They've been sped up and slowed down, they've been red-shifted and blue-shifted. And the class of opsin that I'm most interested in are these red-shifted opsins that have also been tested to see how well they work in the two-photon regime. We'll probably talk about what the mechanisms behind two-photon action are, but it's an optical strategy that allows you to use a much longer wavelength, a much redder wavelength, to mimic a shorter wavelength. So we can mimic that blue light with infrared light, and get much better spatial response functions, spatial properties, because of it. And this allows you to activate the cells in much the same way. You set up the same molecular actuator, the opsin, but you can excite it with different types of light and different strategies. People have also made, or found, opsins that inhibit.

I should say that many of the different opsins that exist are just ones that are found in nature. There's a separate class of people who mutate either newly found opsins or existing opsins to improve different properties, but many are found. And so some are these anion opsins, which actually suppress a cell, make it harder for that cell to fire. And then there are even opsins that change other properties. There are these opsins that are connected to G protein-coupled receptors, so you can change the internal signaling state of cells. And you can do a lot of different things by pairing your particular opsin of interest with its different G protein-coupled receptors, or different other properties, to be able to...

Matt Angle:

So these are all genetically engineered membrane proteins?

Ian Oldenburg:

Yeah. All of the opsins I'm describing are genetically engineered membrane proteins. I don't know of a simply added opsin that's been used widely. I'm familiar with the older ones, maybe someone else knows.

Adam Packer:

Do you mean simply added?

Ian Oldenburg:

Not a protein, not a non-genetically modified or non-genetically originated protein.

There's the neural dust thing that Jose Carmena does, where you can cause a, I don't remember exactly how it works, but I think you can cause a small particle to vibrate very quickly with certain wavelengths. I think they're more in the microwave or the ultrasound range. And so that would be a non-protein-based way of getting something to happen. And then a few people have worked on using heat-based systems to get remote activation. So you can engineer a TRP channel, which is a heat-sensitive channel, and then it'll respond selectively to hot things, when you can heat up a cell with light, but they certainly are not as prevalent as the protein-based optogenetics.

Matt Kaufman:

There is this new ultrasound trick, right? Where you can focus ultrasound to transiently disrupt the cell membrane and allow ions to pass through it. And that can cause excitation. That's the only technique I can think of that isn't fundamentally an engineered protein.

Matt Angle:

And we talked a little bit about multi-photon approaches, but in general, what are the different ways of getting light to these actuators? Experimentally, how can you get light where it needs to go? How can you, I think the phrase the field sometimes uses is, sculpt the light?

Adam Packer:

I can take that one if you want. I think it might be my turn, possibly. So there are a number of different ways of doing this. I think to start, most simply, you can take literally a flashlight, or an LED, or a light bulb, or anything that emits light, and shine it at the brain. And oftentimes, as the name optogenetics implies, you've put your opsins into a particular genetically defined set of cells. You maybe have excitatory cells, or inhibitory cells, a certain class of inhibitory cells, or cells defined based on their projection targets, something like that. And then you only have opsin in those cells. So you just flood the tissue with light. You literally put photons everywhere by shining the equivalent of a flashlight or a torch at the tissue. And you get photons hitting all the cells, and only those cells with the opsins in them will be affected, because those opsins will absorb light, change their conformation, allow ions to flow across the cell membrane, and cause the excitation or inhibition that Ian mentioned, which ultimately drives the action potentials that Matt Kaufman here so elegantly described. So that's the simplest possible way.

Another way, where you can do a little bit more of a targeted spatial approach, is you can use a fiber. You can literally use a fiber optic to guide the light to a particular part of the brain. So you can, for example, surgically implant a fiber into the brain, so that you can direct your light in a little bit more of a, I want to avoid saying focused way, but that's in a way literally what you're doing, a more targeted way, shall we say, slightly more sculpted, to get the light where you want it.

So you may have opsin in a lot of the tissue, but you only want to activate one part of it at a time. That's a very common way of doing it. Then another thing you might want to do is pattern that photostimulation. You might want to say, target A and B but not C at one time. And then in another experimental trial, where you're testing something different about the neural circuit you're looking at, you might want to do A and C, but not B. And so there you can use different ways of patterning the light, using effectively spatial light modulators. These, as the name implies, modulate the light in space. And there are different classes of these. Some of them are little micromirror devices; they're literally what's in the projector that you use to project a slideshow or a movie on the wall.

And they literally redirect the light to different parts with an array of tiny mirrors. And people have used these very successfully for stimulating different parts of the brain spatially at different times. The other thing you can do is what Ian was alluding to earlier: you can use two-photon excitation. Now, all of the methods I talked about so far use one-photon excitation. Literally, an opsin absorbs one photon, and it changes its conformational state and allows ions to flow across the membrane. However, you can also do two-photon excitation, where, as the name implies, a given molecule absorbs two photons at the same time, having the same effect as having absorbed one photon, because those two photons each have half the energy. So that's what Ian meant when he said you sort of mimic one light with another light. You can mimic blue light with IR light, which has half the energy per photon, and then you need two photons to get to the same excited state for that molecule.
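The "two photons with half the energy" arithmetic follows directly from the photon energy relation E = hc/λ: doubling the wavelength halves the energy per photon. The specific wavelengths below (470 nm blue, 940 nm infrared) are common illustrative choices, not values stated in the episode.

```python
# Photon energy is E = h*c / wavelength: doubling the wavelength halves
# the energy per photon, so two infrared photons absorbed nearly
# simultaneously can reach the same excited state as one blue photon.
H = 6.62607015e-34   # Planck constant, J*s
C = 299_792_458.0    # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9)

blue = photon_energy_joules(470.0)   # illustrative 'blue' wavelength
ir = photon_energy_joules(940.0)     # twice the wavelength, half the energy
# two IR photons together carry the same energy as one blue photon
```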

The advantages of two-photon excitation are twofold. One, it's less prone to optical scattering. When you use longer wavelengths of light, you can penetrate deeper into the tissue, and this is because of the properties of the scattering of light. You can think about it like this: if you try to shine light through your fingertip, with a blue light versus a red light, you see more of the red light getting through the other side, because it scatters less through the tissue. So we take advantage of this to drive light deep into tissue, because often we want to interrogate these neural circuits as they're working. We want to probe the brain while it's still working in the animal. So we have the whole brain there.

We have a lot of tissue to get through, and we might want to penetrate our light deep into the tissue. Two-photon excitation allows us to be less prone to scattering. And it also generates something called optical sectioning. It allows us to direct the light in such a way that we only get excitation at the focal volume, and not outside of it. This is unlike one-photon excitation, where you don't have the restriction of needing to absorb two photons simultaneously, so you can get excitation anywhere the light is. With two-photon excitation, you only get that at the focus, where you've got such a high intensity that individual molecules can absorb two photons effectively instantaneously.
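Optical sectioning falls out of the quadratic intensity dependence. A toy one-dimensional model, with made-up beam numbers, shows why one-photon excitation happens all along a focused beam while two-photon excitation is confined to the focal plane:

```python
import numpy as np

# Toy 1-D picture of optical sectioning: a focused beam's cross-section
# grows away from the focal plane, so intensity falls as 1/area while
# the power passing through each plane stays constant. One-photon
# excitation per plane scales as I * area (constant along the beam: no
# sectioning), while two-photon excitation scales as I**2 * area
# (sharply peaked at the focus). The beam geometry here is made up.
z_um = np.linspace(-100.0, 100.0, 201)        # depth relative to focus
area = 1.0 + (z_um / 10.0) ** 2               # cross-section, arb. units
intensity = 1.0 / area                        # constant power per plane

one_photon_per_plane = intensity * area       # flat: excites everywhere
two_photon_per_plane = intensity ** 2 * area  # confined to the focus
```

In this cartoon the two-photon excitation 100 µm from the focus is roughly a hundredth of its value at the focal plane, which is the "only at the focal volume" behavior described above.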

And so this is a way that we can now target individual neurons in awake, behaving animals. We can use two-photon excitation of opsins to target those cells. And, I imagine we're going to come to this, so I'll maybe leave this as a teaser: you can combine these things. You can take two-photon excitation and couple it with a technique to pattern the light, using, for example, the spatial light modulator I mentioned earlier, to target multiple neurons that you've selected within the tissue. And that's why I think light is a particularly powerful way of controlling neural tissue: because now we have this added selectivity of targeting things, both in space and in time.

Matt Angle:

That's perfect. That leads into my next question for the group, which is, where are we with respect to optical control of ensembles of neurons? How many neurons can we access simultaneously? What's the temporal and spatial precision? What are some of the trade offs? Give us a sense of where the technology is right now. If you have an open craniotomy, access to a few square centimeters of brain, or in the case of a mouse access to the whole brain, how many neurons can you activate with what temporal precision?

Ian Oldenburg:

Well, one thing, we've talked a lot about the writing in, and I can definitely address what the limits are there, but we should also not forget that half of the game is just reading out, getting that information out from cells. Part of your answer here is how many cells can you reliably read from in an optical strategy, and what are the limits on this? And I think with both reading and writing, one of your major and hardest constraints is actually not the breadth, like how wide of a sample you can get, though there are limitations there, but how deep you can go.

And so, infrared light, as Adam just said, can travel much farther than blue light can, but because we really want to keep that spot of light that's going into the brain very small so that you have good resolution, it's still pretty limited. And I think most people go less than a millimeter into the brain with two-photon light. If you want to take... there are ways around this to get deeper-

Matt Angle:

Let's say that our target depth is 750 microns. How many neurons? What kind of resolution?

Ian Oldenburg:

Not great.

Matt Angle:

We're not going to hold you to this. We just want to get... I think the audience-

Ian Oldenburg:

Can I say 250 microns deep?

Adam Packer:

Yeah. I think it was a little more shallow. So, I think one good thing here to think about is to split it up into a few categories, right?

Matt Angle:

Sure.

Adam Packer:

So, as with any sort of optical technology, people always ask, "We want to go wider, faster, deeper, and stronger." And then eventually, when they get those things, they're like, "Wait a second. I also want to go in three dimensions. I want to go volumetrically. And I want to do real time," which I'm sure is something we're going to get to in this call. But I think to focus on the first four is easier. Sort of, how wide can you go? At what speed? With how many neurons?

Matt Angle:

Or another way would be what are some existence proofs that we have that kind of explore the different domains? Where do we have an existence proof that we can point to and say, "That was a great example of wide," or, "That was a great example of deep," or, "That was a lot of neurons."

Adam Packer:

So, maybe I'll start, if Ian doesn't mind, with roughly how many neurons we can get and with what spatial precision, and then the Deisseroth Lab group, who have blown out how wide you can go. They leapfrogged us with some beautiful technology that was published a couple years ago. And then Ian should jump in here with temporal precision because I think his paper is the one that really slam dunks that. And these are all in sort of different quadrants. You can't do all of these things simultaneously. You kind of have to choose yours. So, maybe if I sort of work through the history a little bit and then hand it over to Ian, you'll get a sense of what the numbers are.

So, what we started with was we wanted to be able to activate about 10 or 20 neurons simultaneously that we chose. So, we used two-photon excitation to target an individual neuron. We combined that with the spatial light modulator to hit more than one neuron at a time. And we did that in our original paper now from end of 2014, about 10 or 20 neurons.

And it became immediately apparent it's not going to be enough. It was like 10 or 20 neurons is not how even a mouse brain works. It's going to take more neurons than that. And to some extent, it's almost as easy as buying a bigger laser because effectively, what limits how many neurons you can hit is how much laser power you have to divide across those neurons. To some extent, if you buy a bigger laser, you can just split across more neurons, assuming you don't run out of neurons in the space that you can hit, which I'll talk about in a moment. Basically, if you don't run out of targets.
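
Adam's point that a bigger laser mostly just buys more simultaneous targets can be sketched as a back-of-the-envelope power budget. All of the numbers below (per-cell power, total laser power, loss fraction) are made-up assumptions for illustration, not measurements from any system discussed here:

```python
# Back-of-envelope power budget for multi-target two-photon stimulation.
# The per-cell power, total power, and loss figures are illustrative
# assumptions, not measurements from any particular rig.

def max_simultaneous_targets(total_power_mw: float,
                             power_per_cell_mw: float,
                             optical_losses: float = 0.5) -> int:
    """Targets you can hit at once if laser power is split evenly.

    optical_losses: assumed fraction of laser power surviving the light
    path (SLM diffraction efficiency, objective transmission, etc.).
    """
    usable = total_power_mw * optical_losses
    return int(usable // power_per_cell_mw)

# A bigger laser buys proportionally more simultaneous targets, until
# photodamage (total average power into the tissue) becomes the limit.
small_laser = max_simultaneous_targets(2_000, power_per_cell_mw=50)   # 2 W
big_laser = max_simultaneous_targets(20_000, power_per_cell_mw=50)    # 20 W
```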

Now, the real problem is we can buy bigger lasers. I mean, there exist lasers for micromachining. We're actually using some in the lab now that are also used to cut iPhone glass. I mean, that's how powerful they are. So, this is way more powerful than we need for biology. We're just splitting it up amongst lots and lots of targets. So, you actually run into a limit of photodamaging the brain.

So, our numbers, these days, with these big lasers, we've tested the photodamage limits. We know how much we can put in. Basically, we figured that out, by the way, by turning it up beyond what's okay, figuring out where the damage limit is, and then dialing it back a bit. So, we know where that limit is. We can hit 100 neurons or so. And we can do that every-

Matt Angle:

So, you've developed a very advanced lesioning technology is what you're saying.

Adam Packer:

Oh yeah. If you want, you can turn it up to 11 and do ablation. And people have done that, actually. In two-photon excitation, people have done this in a very... Simon Peron, for example, recently had a beautiful paper, ablating a particular ensemble of neurons that code for something in somatosensory cortex and showing the effect that had. So, this is actually also a useful way of doing this. If you want to know how something works, take a part of it away and see what happens. There's always issues with interpreting those experiments, but they did a great job with all the controls in that paper.

But getting back to your question. How many neurons can you hit? With what precision? Over what area? At what depth?

So, we can hit about 100 at once. And we can do that every 10 or 100 milliseconds or so and then hop to a new and different 100 neurons. So, if you just do a very back-of-the-envelope calculation, if it's 100 neurons every 10 milliseconds, you can say, "Well, we can do that 100 times, very roughly, in a second. We can now do 10,000 neurons in a second."
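
Adam's back-of-the-envelope calculation can be written out directly; these are the numbers from the conversation, nothing more:

```python
# Back-of-envelope stimulation throughput: an ensemble of N neurons,
# hopping to a fresh ensemble every hop interval.

def neurons_per_second(ensemble_size: int, hop_interval_ms: float) -> float:
    hops_per_second = 1_000 / hop_interval_ms
    return ensemble_size * hops_per_second

# 100 neurons every 10 ms -> 100 hops per second -> 10,000 neurons per second
per_second = neurons_per_second(ensemble_size=100, hop_interval_ms=10)
```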

But to be honest with you, that's beyond the limits of what we can read. So, getting back to what Ian mentioned earlier, we actually don't have 10,000 targets to hit in a second that we need to photoactivate. When you start activating that much of the tissue, you're getting back to almost a one-photon experiment where you're just bathing the tissue in light and hitting it all at once. You might as well just be sticking a wire in and turning the electrical current on and lighting up the whole area. Now, some people would say, "Okay, well, it's the temporal precision that matters. If you can hit these 100 neurons in this particular sequence, maybe that's how the brain works."

I think now, instead of going to wider, I want to pass it back to Ian because he can tell us more about temporal precision in terms of when exactly can you get the spike? What sort of precision do you have with that? Because that's where his technology, I think, really shines.

Ian Oldenburg:

Yeah. So, to get temporal precision... I mean, a big piece is how you construct your entire multiphoton optogenetic system. There are a lot of things that go into how you activate a cell. And if your goal is just to get a cell activated fastest, you first need a very fast opsin, one that's going to respond to light very quickly. And then you want one of these strategies that gets all of your light to the cell at the same time.

Matt Angle:

When we say, "Very fast," can you talk a little... what are the time constants of the different opsins, just to give us a sense?

Ian Oldenburg:

You want it to turn on in a millisecond or less. You want the jitter of an evoked action potential to be a millisecond or less. And-

Adam Packer:

And this is because you think action potential precision in the brain that matters is at the one millisecond level. Right? Which could be true. 

Matt Angle:

Let's punt on that though because I want to get to Matt next. And so, I want to kind of get a quick answer from Ian.

Ian Oldenburg:

Yeah. So, yeah, I think a millisecond is about the resolution that I care about, and therefore we engineer opsins to be fast enough to hit that level. So, from basic cell properties, just injecting current into a cell, you can inject and try to get it to spike. You can inject a little bit of current over a long period of time, 10, 20, 50, 100 milliseconds, and it doesn't take much current to activate that cell, but the consequence is you don't know exactly when that cell is going to fire. It's going to depend on all the other inputs and, as Matt mentioned, the particular threshold of that cell, the point it needs to reach to fire.

In contrast, you could alternatively pack all of your depolarization, all of your current, into a single pulse that's very brief, say, a couple milliseconds long. Then you know that the action potential is going to be a lot closer in time, its timing is much better known, but now you need much, much more current. So, the analogy in opsin space is that if you have a very fast and very strong opsin, you can activate cells much more quickly.
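
Ian's trade-off can be illustrated with a toy leaky integrate-and-fire simulation: a weak, long current pulse reaches threshold with little current but with timing that depends on noisy background input, while a brief, strong pulse fires the cell almost immediately with far less jitter. Every parameter here is an illustrative assumption, not a measurement:

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: compare spike-time jitter for a
# long weak current pulse versus a short strong one, with Gaussian noise
# standing in for "all the other inputs". Parameters are illustrative.

def spike_latencies(i_amp_na: float, pulse_ms: float,
                    n_trials: int = 200, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    dt, tau, r_m = 0.1, 20.0, 100.0          # ms, ms, megaohms
    v_rest, v_thresh = -70.0, -50.0          # mV
    n_steps = int(100.0 / dt)                # simulate 100 ms per trial
    latencies = []
    for _ in range(n_trials):
        v = v_rest
        for step in range(n_steps):
            t = step * dt
            i_inj = i_amp_na if t < pulse_ms else 0.0
            i_noise = rng.normal(0.0, 0.1)   # background input, nA
            # Euler step of the membrane equation (r_m * nA gives mV)
            v += dt / tau * (-(v - v_rest) + r_m * (i_inj + i_noise))
            if v >= v_thresh:
                latencies.append(t)          # record first spike time
                break
    return np.array(latencies)

# Weak 100 ms pulse: little current, but the crossing time is noisy.
long_weak = spike_latencies(i_amp_na=0.25, pulse_ms=100.0)
# Strong 2 ms pulse: much more current, spike lands almost immediately.
short_strong = spike_latencies(i_amp_na=3.0, pulse_ms=2.0)
```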

Now, this is a harder ask. So, I think the most powerful opsin that I've seen is this one that the Deisseroth Lab just published. You look at it cross-eyed and it'll spike the cell. It's super potent. But it's also very slow. So, you can use very, very little light to cause these cells to spike, but you have much worse control over when they're going to fire. And maybe you can combat that by having a short pulse of light, but you'll still have this inability to make them fire very fast by stringing many pulses back to back. So, if you care about writing fast and precise patterns of action potentials, you really want an opsin that is both fast and strong.

Now, this is still only half the game. Getting an opsin that responds to your light correctly is only part of it. You also need to get all of your light to the cell as quickly as possible. So, I like to describe the multiphoton optogenetic strategies into two worlds. There's the one where you take the smallest possible dot of light and you spiral it around on a cell. This has the best possible optical resolution. So, you have the least amount of crosstalk and mistakes, which we should talk in depth about, but the penalty of this is that it takes a non-zero amount of time to actually spiral that spot around a cell.

In contrast, you could take one of these other approaches where you take a larger spot of light and you just flash the cell all at once. Now, because light propagates, it moves forward along its axis, imagine shining a flashlight at a piece of paper, there's always going to be a little bit of light in front of that piece of paper and behind. And there's no combination of mirrors that you can use that will actually totally eliminate the light in front of and behind your cell.

So, we use a lot of different tricks to try to minimize the ability of that light to activate cells, but anytime you're going for a larger spot, you're going to get more off-target activation, so it comes with penalties. But by having a larger spot, you activate all of your cell as quickly as possible. Combine that with a very fast opsin, and you can get that cell to spike very quickly.

And so, then the next question is, okay, it's a sort of easy thing then to write in whatever pattern you want. If it takes a millisecond to cause a cell to fire, you can control your laser at, what is it, megahertz rates. I don't really remember how fast our EOM goes. It's way faster than you need there. So, you can write any pattern into one cell that you want.

But we don't care about one cell, or we don't just care about one cell; we care about many cells. So, if you want to change up who you're talking to at any given moment, you need to change the pattern on your spatial light modulator, on your SLM, to be able to change targets. And this is something that has been the focus of a number of different groups. The Deisseroth Lab approached this by using two SLMs that they shuttered quickly between. The Emiliani group has worked on a strategy where you take one SLM, but you put different patterns on that SLM that you can switch between. And then there's also been just a really convenient increase in the capabilities of SLMs. So, the SLM that I like the best natively operates at about 300 hertz. That means you can interleave different patterns of cells quite quickly. You can change up who you're targeting at 300 hertz. So, you can cut that into two different patterns at 150 hertz, or however many patterns you want, with a corresponding slowdown.

Matt Angle:

And so, with this approach, roughly how many neurons can you access? What percentage of them at a time? What spatial resolution? Can you give us a couple? I know that you trade off, but can you give us a couple examples just to build a kind of mental picture?

Ian Oldenburg:

Yeah. So, the downside of the fast writing approach is that it's more power-intensive. So, for the upper bound of how many... the Adesnik Lab just published a new opsin, which has raised the ceiling in our hands. I think they show 200 cells at a time. Maybe it's only 100. Something in that ballpark with this fast approach. But as it's only taking a couple milliseconds to fire a cell, for the total number of spikes per second, consider the update rate of 300 hertz times 200 cells, or 100 cells: you're talking tens of thousands of spikes per second, divvied up however you want.
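
The spike-throughput arithmetic here is simple enough to sketch. The 300 Hz refresh rate and 100 to 200 cells per pattern are the ballpark figures from the conversation, not the spec of any particular instrument:

```python
# Rough spike-throughput budget for the "fast flash" approach:
# SLM refresh rate times cells per pattern gives an upper bound on
# spikes per second; interleaving N patterns divides the refresh rate.

def spikes_per_second(slm_rate_hz: float, cells_per_pattern: int) -> float:
    return slm_rate_hz * cells_per_pattern

def rate_per_pattern(slm_rate_hz: float, n_patterns: int) -> float:
    # Two interleaved patterns on a 300 Hz SLM each update at 150 Hz.
    return slm_rate_hz / n_patterns

budget = spikes_per_second(300, 200)        # "tens of thousands" per second
two_pattern_rate = rate_per_pattern(300, 2)
```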

But then, of course, as Adam says, we have the same problems of you don't really have tens of thousands of cells that are in your volume that you care about. And you probably can't read out a single spike in calcium imaging as-

Adam Packer:

So, I want to throw some numbers out here, because I really enjoy this. Can we put some hard, fast numbers on this? So, in our original paper, across half a millimeter by half a millimeter of cortical tissue, roughly, we could do 20 neurons at once with around 10-millisecond precision, that's the jitter of when the spike occurs. And we could do that every 20 milliseconds or so.

Now, we then bought more powerful lasers. We could do 100 neurons. But in terms of the precision and the speed, it was pretty much the same. We just bought more neurons with a bigger laser.

I think, Ian, correct me here if I get the numbers from your 2017 paper wrong, but you guys could also do 50 neurons or more, dump a lot of laser power on. You can't do it that often because things might heat up, but you can do it every so often if you know exactly which neurons you want to hit, with much better, closer-to-millisecond-level precision. So, the technology keeps leapfrogging.

And also, just to get back to wider, the area over which we can do it, the Deisseroth Lab paper from 2019 upped it to a millimeter-by-millimeter field of view with millisecond precision in terms of writing the activity, and also the ability to switch between different patterns with millisecond precision.

So, it's ratcheting up with time. We went from 10 neurons to 100 neurons; 10 milliseconds down to a millisecond. All of us are pushing hard on the laser manufacturers, the opsins, and the SLMs to keep increasing this by orders of magnitude until we figure out what level of precision we need to understand how the brain works.

Matt Angle:

I'd like to revisit some physical limits in a little bit, but first I want to switch over to a different thread and it's a place where I'd like to bring Matt in, which is in most electrophysiology methods, whether you're stimulating or recording, you're often looking at a sparse subset of cells compared to the entire population.

Now, in recording, if I think especially about work that's been done with the Utah Array and some of your work in particular, we're taking advantage of the fact that the activity and the kind of coding space is much lower-dimensional than the number of neurons in the network and therefore, even a small number of neurons, if you have access to record from them, you can make an inference about the state of the system. But when it comes to controlling the system, I think it's not obvious that access to only a small number of nodes will allow you to drive the system. And I'm wondering if you could comment on, is there good theory there? Are there good proof points? How do we know how many neurons we need access to actually drive the system into the modes that we're looking for?

Matt Kaufman:

This is a great question. So, first of all, it depends on where in the pipeline you are. So, if you're way over on the sensory side versus way over on the motor side versus somewhere in the middle, you're going to see very different things.

So, on the sensory side, your brain is constructed such that you can detect incredibly faint stimuli. So, we've known for decades and decades and decades that if you actually take the sensory periphery, if you look at the retina, if you wait in a dark room for an hour or two and you fully dark adapt, your retina can detect somewhere between three and five photons hitting it at about the same time. You're phenomenally sensitive. These aren't engineered proteins. These are your normal photoreceptors.

Similarly, you take a mouse which lives largely by its whiskers and its nose. So, you take a mouse and you do tiny, tiny stimulations of the whiskers. It's sensitive to activation of just a handful of neurons.

Now, if you go in and you do the optogenetic manipulation and you activate a handful of cells in the primary sensory cortex for any of these different sensory modalities, whether it's audition or vision or somatosensation, you can activate just a handful of neurons and the animal can tell you that, yeah, they felt that. And you can do this with these sorts of targeted stimulations. You can do this using very low power such that you can check that you're activating only a handful of neurons. People have done this a variety of ways.

If you activate just a handful of neurons in the sensory areas, the animal can detect it. And there's some evidence now actually that this may be true anywhere in the brain, that if you ask the animal, "Hey, did you feel that?" They can say, "Yes," accurately, that they can detect that only even a small number of neurons were activated.

But that's only half the question. Can you get an animal to intentionally, if you like, amplify these very small inputs? The answer to that seems to be yes. But when you're trying to make the system dance, when you want to make the system do exactly what you want it to do, that's a very different question because now you're trying to drag it away from what it wants to do and make it do what you want it to do.

So, that, we won't really know until these guys manage to get their technology to a point where folks like me can stick it in our lab and test our hypotheses, but this notion of low-dimensional dynamics gives me some hope. So, to unpack what that is a little bit, this idea is basically that the activity of neurons isn't independent. Unsurprisingly, they're all connected in a network. They better be cooperating to do what they do. So, what that means is that you'll observe, when you record from the brain in a bunch of different contexts, you'll observe only a limited number of the possible patterns of activity you get.

So, to oversimplify it, some neurons are going to be correlated. Some neurons are going to be anti-correlated. Some of those individual pairs may be very strongly correlated. Many of them are going to be weak. But once you look over a large number of neurons, you can predict very well what the activity of Neuron Number 1,000 is if you understand its relationship to Neurons 1 through 999. So, that's this notion of low-dimensional dynamics, that you can't get any possible pattern. The neurons are, in some sense, locked into a subset, a subspace, if we use a mathematical term, of the possible different patterns.

And more than that, it's not just a low-dimensional system, it's also a dynamical system. So, if you have one set of firing rates, you can predict very well, at least in motor cortex, what the firing rates are going to be 100 milliseconds later because it turns out that these firing rates evolve sensibly over time. That's just to say there are rules that the neural state, as we usually put it, has to obey over time.
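
The two ideas Matt describes, low-dimensional population activity and lawful dynamics, can be sketched on synthetic data: a rotating 2-D latent state drives 50 noisy model neurons, so a couple of principal components capture most of the variance, and a fitted one-step linear map forecasts the next time bin well. Everything here is simulated for illustration; it is not an analysis of real recordings:

```python
import numpy as np

# Synthetic sketch: (1) many neurons' rates share a few latent
# dimensions, so the population variance is low-dimensional;
# (2) the state evolves lawfully, so the next time step is predictable
# from the current one with a fitted linear map.

rng = np.random.default_rng(1)
n_neurons, n_steps, dt = 50, 500, 0.01        # 50 cells, 10 ms bins

# Latent 2-D rotational dynamics, an oversimplified stand-in for the
# low-dimensional trajectories reported in motor cortex.
theta = 2 * np.pi * 1.0 * dt                  # 1 Hz rotation per bin
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
x = np.zeros((n_steps, 2))
x[0] = [1.0, 0.0]
for t in range(1, n_steps):
    x[t] = A_true @ x[t - 1]

W = rng.normal(size=(2, n_neurons))           # latent-to-neuron weights
rates = x @ W + 0.05 * rng.normal(size=(n_steps, n_neurons))

# (1) Low dimensionality: two principal components capture nearly all
# of the 50-neuron population variance.
_, s, _ = np.linalg.svd(rates - rates.mean(axis=0), full_matrices=False)
explained_by_2 = (s[:2] ** 2).sum() / (s ** 2).sum()

# (2) Lawful dynamics: fit a one-step linear predictor on the observed
# population and measure how well it forecasts the next time bin.
A_fit, *_ = np.linalg.lstsq(rates[:-1], rates[1:], rcond=None)
mse = np.mean((rates[1:] - rates[:-1] @ A_fit) ** 2)
```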

So, if you understand both what the low-dimensional system is doing and you understand the dynamics so that you can work with the system to push the state around in a way that it likes to push the state around, then I think we have a much better hope of writing in patterns that are cooperating with the brain instead of fighting it, and hopefully let us write in sensible patterns instead of just writing in the equivalent of white noise.

Matt Angle:

I'm thinking of an analogy. I'm a limited person. So, when I have to think, sometimes I think about a football stadium, if you've ever been to an event where people do the wave. So, you would only need to watch a small number of people to know if the stadium was doing the wave, but I don't know if you've ever been to an event where a couple of drunk people stand up and try to get the wave started, and it dies out with their next-door neighbor. That's an example of a case where a very sparse subset lets you know it's happening, lets you understand the low-dimensional dynamics, but you actually need a critical mass of people standing up and waving their arms to get the wave going.

Adam Packer:

Or you need the announcer. You might need an announcer to get on the announcement system and say, "Okay, we're going to do the wave now." And suddenly, it doesn't take that many people to kick it off, right?

Matt Angle:

I guess what I mean is, do we have a sense, are there first principles... We know a little bit about connectivity. If we know a little bit about connectivity in the network, and we know what the correlations are, can we start to bound it? Is there a way that, in the same way that I pushed Ian and Adam kind of uncomfortably to put down hard numbers, we can start to get a sense of what you definitely need? I think probably we'd all agree you don't need to control 99.9% of neurons. And I think we'd also probably agree that less than 1% is probably not enough to drive the network, since they're all kind of weakly coupled. Or maybe I'm wrong. Maybe actually that's not right. Is there some way we could start to close that uncertainty window?

Matt Kaufman:

Yeah. So, first of all, I won't accept that 1% isn't enough.

Matt Angle:

Okay, okay. Good. That's great.

Matt Kaufman:

So, part of the reason for this is when you do electrophysiology, you stick electrodes in the brain and you hear neurons chattering away and you go, "Hey, man, the brain's a really active place. When I have the animal do something that's related to what this area does, then I sure hear a lot of neurons."

But there had been this really suspicious thing for decades, which is that if you calculate how many neurons you should be able to hear from the end of an electrode, it was much larger than the number of neurons you actually ended up recording when you did that spike sorting, when you figured out which spike came from which neuron. Which neuron is which, you just assign them a letter or whatever.

So, we also knew, just from energy calculations, that if all of the neurons were as active as the neurons we recorded electrophysiologically, the brain would boil. There's no possible way that the energy expenditure could have been consistent with that number of spikes. So, we knew that most neurons had to be quiet.

And the first thing that most people see when they start doing calcium imaging, after they get over the initial shock of, "Oh man, I can actually see this thing I've been visualizing the whole time," the sort of first thing you notice is, "Wow, a lot of these neurons aren't doing much." Most of the neurons are really pretty quiet.

Matt Angle:

How sparse is activity in cortex?

Matt Kaufman:

That's a much harder question to answer than, "Will 1% be enough?"

Adam Packer:

I'll give you a number for somatosensory cortex, if you want.

Matt Angle:

Sure.

Adam Packer:

About 20% of the neurons are active in response to a given stimulus that you put in. And it also seems that for many neurons, we can't exactly find the correct stimulus. Now, we can argue about how you would go about doing this. Do you need to try every possible stimulus? But the bottom line is that not that many neurons fire, and they fire not that many spikes, in somatosensory cortex. They also maybe only fire one or two spikes to the stimulus they do prefer. So, that is quite interesting. It gets at Matt Kaufman's point here. The activity is quite sparse. I mean, somatosensory cortex is particularly sparse, but still, it's sparse.

Matt Kaufman:

And that's when you're driving a sensory area. So, the best estimates I've seen of overall brain activity are that the average neuron fires somewhere between 0.1 and 0.01 hertz. In other words, a given neuron, not just some neuron somewhere, is firing on average somewhere between once every 10 seconds and once every 100 seconds. Most neurons are very quiet most of the time, at least in cortex.

Ian Oldenburg:

And pyramidal cells. We should specify that we're talking about pyramidal cells.

Matt Kaufman:

Yes, thank you.

Matt Angle:

Now that we're talking about how sparse these spikes are, I think this would be a reasonable time to let Ian in with his question about the importance and precision of spike timing.

Ian Oldenburg:

Sure. I mean, is spike timing important? Does anyone know the answer to this? That's the question, right?

Matt Angle:

I think in the retina, it's very clearly important. In early sensory areas, it's very clearly important.

Ian Oldenburg:

It's very clearly encoded, yes. And definitely in auditory cortex, you can see neurons phase-locked to the sound that they're hearing. We can see very precisely timed spikes going on in many cases. There's this whole concept and theory of spike timing-dependent plasticity being important for how the brain changes what it does throughout the life of the animal.

But it's also a very hard thing to test. So, we can see hallmarks of timing at the millisecond scale or even finer being real and having real physiological consequences, but to demonstrate that this is something that's used by the animal is actually a lot harder. And I think the major problem with this is it's a hard dimension to probe. How do you get really precise, added activity on a fine scale? Yeah, sure. You could stick a couple cattle prods in the brain and synchronize them and say, "Okay, this area is going to be synchronized with that."

But if you really want to get in this very nuanced region, okay, this circuit wants cells A and B to be synchronized and critically C to be asynchronous or to be delayed by 10 milliseconds or a hundred milliseconds or a second. This isn't a type of experiment that can be done to a first approximation. So, I don't think there's a lot that's known about how important is it. We just keep seeing these signs that neurons are controlling their timing, and so we think that that must be important.

Matt Kaufman:

I want to add one thing to that, which is that there are a few different theoretical outlooks in terms of what matters in terms of coordination between neurons. So, there are some folks who think that the key thing is assemblies, that it matters that this set of neurons fires at the same time or with small delays, and then this other set of neurons fires next, that it's really about the set of neurons firing, and the usual assumption there is that those neurons are wired more strongly together. There's an alternative hypothesis, which is the idea of sequences, that it's really this neuron and then this neuron and then this neuron, and it's sequences that preserve information. Maybe there are forks in what the sequence can be that are implementing the processing that an area has to do.

And then, there's this notion of dynamics, where you say, "Okay, each neuron is sort of related to an underlying state," but exactly which neuron fires really doesn't matter as long as the state comes out correct. And it's the trajectory of the state over time that is controlling things, because if two neurons are projecting downstream, the downstream target doesn't really care whether neuron one fires, or neuron two fires, or both fire a little. It doesn't matter. So, those three different ways of looking at the world, I think, lead to different experiments that you would do with these many-neuron control systems.

Matt Angle:

Although, it seems like most people, I think, would agree that all three of those things are true. I mean, they're definitely not mutually incompatible.

Matt Kaufman:

Yeah, they may be compatible. The mathematics of that haven't been worked out. It may be that different ones of these principles are more important in different brain areas, right? The brain is not one homogeneous mass. Different brain areas really are different. So, figuring out which of these principles applies where... There are these histories of approaches in different systems, and part of that depends on the tasks we have the animals do. Part of that depends on what recording modality is most prevalent in that area. There's really a lot of historical baggage in terms of how people approach different questions and which approach is right for any given area, for any given ecological problem, when an animal is navigating versus sensing something faint versus making a decision versus doing some complex motor task. We really just don't know.

Adam Packer:

I think, yeah. Also, just to riff on the importance of spike timing and what we need, I think it's always just a question of precision. We know that the timing of spikes matters at some level, right? If an auditory stimulus comes in and I don't respond for 10 seconds because my neurons don't respond on that timescale, that's going to be a problem. So, we know there's a limit. Like you said, can we bound it at the extremes? It's just a question of exactly that: what level of precision is it that matters? And I think this is why we need to have these tools that allow us to take control at increasing levels of precision until we find one where we can take control of the system, inject spikes beyond the level of precision that is needed, and then jumble them up and say, "Oh, it no longer matters."

If, at sub-millisecond precision, we don't see a difference when we jumble the spikes at that temporal scale, then that's what we need. And I think this actually comes back to something I wanted to riff on much earlier, which is, how much control do we need? How can we fight the system? Because it's something that Matt Kaufman said earlier: we don't want to be fighting what the brain is trying to do. The natural, low-dimensional evolution of those dynamics, the trajectory of neural activity across the thousands and thousands of neurons, is in some sense, I don't know if hardwired is the right word, because I don't mean hardwired from birth. I just mean that the structure of the circuit, the connectivity, only allows certain patterns of activity to evolve. For example, again, you can define the extremes that you know are not possible.

It's never true that all of the neurons are silent at the same time. It's never true that all of the neurons are spiking at the same time. So, there are bounds of this kind on the patterns that are possible and those that aren't. And the question is, can we take control of the correct amount of it in the correct way, a way that respects the built-in dynamics, right? The structure of those dynamics is in some way instantiated in the connectivity of this circuit and the way the dynamics evolve. So I can tell you from our experiments, when we whack the brain with what we think is a pretty hard stimulus, right, we hit a hundred neurons at once in 10 milliseconds, in a small area, in a chunk of the brain that we think does kind of one thing, one part of somatosensory cortex that responds to inputs on a given whisker, the brain shuts that down immediately.

It's like, in no time at all, it's like, "No, no, we're not doing that. You didn't tickle me in the way I like to be tickled. We're not doing that. We're doing the thing we said we were doing a second ago. I don't care what you say." And that means we don't know the dynamics that matter. We have not hit on the correct pattern. We have not found the announcer in the football stadium that tells the neurons, "It's time to do the wave," right? We can take control momentarily because we're over-driving these neurons, essentially, to do something they're not wanting to do at that moment. We now have the power to do that, but we haven't yet found the key that unlocks the natural dynamics, of making things unfold in a certain direction, of neural patterns happening that were not going to happen before we turned our lasers on.

Matt Angle:

How do we search that space, given that the number of possible inputs is very, very large? Matt, do you have any insight into what sort of optimization techniques you have to use in order to find the right way to tickle?

Matt Kaufman:

Yeah, I think this is something where hypothesis-driven science is the way. You have all these different hypotheses that different theories have advanced, that different analyses of ongoing activity have advanced, in terms of sequences and dynamics and assemblies and all these different models. I think what you do is you record and you see what the brain likes to do. First of all, you try to just straight-up recapitulate that. And to take this football stadium wave analogy a little further: it may be that the right thing to do is to take one little pocket of people that you control and make them all do the wave at once. Or it could be that the right thing is to take one section of the stadium, even though you can only get them sparsely, and have a group of people in a sparse checkerboard all do the wave. Or the right thing might be to get half of the people you can control to do the wave at this second,

and then a second later, you have the chunk one over do it, so that people know which direction it's going. These are different hypotheses about how you might exert control on the system.
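For what it's worth, the competing hypotheses Matt lists can be written down literally as spatiotemporal stimulation masks. A toy encoding (site counts and timing entirely made up) might look like:

```python
import numpy as np

# Encode the stadium-wave hypotheses as boolean stimulation masks over a
# 1-D strip of 20 addressable sites across 4 time steps (numbers invented).
n_steps, n_sites = 4, 20

pocket = np.zeros((n_steps, n_sites), dtype=bool)
pocket[0, :5] = True                  # one dense pocket, all at once

checker = np.zeros((n_steps, n_sites), dtype=bool)
checker[0, ::2] = True                # a whole section, but only sparsely

sweep = np.zeros((n_steps, n_sites), dtype=bool)
for t in range(n_steps):
    sweep[t, t * 5:(t + 1) * 5] = True  # a block that shifts each step,
                                        # making the travel direction explicit

for name, mask in (("pocket", pocket), ("checker", checker), ("sweep", sweep)):
    print(name, int(mask.sum()), "site-activations")
```

Framing the hypotheses this way makes the search problem concrete: each theory of control is just a different mask family, and the experiment is asking which family actually propagates.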

Adam Packer:

That's the beauty of neuroscience, right? We have all these different researchers trying all these different things. You've got people like me and Ian who are trying to do sparse but highly specific control, and you've got the people using one-photon technology or electrical stimulation to do the broader but less specific, less exact version. And I think this is why it's so much fun to do neuroscience: we don't know what the correct answer is. We've got to try all the things until we have some inkling of the way forward, and we need the theoreticians. Ian, jump in here, because you've probably had this experience: you give a talk to a theoretician and you say, "Here's a field of view of 500 neurons, and they're doing this thing. Tell me which ones to tickle. I can tickle any one you want. I can do a hundred at a time, every 10 milliseconds."

You say, "Which ones do I need to turn on to make something happen in those neural dynamics a second from now?" Every theoretician in neuroscience right now will give you a different answer about which ones to tickle, based on their own model. And I'll tell you what, we've got to try them all until we figure out who's right.

Ian Oldenburg:

But I'd also add, from our doing-the-wave analogy, that I don't think any of the strategies we've just suggested will actually cause the wave to happen, right? Because if you get the whole stadium to stand up at once, or even a whole section to stand up at once, that's not a wave. That's a group of people standing up. That's something different. If you get five people who are all next to each other to stand up, even in a sequence, someone three seats away isn't going to do it. If you have 50 people throughout a section of the stadium standing up synchronously, that's not a wave either. And whether 50 people standing up in the right sequence will cause the wave to happen is not guaranteed either.

So I think the space is good, but even in a case where we understand exactly the idea and the principle and the psychology behind doing the wave, it's still not straightforward how often and how reliably you can cause it to happen if you're not hitting it exactly right. My best bet for how you actually get the wave to happen, other than having an announcer, is to have a group of people, somewhat distributed through an area, standing up in the right sequence. But you don't know that sequence, you don't know those rules, and you don't know who can see whom. It probably also has to travel in the right direction: if they start at the back of the audience, it's not going to work. If you don't know these rules, it's a very large space that you're going to have to search through.

I mean, I think our best bet is just to try to mimic exactly what we see, and see what happens. If you replicate exactly what you see, to the best of your ability, how much of what you expect to happen actually results? If you, to the best of your ability, try to recreate people standing up, do you start a wave? And what are the rules? This is my idea of how you build up the intuition of knowing which experiments to do, to actually see how you cause a wave, or how you replicate neural dynamics: you start with what you can see and then you build off of it. But that segues into the limits of what we can see.

Matt Angle:

Well, we actually have another podcast with Elizabeth Hillman and Mark Schnitzer and Jacob Robinson talking about the limits of what we can see, so we won't go into that too much today. But maybe, as we get to the second half of this, we could start to prognosticate a little bit. Adam, you talked about how the engineering aspects of this continue to get better: we get better lasers, faster spatial light modulators, and the Deisseroth Lab and Boyden Lab have been cranking out better opsins. Where do we think some natural limits are? Where do we think we're going in the next couple of years? What's the bottleneck? Is there one particular bottleneck? Can you paint us a picture of what the next five years might look like?

Adam Packer:

So, I think you're definitely going to start to see more and more control over these circuits in terms of pure numbers: how many neurons can you stimulate, over what area, at what depth? For example, there was recently a paper from Alipasha Vaziri recording a million neurons. And I already told you the limit of what we can photoactivate per second: maybe 10,000 spikes, distributed across different sets of a hundred neurons at a time. Those limits will get better. One thing that will open the door, that's going to change things quite a bit, is voltage imaging. I mentioned a sort of Achilles' heel in the way we're reading out, which is calcium imaging.

If we use a direct readout of voltage, we can read with better precision, which tells us what we need to write back in order to test theories of how much that precision matters. So, we're going to start to see better readout. It will start with clever strategies for doing small numbers of neurons at a time, because voltage is a weaker signal and happens over a much shorter period of time than calcium, so it's fundamentally harder to catch. I think that's going to be one big avenue.
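A quick simulation of why the slow calcium readout limits precision (the kernel time constants here are rough, illustrative values, not measurements of any particular indicator): two spikes 10 ms apart barely dent a slow calcium trace between them, but a fast voltage signal returns almost to baseline.

```python
import numpy as np

dt = 0.001  # 1 ms bins
t = np.arange(0.0, 2.0, dt)
spikes = np.zeros_like(t)
spikes[[500, 510]] = 1.0  # two spikes, 10 ms apart

def exp_kernel(tau, dt, length=2.0):
    """Unit-peak exponential decay kernel with time constant tau (s)."""
    return np.exp(-np.arange(0.0, length, dt) / tau)

# Calcium indicators decay over hundreds of ms; voltage over a few ms.
ca = np.convolve(spikes, exp_kernel(0.5, dt))[: len(t)]
vm = np.convolve(spikes, exp_kernel(0.002, dt))[: len(t)]

# How far does each trace fall back between the two spikes?
ca_dip = 1 - ca[501:510].min() / ca[500]
vm_dip = 1 - vm[501:510].min() / vm[500]
print(f"dip between spikes: calcium {ca_dip:.0%}, voltage {vm_dip:.0%}")
# dip between spikes: calcium 2%, voltage 99%
```

With only a 2% dip, noise easily erases the evidence that there were two spikes rather than one big one, which is the "harder to catch, but worth catching" trade-off Adam describes.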

Matt Angle:

And I think another Adam, Adam Cohen, has been very instrumental in pushing this.

Adam Packer:

Yeah. He has a beautiful paper on all-optical electrophysiology, where they're actually controlling the voltage and reading the voltage more accurately. But they can really only do this in slices of tissue, because it currently doesn't work very well with two-photon excitation in vivo. There are a lot of new tools being generated that are enabling this, though; some of them we're working on in the lab right now, actually. But I think what we need is not more neurons over more time with more precision over larger areas. We need better hypotheses about which neurons to target. We need to know which knob to turn. We need to know who the announcer in the stadium is, and what it is they need to say. If they're in England, they need to say, "Let's do the wave." If they're in another country, they need to say something else that makes those people do the wave.

So, that's the part we don't know, and I think this is where we need to up our game: in our ability to drive sequences on the fly. So we can start to say, "Okay, it's not just that I want to hit these 10 neurons because I saw them doing something on a trial 10 seconds ago. I want to change the dynamics based on what I see right now. We need to feed that back into the system." This gets at the control theory that I think Matt Kaufman here is an expert in, so I'm going to segue to him in a moment and hope he can tell us what we actually need. Because, going back to your question of what percentage we need to take control of: I think we don't really know, because we have not yet respected the dynamics of the circuit.

So to me, that's where we really need to go: we need to understand what those manifolds are, what patterns are possible in the neural state space, and how we can manipulate those to test, in a causal way, which of them matter. It's not just a numbers game of milliseconds and spike counts over some cortical area. It's how carefully and how faithfully you can replicate or manipulate the dynamics of relevance, which we don't yet have a complete grasp on.

Matt Angle:

Do you think there will be an increase in the numbers, though? For instance, if we kept the opsins the same, I imagine at a certain point Adam gets the Dr. Evil laser and we can pump even more energy in, but obviously there's an energy limit there. Are we limited right now by energy, where we have the right engineering tools but need too much energy, so we should work on the transducer side? Or are we engineering-limited? If you could shout out to the world right now, who should get their ass in gear and get three times better, so that our whole system gets three times better?

Ian Oldenburg:

I'd say opsins. Based on our work looking at SLMs (and our approach is not as efficient as maybe some others), we can get 600 points easily, maybe even a thousand points, without any reduction in contrast. That is, the point you're trying to illuminate is maximally bright, with no spillover onto a point you're not trying to hit. So a thousand neurons per instant, per millisecond, is totally within the ability of the SLM. The SLM might not be able to take the heat necessary (in our hands, it's not entirely clear where things break down), but it doesn't seem like the SLM is the problem. We run out of laser budget faster than we run out of anything else. That's most easily solved by having better opsins that require a lower laser budget, and better expression, because when you infect an area with opsin, some percentage of your cells are the ones that got lots of copies of the opsin.

And so, they're the most stimmable. In some cells you can see a little bit of faint red, indicating that the cell has the opsin, but it's not really stimmable in your hands. So, double or triple or 10x the conductance of your opsin and you've solved all of those problems.
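The "laser budget" bottleneck Ian describes is simple division. A sketch with hypothetical numbers (neither figure is from the episode or any published system):

```python
# Why the laser budget runs out before the SLM's point budget does.
# Both numbers below are hypothetical, for illustration only.
total_mw_at_sample = 1500.0  # deliverable two-photon power at the sample
mw_per_neuron = 6.0          # power needed to drive one opsin-expressing cell

max_targets = int(total_mw_at_sample // mw_per_neuron)
print(max_targets)           # 250: short of an SLM's ~1000 addressable points

# A 3x more sensitive opsin triples the simultaneous-target budget:
print(int(total_mw_at_sample // (mw_per_neuron / 3)))  # 750
```

This is why opsin sensitivity, rather than SLM point count, sets the ceiling in Ian's account: every improvement in required power per cell converts directly into more simultaneously addressable neurons.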

Matt Kaufman:

If the cell isn't too sick to work.

Ian Oldenburg:

Yeah. So then that's the other question: how much better can our opsins get? Obviously, there's a trade-off between speed and conductance. A slower opsin, one whose pore closes slower, will conduct more than an opsin whose pore closes faster, all other things being equal. So you can slow down your opsins and use less laser power to activate them. I don't like that idea, for various reasons we've discussed, but it works. So, how far can we push? Make a bigger pore, make it more selective for cations, do something different. These are all unsolved questions, but it's telling that all of the existing opsins are in the same ballpark of conductance. Everybody who optimizes an opsin for two-photon ends up in the range of one to three nanoamps of current. That's about the upper limit anyone's shown.

No one's shown a 10-nanoamp opsin, even in one photon. I think people have been close, but certainly no one's shown a hundred nanoamps. And there are other problems when you get to super high…
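A back-of-envelope calculation shows why those nanoamp figures matter. Using textbook-style round numbers (my assumptions, not values quoted by the guests), the charge needed to depolarize a neuron from rest to threshold is Q = C·ΔV, and the time an opsin photocurrent must flow is Q/I:

```python
# How long must an opsin photocurrent flow to take a neuron from rest to
# spike threshold?  Round illustrative numbers, ignoring leak conductance
# and the opsin's own opening/closing kinetics.
C_m = 100e-12  # membrane capacitance: ~100 pF for a cortical pyramidal cell
dV = 20e-3     # rest-to-threshold depolarization: ~20 mV
Q = C_m * dV   # charge required: 2 pC

for I_nA in (1.0, 3.0, 10.0):
    t_ms = Q / (I_nA * 1e-9) * 1e3
    print(f"{I_nA:.0f} nA -> {t_ms:.1f} ms to threshold")
# 1 nA -> 2.0 ms to threshold
# 3 nA -> 0.7 ms to threshold
# 10 nA -> 0.2 ms to threshold
```

So a one-to-three-nanoamp photocurrent puts you at roughly millisecond charging times, which is exactly the regime where the speed-versus-conductance trade-off, and the dream of a 10-nanoamp opsin, becomes interesting.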

Matt Angle:

I guess it's hard to build a channel protein that has a huge amount of conductance and can still be selective. I mean, you could build an optically gated gap junction and just open a hole in the cell, but if you want ion selectivity, that almost always, essentially, it's a…

Ian Oldenburg:

You need to be limited by the size of the ions, so that you coordinate them well.

Matt Angle:

Yeah.

Adam Packer:

I mean, while we're talking opsins, I think what we really need on the opsin front is something that's more spectrally exact: something that responds to precisely the color you want and not another, so that we can have multiplexed channels for reading and writing. And when I say writing, I mean turning on and turning off, because we'd like to be able to do both in the same experiment.

Matt Angle:

You're fighting a losing battle if you want to use multiphoton, though, because multiphoton is, by its nature, very broadly absorbing. It's very hard.

Adam Packer:

Yeah. No, it's going to be a challenge, but I think the more we tighten up the spectra, the better we're going to get, because then we can do this multiplexing better. That's what we're running into now: the stronger and stronger red opsins start to respond more and more when you try to image at the same time, which is of course exactly what we're trying to do. We're trying to read and write with light, all-optical interrogation as we like to call it. So, to me, there's a fundamental limit there. We're also always pushing the SLM manufacturers to build us bigger SLMs with smaller, more exact pixels, so we can shoot light over larger areas while maintaining precision and speed at the same time. And we want bigger and better lasers, so we can do more.

I think we're not really at the limits of the amount of power we can dump into the brain, because usually we want to inject a pattern of activity for a short period of time and then see what happens. We're not leaving the light on all the time. The duty cycle is low: the light might only be on for one-tenth or one-hundredth of the time of the experiment, so you can actually afford to dump in more. So I think we're not really at those limits yet. But I would argue that what we really need is to know which neurons to target. That's the fundamental limit on designing a good experiment to test these various theories of what actually matters to brain dynamics and generating motor action, something like that.
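Adam's duty-cycle point reduces to one line of arithmetic: time-averaged power into the tissue is peak power scaled by the fraction of time the light is on. A sketch with hypothetical numbers:

```python
# Average power into tissue = peak stimulation power x duty cycle.
# All numbers are hypothetical, for illustration only.
mw_per_target = 6.0    # per-neuron stimulation power
n_targets = 100        # neurons hit in one pattern
burst_ms = 10.0        # pattern duration
period_ms = 1000.0     # one pattern per second -> 1% duty cycle

peak_mw = mw_per_target * n_targets
avg_mw = peak_mw * (burst_ms / period_ms)
print(peak_mw, avg_mw)  # 600 mW peak, but only ~6 mW time-averaged
```

That hundred-fold gap between peak and average is why brief patterned stimulation can be pushed well beyond what continuous illumination would permit before heating becomes the limit.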

Matt Kaufman:

I now have concerns that Adam is reading my grant notes.

Ian Oldenburg:

Well, I totally agree with you, Adam, that what we need is better neurons, but I'd also say I'd rather…

Matt Angle:

Matt, why aren't you doing your job? Here we have the genetic engineering community making great opsins, you have Ian and Adam delivering the light, and you're just seemingly sitting on your hands.

Matt Kaufman:

Someone wants to give me money for a holography system, I am all in.

Ian Oldenburg:

If I got the money, I'd give it to you.

Adam Packer:

But this is…

Matt Angle:

Ian, you can just give him the holography system.

Ian Oldenburg:

Yeah.

Adam Packer:

This is actually a key point. One of the things we've tried to get money for, and had trouble with, is dissemination. We just need to get these tools into everybody's hands, and unfortunately, it being cutting-edge stuff, it's really expensive, right? So we've collaborated with companies to get it into their microscopes, so they can disseminate it industrially and everyone's not trying to build their own. But the other thing you need is the protocols for how to run these experiments. It becomes extraordinarily complex: which neurons to target at which millisecond, and coordinating all the equipment you need to do that, the laser, the SLM, the microscope, all of this. So we've got our code on GitHub. We're trying to pump it out there and get people to use it.

But the other big problem is expressing this stuff. People don't always talk about this, but Matt alluded to it earlier: make sure the neurons are healthy. You've got to get these things that are not endogenous to the tissue, the opsin and the indicator, in there, and the cells need to be okay. So we need people developing the transgenics, so that you can just get a mouse from Jackson Labs or the Allen Institute or whomever, a mouse that comes pre-built with the stuff you need to read and write neural activity. You get the microscope from the company, you put them together, and you can do your experiment. It's not that plug-and-play yet. And that's actually what's holding back the people who have great ideas about what nodes we need to control, who the announcer is, and what language he or she speaks. We've got to get it into their hands so they can test their ideas.

Matt Angle:

A pattern that's come up in a few of these podcasts is the modes of collaboration between scientists and engineers of different disciplines. And I think this is a great example, because you have some non-trivial genetic engineering and gene expression to work out, you have advanced nonlinear optics, which is also non-trivial to build in the lab, and then you have this control theory, which is just not part of the training of most neuroscientists. What have you found in your own collaborations? Is it best when you have people working in their lanes? Do you feel like everyone has to learn everything? What's worked, and what hasn't, in your experience, to get a big, ambitious project like this working?

Ian Oldenburg:

I can answer from my experience. I've found that hand-in-hand collaboration has been the most successful, and the example I like to use is the creation of our optical delivery system. We had a very talented postdoc, Nico Pegard, who now runs his lab at UNC. When he started in the Adesnik Lab, he didn't know the head from the tail of a fish. He literally made this mistake: he started imaging the tail of the fish and couldn't understand why he couldn't find neurons. But he knows his optics really, really well. So we're talking about holography, we're talking about how to make it work. Do we do the spiral scanning, or everything at once?

Do we do the Emiliani tricks, or whatever? And his first reaction is: what you want is impossible. What you want physically does not work. There's no solution, go home, give up. But then we keep talking, and certain things come up, like, "Okay, how big is a neuron?" Well, we don't actually care that neurons are each a different size; they're all in the same range. So his constraint of "you need to be able to make a spot that's just the right size for each neuron" isn't relevant. We can use the same spot for everything. And the same went for the problems I was concerned about: "Well, what we do need is a different amount of light for each cell, because each cell has a different requirement."

And Nico's like, "Oh, that's not a problem at all. We can do this and give a different amount of power to everything." Those are somewhat trivial examples, but they came up over and over again over the course of that collaboration. The things that he thought were impossible, I could write off or fix in my own way, and the things that I thought were impossible, he could write off or fix in his own way. And so we were able to advance something that neither one of us could have created by ourselves. And that only really could happen, not because we exchanged emails once a month, as happens with some of my other collaborations, but because we were sitting next to each other, we would go out to the bar after work sometimes, and we generally talked about the project in many different ways. So I'm a strong advocate of these hand-in-hand collaborations, where you're really there with the other person, working on the same goal.

Matt Kaufman:

I think there also has to be skin in the game. If you're working with a theorist and they've got 15 problems they're working on and yours is a little bit stuck, their attention is going to go to the other 14. You need them to care that your project succeeds. Same thing if you're the analytics person collaborating with an experimentalist: if they're not that motivated to get the data, or if some other project is hot right now... You just have to have people's priorities aligned.

Matt Kaufman:

I think it's also important that everybody has at least an outline of an understanding of the other parts of the project, exactly for the reason Ian said: when problems come up, you want everyone to understand them well enough that someone can jump in and say, "Oh, I can definitely do that. It'll involve this compromise. Can we accept that compromise?" Everybody will have a different idea of what's most easily addressed experimentally or analytically or optically or behaviorally or genetically, whatever. If everybody has at least a dim outline of the project, it gets a lot easier to figure out what you should be solving and taking off another collaborator's plate.

Adam Packer:

And I think this is what makes neuroscience really hard, because you have to be a little bit of a jack of all trades and a master of one. So it's a little different from the usual saying: jack of all trades and master of none is what often happens. The jack-of-all-trades part in neuroscience is obvious: you need to know anatomy and genetics, you need to understand the electrical activity of neurons, and you might need to understand, depending on what problem you're after, a little bit of dynamics and things like that. Then, depending on what techniques you're using, if you're a theoretician you're going to need some hardcore maths, and if you're a microscopist you're going to need to understand some optics. So you have to be able to speak multiple languages, and I think it's also critical that each person brings a certain kind of extreme expertise in one area.

Adam Packer:

I've worked with theoreticians who have deep mathematical intuition, but maybe not as much about biology or anatomy, and it doesn't really matter. Or they don't understand what we can do with light. But again, they don't need to know that. They just need to know: we can hit this many neurons with this many spikes at this time; how would you test this particular theory of how you think the brain works, mathematically? That's where you need to get together. And that's one thing I always found daunting in the beginning: how am I ever possibly going to understand everything there is to know about how an action potential is generated, and also nonlinear optics, and also low-dimensional manifolds of high-dimensional activity?

But you choose one and you specialize, and then you make sure to have these good hand-in-glove collaborations, like both Ian and Matt mentioned, so you bring those different kinds of expertise to bear on a particular problem.

Matt Angle:

And suppose there are people sitting in their silos today, watching this and thinking, "Oh man, I have a piece of this. I'm a deep expert in one area, but I'd love to learn. Matt, I would love to learn more about how the brain can be modeled as a dynamical system." Do you recommend an entry point for them, so they can get to a level where they could approach you as a collaborator?

Matt Kaufman:

Sure. There are several good reviews on dynamical systems in neuroscience that have come out of the lab where I did my PhD, Krishna Shenoy's lab, as well as a couple of things from John Cunningham and Mark Churchland and Byron Yu. As more and more people have become interested in this, there've been a series of nice reviews that are accessible entry points. But you also just have to figure out: what is your question? What is it you want to do with this? There's a little bit of a problem of–

Matt Angle:

What if someone reads "Kalman filter" and their head starts to explode, and they have to go to Wikipedia to figure out what it is? What if someone's at the level where they understand the pictures in the review visually, but they want to learn a little more of the basics? Is there a place they could go, or... ?

Matt Kaufman:

If it helps, that's me too. If you asked me to derive a Kalman filter at the board, I'd be in trouble. Not going to happen.

Matt Angle:

That's really helpful to know.

Adam Packer:

Yeah. But I think Wikipedia is actually a great place for this. You'll get to a point, on some nonlinear optics topics that get really deep, where you'll find a limit where it's no longer really helpful. I haven't looked at it recently, but the Wikipedia article on two-photon excitation is probably pretty excellent by now.

Matt Kaufman:

It's a lot better for some things than others. For anything mathematical, actually, I often start with Wikipedia, but it's often pretty inaccessible. You'll actually do much better googling for course notes. Somebody will have covered it in a course and written good course notes on it, and that can be a really great entry point to almost any mathematical topic.

Matt Angle:

Holography, Ian, if someone's thinking, "I want to up my game, I want to learn about holography," where should they start?

Ian Oldenburg:

So, definitely, the Emiliani Lab runs their holography course every year, which I think is pretty solid. I haven't attended it personally, but I know some people who have, and it was good. Looks like Adam might be getting a book that might save me.

Adam Packer:

Yeah, I was just going to check. I have a great book on optics that I really like, by Eugene Hecht, H-E-C-H-T. I can send you the details for the show notes on the podcast. I was just going to see what he has on holography, because usually he gives an intuition, he gives an example, like an image of what it actually looks like from some experiment, and then he often goes a bit deeper into the mathematical derivations behind it. So you can go as deep as you want, and the examples are often really good. So that's something I would recommend: get a good textbook, which I know sounds kind of old school, especially since I also said to look at Wikipedia, but you kind of need both.

Ian Oldenburg:

I've also learned about new fields by going to Coursera and other really college-level online classes. All of my optics background is self-taught; it's learned in various labs, and I've never taken an optics class. So I really wanted to know: what do I not know? I went, and I spent most of my time on fast-forward, but I played through a Coursera course on basic optics and Fourier optics and many other things.

And it's actually surprisingly good, especially if you have some other experience; it gives you a good foundation and fills in some of the gaps. It's like: oh, I know I need to do this for this reason, but here's the mathematical derivation of it, or here's the formalized way to say it, or even just here's the vocabulary you can use to say what you mean so that someone else will understand. So even if you are self-taught like I am, just having the words, "this is in that plane," or "this is this shape of wave propagation," to communicate what you might already know is really useful. And I went to Coursera to help me fill in some of those gaps.

Matt Angle:

Thanks. We're getting toward the end of this, so as a closer: suppose the capability existed to access 1% of cells in any brain area of your choice, and you're the only person with access to it. There's a lot of responsibility on your shoulders. What would you do? It could be a research application or a therapeutic application. What would you do if you were the only person who could do this, and humanity was relying on you to get the best bang for its buck?

Adam Packer:

Hold on a second. This is a little bit too open-ended. So you can choose any brain area and you get to read or write, at whatever level of precision you choose, 1% of the neurons and how much money do you have to do that?

Matt Angle:

Let's effectively say this is a sort of Manhattan Project-style thing, so-

Adam Packer:

Oh, yeah, bring it! Okay.

Matt Angle:

So, yeah, you can-

Adam Packer:

Now I can answer.

Matt Angle:

Yep.

Adam Packer:

Okay. I'm ready. Do you mind if I jump in first?

Matt Angle:

Sure.

Adam Packer:

I want the claustrum. This is a part of the brain we're studying in the lab, so we're already trying to get at this. It's a part of the brain that, in humans, is the sheet buried between the external and extreme capsules. It's underneath the insular cortex, it's bilateral, it's got a funky shape, and, to make a bold but I think pretty well-supported claim, we have no idea really what it does.

We don't really have a great theory of what it does because, to begin with, we don't really have any humans who have lost exactly and precisely their claustrum on both sides. There are some hints, some infections that have maybe gotten there, and strange things happen, but it's different in every case and it's very rare. So we really don't have a good idea what it does. But we're starting to get some tractability with these modern approaches, similar to the ones we've been talking about today, whether it's electrophysiology, calcium imaging, or optogenetics, to record from and manipulate this area.

And if I had an infinite amount of money, what I would love to do is, first of all, record those neurons while the animal does everything, everything from birth to death. I want to put it through an Olympics course. It's got to do the four Fs: fighting, feeding, and there are a few others. And I want to see what happens to those neurons when that happens. Since we don't have Manhattan Project-level money, we've made some hypotheses about good things to try, and with our limited pot of money we're going through some of them.

Matt Angle:

For such an unlimited budget and potential for impact, that's a very risky bet. It could turn out that your favorite area of the brain is just involved in sphincter control during farting. And that might've been a-

Adam Packer:

Hey, you know what, at least then we'd know. I'd die happy. If that's what it does, that's fine. You know, it's-

Matt Kaufman:

There's clinical applications of that.

Adam Packer:

Yeah, control that. But look, the other thing I didn't mention about this area: it's highly connected to many other brain areas. Maybe it's this node we've been talking about. Maybe it's the announcer that controls the state of... you don't know until you check. It could be this keystone structure, and for me that's enough. Even if it turns out to be some vestigial thalamus that's no longer needed (and maybe that's true), evolutionarily it's pretty deep. It's pretty conserved from reptiles to humans. It's got to be doing something, and I'm just dying to know.

Matt Angle:

If it was involved in controlling the appendix, the neural function of the appendix…

Adam Packer:

That's what keeps me up at night. It could be vestigial, but hey, you know what, at least we'll know.

Matt Angle:

Matt, what about you?

Matt Kaufman:

All right. I'm a little fuzzy on exactly what the limits of the technology are for the purposes of this thought experiment, so I'm just going to pick some for myself. The first thing I want to do is push this notion of dynamics until it breaks.

So I want to simultaneously write in an image in a bunch of different areas. It doesn't have to be at the same time; this can be one at a time. I want to write in patterns that are consistent with activity we see and patterns that are inconsistent with activity we see. I want to understand whether low-dimensional dynamics are something we see because we're just not sampling enough behavior, or whether this is really something that is wired into the circuit. I want to play with sparse versus dense stimulation and try to understand what the feedback mechanisms are that control how active cortex is and whether these dynamical patterns are limited to being sparse. I want to mess with that juxtaposition of theories that I mentioned earlier, this idea of assemblies versus sequences versus dynamics: I want to test them all out and see what fits. I want to try pushing activity in ways that hopefully will trigger execution of whatever the animal is currently thinking of doing or preparing to do. I want to tag projection neurons and figure out how we get patterns into those projections that other areas can use. I want to do simultaneous recording and activation in different areas, push on these output-potent and output-null dimensions that I had looked at previously, and that other groups have followed up on, and see how areas coordinate their activity. I've got a lot to do in cortex.
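
As a toy illustration of the output-potent and output-null idea: given a hypothetical linear readout `W` from a neural population to some output (e.g. muscles), activity can be split into a component the readout sees and a component it cannot. This is a generic linear-algebra sketch, not the analysis from the papers linked above; `W`, the dimensions, and the random activity are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 3                      # neurons, output dimensions (illustrative)
W = rng.standard_normal((M, N))   # assumed linear readout matrix
x = rng.standard_normal(N)        # one sample of population activity

# Output-potent subspace = row space of W; output-null = its orthogonal complement.
P_potent = W.T @ np.linalg.pinv(W.T)   # orthogonal projector onto the row space of W
x_potent = P_potent @ x
x_null = x - x_potent

# The null component is invisible to the readout,
# while the potent component carries the full output.
print(np.allclose(W @ x_null, 0, atol=1e-9))     # True
print(np.allclose(W @ x_potent, W @ x, atol=1e-9))  # True
```

In this picture, "preparation without movement" corresponds to activity that lives mostly in the null component, so the downstream readout stays quiet.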

Adam Packer:

You want to do all of neuroscience.

Matt Kaufman:

Yeah.

Adam Packer:

You just take all the money being fed into it worldwide and give it to you and we'll be done.

Matt Kaufman:

I don't know about done, but I'll be happy and busy. There's so many questions you can ask and hopefully answer if you've got the ability to really write in what you want to write in and to read it out and to monitor the animal's behavior. Like does the dimensionality of the neural manifold, does the structure of the neural manifold, do these things change when you train an animal to do one thing or another? There's really just no end of questions and looking at differences between sensory and motor and association areas. There's no end for me.

Ian Oldenburg:

Me too. I agree all the same things.

Matt Angle:

What if you could have all of this capability with a tiny grain of sand, and you could do it in humans, and there were no regulatory restrictions, would you still carry on with your existing research programs or would you think, "No, there's something moving toward a therapy that I would want this for?"

Adam Packer:

Locked-in syndrome, that's what I would go after. That strikes me as the most horrible thing that you could possibly imagine happening to... I mean, there's all sorts of horrible things; it struck me as one of the most horrible things. If you could understand how to unlock that, if you could... You say you can take control of 1% of neurons, I would take some of those patients and I would try every 1%, see if I can unlock them. Think about it, that's just... Yeah, that's what I would go after. Of all the debilitating things I can imagine, that's one of the worst.

Adam Packer:

It's maybe not the biggest. I guess you could also go for the biggest. There are not all that many locked-in patients, so thinking about the number of patients you could help might be another good way of going about it.

Ian Oldenburg:

If we can get 1% of neurons and there's less than 1% of people who have locked-in syndrome, we can control all of their neurons, right? That's the-

Adam Packer:

There we go.

Ian Oldenburg:

For therapeutics, I still don't think we know enough of what to do to really create good therapeutics. So the one thing that I would add on to all of the great ideas that Matt just had is that there's a huge world in how nuclei talk to each other. And so one that I'm interested in is understanding how the cortex in general talks to the basal ganglia and the subcortical structures. So I think there's a lot of stuff to be done understanding how neural codes talk to other parts of the brain.

So I did my PhD in a basal ganglia lab. We would study the striatum a lot. The striatum is critically involved in a huge number of diseases, you've got everything from schizophrenia to Parkinson's disease and many in between. And one of the big limitations on understanding what the striatum, what the basal ganglia is even doing is that we don't have good control over what's talking to it. We have neither control over what's going into it nor control over what comes out of it. So you need approaches that allow you to take direct control and take control at the level in which it's written. We need to control activity at its natural endogenous scales and manipulate it to be able to say, "Okay, this is what's important. This is the receptive field of a sub-cortical little structure. This is what this cell actually cares about."

And these are actually tractable problems. We don't need a Manhattan Project-level thing. We need a lot of concerted effort, many different techniques, theorists, experimentalists, optics people, optogenetics people, whatever; we need many people working together. But it's not out of the range of what's doable to say, "Okay, we want to understand this nucleus. We want to control all of the inputs to this nucleus at cellular resolution and millisecond precision, so we can say, 'Okay, this cell cares about that.'" We can write the transfer function between cortex and striatum. We can do that with technologies that exist now. We just need to do it. So that would be step one, and then multiplex out to every nucleus from there.
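
In its simplest form, "writing the transfer function" could mean fitting a mapping from controlled input patterns to downstream responses. The sketch below is purely illustrative, a linear model fit by least squares on simulated data; the names, dimensions, and the linearity assumption are mine, not a claim about how a cortico-striatal mapping would actually be estimated.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ctx, n_str, n_trials = 40, 20, 500   # cortical inputs, striatal cells, stim trials
A_true = rng.standard_normal((n_str, n_ctx)) / np.sqrt(n_ctx)  # "ground truth" mapping

X = rng.standard_normal((n_trials, n_ctx))                       # controlled input patterns
Y = X @ A_true.T + 0.1 * rng.standard_normal((n_trials, n_str))  # noisy downstream responses

# Least-squares estimate of the transfer function: Y ≈ X @ A_hat.T
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Recovery error is noise-limited when the inputs span the space well.
print(np.abs(A_hat - A_true).max())
```

The point of having cellular-resolution, millisecond-precision control of the inputs is exactly that it lets you choose `X` yourself, rather than waiting for the brain to sample it for you.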

Adam Packer:

And if we're going after humans, we can go after what might be circuitopathies, so some pathology of the circuit. So if you think that schizophrenia or autism has something to do with these dynamics we've been talking about, it's not just that there's something wrong with one particular receptor or one particular neuron type, but that that leads to a fundamental dynamics-level problem that the unfolding of activity across different neurons across time does not happen in the optimal sort of way. If we can do all these technologies in humans, that would be, I think, a really exciting one to go after because you could directly test these ideas by trying to see how these dynamics unfold in patients that either have that particular disease or not.

Matt Kaufman:

All right, let me float a few things on "if we could do humans." One of them is something being done by a couple of collaborators of mine, Sliman Bensmaia and Nico Hatsopoulos, where for brain-computer interfaces, it turns out that sensory feedback is critical normally to your motor control, and it turns out that there's only so well you can do if you can't implement that in a brain-computer interface for a paralyzed person. So I'm sure they would love to take this ability to write in and go ahead and use that for sophisticated sensory feedback. That's not my focus, but in terms of clinical applications, I think that's probably one of the better ones.

I think that there is a real limitation of our basic research knowledge. So a human brain has something like a hundred billion neurons. If you take just 10,000 neurons and think of each one as being either on or off (which isn't really true; neurons can be more active or less active), the number of possible states is 2 to the 10,000, which is about 10 to the 3,000: a 1 followed by roughly three thousand zeros. For comparison, a googol is only a 1 followed by a hundred zeros. It's a lot. It's very hard to explore that space. You really do need to have theory that drives you to say, "Let's try this specific thing," because you just can't try out all the combinations; there are too many. And I think you could do a great deal of basic research much the way it's done in a mouse, just do it in a human, where it has greater relevance to us and where you can ask humans to do much, much more sophisticated tasks.
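
A quick sanity check on that arithmetic (Python handles arbitrarily large integers natively):

```python
# 10,000 binary neurons give 2**10000 possible population states.
n_states = 2 ** 10_000
n_digits = len(str(n_states))   # number of decimal digits
print(n_digits)                 # 3011, i.e. 2**10000 ≈ 10**3010
```

Even sampling one state per nanosecond since the Big Bang would cover a vanishingly small fraction of that space, which is the point about needing theory to guide the experiments.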

The other thing I want to float, if you had humans, is: man, would I love to work on language. There are systems that people work on. There's this Costa Rican singing rodent that Mike Long works on. There's songbird, of course, where there are at least a dozen really spectacular researchers in that world. But human language marries perhaps the finest motor control in the animal kingdom (it turns out just controlling your vocal apparatus is phenomenally difficult) with cognition and motor sequencing and things we think of as being fundamental to what makes us human. And if you could really get in there and both read and write in a human, language would be a phenomenal problem to work on, both on the production side and on the semantic side. If you could write in the understanding of an idea: ask someone to think about something, look at the engram for how their brain is encoding that, write back in what you've observed in different areas, and see for which one they say, "Oh yeah, now I am thinking of a beached whale." That would be neat.

Adam Packer:

I would use that. I agree. I would use language as the report, because this is often what we have trouble understanding in our mice. So we tickle some neurons in the brain and we try to ask the mouse, "What did you feel?" And the mouse either licks at a spout or doesn't; that's all we get. And sure, you can put in two spouts, you can put in a wheel, you can get all sorts of slightly more elaborate feedback, but you can't ask them what it felt like. And I think that would be just a beautiful thing to have in a human.

Ian Oldenburg:

I agree.

Adam Packer:

By the way, Mike Long is the first person I ever saw record action potentials from a live neuron, and you should get him to do a podcast if you can. He's great. He's a fantastic researcher. He sort of inspired me in some small part to continue in neuroscience.

Matt Angle:

We haven't talked about songbirds. I think we could do a special on songbirds as a model for language. I think that would be quite interesting.

Matt Kaufman:

Yeah. Get Eddie Chang while you're at it, who actually does some language stuff in humans. Obviously not with full-access calcium imaging and optical stimulation, but…

Matt Angle:

You know, we asked Eddie to be on the podcast, but he didn't come.

Adam Packer:

What about Tim Gardner? Is he at Neuralink now?

Matt Angle:

Yeah, Tim Gardner. Michale Fee would maybe be good as well.

Adam Packer:

Yeah.

Matt Angle:

Well, Matt, Ian and Adam, thank you so much for your time.

Matt Kaufman:

This was fun. My pleasure.

Adam Packer:

Thank you for having us.

Matt Kaufman:

Yeah.

Adam Packer:

This was great. Enjoyed it.
