We’re back with Part II of our two-part series on Connectomics!
In part one we speculated on the legal and ethical implications of emerging technologies in the connectomics field. In part two, we don our lab coats and take a deep dive into the latest research tools, from fixation protocols for the preservation of neural tissue, to multimodal imaging techniques, to the machine intelligence designed to interpret massive data sets and reconstruct the vast neural circuits that make up the connectome.
Our guests are Kenneth Hayworth, PhD, Robert McIntyre, and Jeremy Maitin-Shepard, PhD.
In this episode, Ken and Robert from part one return to the pub, and we are also joined by Jeremy Maitin-Shepard, an engineer and researcher at Google, who shares insights into some of the machine intelligence methods being used to decode previously uncharted neural networks. Check out Jeremy’s recent paper on bioRxiv, as well as his published work at Google.
If you missed part one, you can listen and explore the show notes here. Cheers!
Show Notes:
0:00 | Intro
1:03 | Kenneth Hayworth, PhD
1:12 | Robert McIntyre, CEO, Nectome
1:17 | Jeremy Maitin-Shepard, PhD
1:51 | Setting the record straight
3:09 | The nucleotide sequence of bacteriophage φX174
4:22 | Frozen Zoo at San Diego Zoo
12:01 | Glutaraldehyde and reduction techniques for immunolabeling
17:39 | SWITCH Framework
19:14 | Population Responses in V1 Encode Different Figures by Response Amplitude
Permeabilization-free en bloc immunohistochemistry for correlative microscopy
19:57 | Synaptic Signaling in Learning and Memory
Structure and function of a neocortical synapse
Engineering a memory with LTD and LTP
Synapse-specific representation of the identity of overlapping memory engrams
20:28 | Ultrastructure of Dendritic Spines
Structure–stability–function relationships of dendritic spines
24:25 | Reconstructing the connectome
24:32 | Connectomics Research Team at Google
24:55 | Google x HHMI: Releasing the Drosophila Hemibrain Connectome
28:38 | Serial Block-Face Scanning Electron Microscopy
29:22 | Automated Serial Sections to Tape
29:45 | Mapping connections in mouse neocortex
30:59 | A connectome and analysis of the adult Drosophila central brain
32:14 | Expansion Microscopy
34:37 | The future of connectomics
49:49 | Mice and rats achieve similar levels of performance in an adaptive decision-making task
Want More?
Follow Neurotech Pub on Twitter
Matt Angle:
Welcome to Neurotech Pub. We're back with part 2 of our two-part series on connectomics. If you haven't listened to part 1, stop here and go back to part 1. In part 1 of this series, we talked about the legal and ethical ramifications of brain preservation. What would it mean if you really could preserve a brain just before or just after death, and then allow for essentially a reboot of consciousness based on the preserved brain data? Today, we're going to talk about the actual state of brain preservation and the field of connectomics that involves mapping out brain circuits based on preserved brain data. As you'll see, there's quite a gap between the ideas that we mused upon in the first episode and the state of brain preservation and connectomics today, and that's what we're going to explore in this episode.
Two guests that you'll recognize from episode one are Kenneth Hayworth, the president and co-founder of the Brain Preservation Foundation and senior scientist at the Howard Hughes Medical Institute, Janelia Farm Research Campus, and Robert McIntyre, the CEO of Nectome. Also joining us today is Jeremy Maitin-Shepard, a computer scientist at Google Research and an expert in the reconstruction of neural circuits from volumetric electron microscopy data.
If you're currently listening to this episode as a podcast, remember that you can visit paradromics.com/podcast for detailed show notes, references, and more. Now, without further ado, I'm going to get out of the way and let you listen to these experts.
Kenneth Hayworth:
I'm sorry, I have this knee-jerk reaction to somebody jumping into a podcast like this. Part of the podcast is talking about preserving dead bodies, and part of the podcast is talking about how you actually do the imaging. And how is somebody not supposed to put those two together and say, "Oh, they're saying that bringing somebody back is going to be done by electron microscopy and the slicing methods that we have today"? Whoever you are out there who is going to chop a little bit here and chop a little bit there and claim we're saying we're going to upload people by electron microscopy 10 years from now: you need to understand that that is absolutely not what is being said. We have some enthusiasts who are doing connectomics, using that technology to really push forward neuroscience and also figure out how well we can preserve brains today, and we have some speculation about bringing people back 200 years from now.
Matt Angle:
Ken, you've articulated it very well. So Robert, is there something that you would want to say with respect to your work to add some nuance...
Robert McIntyre:
Yeah, okay. I think the best analogy for what we're doing here is genomics in the '70s. We first discovered the structure of DNA in the late '50s, and we knew chemically what DNA was. And once we understood chemically how DNA was held together, anybody who really understood that chemistry could tell you that liquid nitrogen will preserve the linear structure of DNA. It's not going to somehow permute the nucleotide bases. It's not going to scramble that information.
And so I think the key reason why this whole idea of preservation is reasonable and should be talked about now is that it's almost always easier to preserve complex information than it is to do anything with it. We first became really capable of preserving genomes of species in the late '50s, and you can make a very, very solid scientific argument that this archives the information of heredity. You could make that argument while still remaining almost completely ignorant of what DNA actually means in a sensible way. But we understood enough about the DNA. We knew we could preserve it. We knew we could archive it.
And so there was a group called the San Diego Frozen Zoo that started in 1972, just 12 years after we really understood enough about DNA to preserve this stuff, and they said, "Okay, let's just take species that are going extinct, and other valuable genetic data, and we'll just preserve it." And they did that, to their credit. And they were criticized: "Well, look, a human genome is one and a half gigabytes of data, and one and a half gigabytes of data is just impossible." The idea that you could have computer memory that's an entire gigabyte in the '70s is just absurd. It would be hundreds of millions of dollars. It would take hundreds of years to build. It's just a non-starter. And so you could criticize the guys preserving DNA, saying, "Well, you'll never even be able to store this DNA digitally, let alone read it." And it would be a pretty dumb criticism, because clearly biology has no problem storing a gigabyte of data inside a single cell, so it's just a matter of us not knowing how to do it. It's not really a fundamental constraint.
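For a rough sense of where a gigabyte-scale genome figure comes from, here is a back-of-envelope sketch in Python (an editor's illustration, not part of the conversation; the two encodings below are assumptions):

# Rough arithmetic behind the "a human genome is about a gigabyte" claim.
# ~3.1 billion base pairs is the standard estimate; the 2-bit and 1-byte
# encodings are illustrative assumptions, not from the episode.
base_pairs = 3.1e9

packed_bytes = base_pairs * 2 / 8      # A/C/G/T packed at 2 bits per base
text_bytes = base_pairs * 1            # one ASCII character per base

print(f"Packed (2 bits/base):     ~{packed_bytes / 1e9:.2f} GB")
print(f"Plain text (1 byte/base): ~{text_bytes / 1e9:.2f} GB")
# Either way, the genome lands around a gigabyte or a few: an unthinkable
# amount of computer memory in 1972, yet carried by every cell.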
And so I think we're at basically the same point in neuroscience today as we were with genetics in 1972. We don't really understand a lot of the high-level stuff about how the brain works. Even something that seems as simple as remembering song lyrics: how do you go from one verse to the next verse? We don't know. The theories on that are not very good. The high-level ideas about how our memories are created, what the meaning is behind synaptic connections, we don't know that very well. We're making progress on it, but it's still a mystery. But when it comes to the low level, what are the building blocks of memory? How does that work biochemically? At the level of a single synapse, how is it that a synapse changes to store information?
That, we've worked out really, really well. We're at the point where the debates around this are minutiae of minutiae. Essentially, we know the ink in which memories are written in the brain, and that's synapses. And all of the theories of memory, the core theories of memory in neuroscience and the fringe theories of memory in neuroscience, are compatible with the idea of chemical fixation preserving memory, preserving all the information that comprises us. And so that's why it makes sense to talk about it now.
Matt Angle:
That's a great framing for this discussion. You've thrown down the gauntlet. You've said you think we're in a situation right now where we can preserve information in the brain with fidelity analogous to preserving DNA under liquid nitrogen. I think what would be really interesting is if we could walk through the process of how we chemically fix a brain, how we prepare it for imaging, and where we are right now in our confidence that we can preserve all the information that we want. We can start there, because I take your point that from your perspective, the most important thing is to lock down the information. And then I think we should also have a discussion about where we currently are in our ability to read it out.
Robert McIntyre:
Yeah, because that's important. It's like the San Diego Frozen Zoo, and this discussion is happening in 1972, so it's very fair to ask, "How do you know you're preserving the DNA?" It's much more speculative to ask, "Well, okay, fine. But how exactly will we store a genome 50 years from now?" If I tried to say, "Oh, this is how the design of memory storage will be," I might say, "Well, they've got this mercury sound wave thing that seems like it could scale up a lot. Maybe that's how they'll store it." Or maybe it'll be spinning disks, and I'll be wrong. I'll be completely wrong about how the memory will be stored. But it doesn't matter. I'm right about how the DNA is preserved. And later, we'll figure out how to store it.
Matt Angle:
Who would want to comment on the assertion that aldehyde-based fixation is sufficient, or insufficient, to describe memories and brain function? Ken earlier put out a position that it may not be sufficient. I could think of–
Matt Angle:
... for instance a number of things that... Go ahead, Ken.
Kenneth Hayworth:
The connectome alone may not be sufficient.
Matt Angle:
Oh, I see.
Kenneth Hayworth:
I think to a certain extent the main thing the listener out there will bring to the table is skepticism that this has any possibility of working at all. Robert has done a good job of laying out many different objections: the objections on the preservation side, and the objections that we don't even know enough to know what would need to be preserved in the brain. Those are very concrete things that we should be able to address. Speculation about the future is, just as Robert said, less important.
So in any case, maybe 95% of neuroscience models are models that are talking about the brain from the perspective of the brain is processing information because the brain is filled with neurons which are excitable cells. Those excitable cells are excitable because they have ion channels that will open up under different circumstances, either with a voltage change of the membrane or with a neurotransmitter that is locking onto them. And the brain computes by those excitable cells talking to other excitable cells. Because if you want to build something as big as we are and have it respond in a few hundred milliseconds to something out there in the environment, you need to use those excitable cells. You don't have time to run off to the DNA and do some protein synthesis and then have that diffuse out of the nucleus into something. That would just take too long.
If I look at somebody across the room and recognize who they are and then remember that I saw them at a conference before, this happens in a few seconds. There is no doubt in the neuroscience community that that is because of excitable cells talking to other excitable cells. So does glutaraldehyde preserve the things that are important to that type of structure? At one level, we use glutaraldehyde to do preservation for electron microscopy and for a variety of molecular studies of memory and function. So there's a real "duh" there: neuroscientists use this when they want to study those functions. The other thing is that if you look at the classes of molecules that are involved, ion channels are proteins. The things that define excitable cells, the receptors and other things, the structure, the connectivity, the molecules that are thought to be really involved, are all molecules that are not only known to be preserved by aldehyde fixation, but that is exactly how we do the studies on them today. Not all, but most of the studies on them involve aldehyde fixation.
Matt Angle:
Can I dig in a little bit here with respect to aldehyde fixation? If I want to fix a sample for ultrastructure, I'm going to use a high percentage of glutaraldehyde, because that gets the best ultrastructural preservation; that's commonly what's used for dense EM reconstruction. If I want to do immunohistochemistry, often I lose the epitopes. And so I'm wondering…
Kenneth Hayworth:
But you don't lose the epitopes. They are–
Robert McIntyre:
You can't access the epitopes.
Kenneth Hayworth:
You can't access them, that's right. The cross-linking makes it more difficult. Although there are plenty of studies that Robert and I can send you that involve immunostaining of glutaraldehyde fixed tissue.
Robert McIntyre:
Here's a useful thing to think about, because this immunohistochemistry point is important. If you've ever done some of these laboratory protocols, you'll notice it's often fiendishly complex to properly label proteins in tissue. And it's even more complex to, for example, get mRNA out of tissue. And so sometimes people who are familiar with that, who have struggled with it for a long time, say, "Well, how can you fix something with glutaraldehyde? Haven't you lost that signal?" Because that's what you see. If you have a piece of brain tissue and you apply an antibody label to it to try and find some proteins, if you fixed it with glutaraldehyde, you're going to get noise. You're not going to see very much. Whereas if you're very careful, you fix it with formaldehyde and do other kinds of black magic to it, maybe you get a signal, but it depends on the particular protein.
If you said that the reason you don't get a signal is that the proteins are gone, you'd simply be wrong about that. The proteins are still there. What has happened is that antibodies are gigantic molecules and they need to diffuse through this dense cross-linked network. Formaldehyde makes a lightly enough cross-linked network that sometimes, if you're careful, the antibodies, which are themselves proteins, can get in. Glutaraldehyde makes a denser network that causes difficulty with diffusion, and the glutaraldehyde cross-links themselves sometimes open back up and cross-link the antibody itself, stopping it from moving. But the question of preservation does not concern itself with the ease of imaging. That's really important, because all I care about for preservation is: is the information still there or not?
Matt Angle:
Well, I follow you, Robert, but here's where my mind goes. And you know more about this; I'm really looking forward to being educated. I agree with you that, let's say, if we're thinking about constructing a connectome, most of us think that if you did proper fixation, then eventually the engineering required to properly section the tissue, properly image it, and properly reconstruct it will exist. It's just a question of how long until the engineering works.
But I think when you start talking about whether my fixation and preservation method is valid: to some extent, if I write a message on a grain of sand and then drop it into the ocean, the information on that grain of sand is preserved. It's there. I could say, "Yeah, it's there. It's preserved. Go find it." But either the physics of retrieving it, or the probability of retrieving it given the size of the search space, means that if you construct a situation for yourself where you say, "Well, theoretically it's there, but it's inaccessible," that's not exactly the same as saying you just have a lot of engineering ahead. And so what I'd be really interested in is if you could parse out the scientific risks versus the engineering risks, and how we have confidence in the preservation techniques.
Robert McIntyre:
Yeah. So okay, what does preservation mean, first of all? Because that's a very important thing to think about. Essentially, you can think of preservation as taking a biological system that encodes some bag of memories, like a person is a bag of memories, and turning that living system into an artifact. And so the question of whether preservation preserves information in general can be answered by asking: if you had two inputs with different memories, even very subtly different memories, and then you preserved them and turned them into artifacts, are those artifacts still distinguishable? For example, to make this a little simpler, take text documents, a text file on a computer. I preserve text documents simply by saving them as files. I also preserve them by encrypting them. If I encrypt a file, the information is still preserved, because if I had two different files that differed by even a single character and I encrypted both of them, those two encrypted files would still be different.
So preservation does not include any idea of the ease of going backwards; they're really separate concepts, and I think it's important to keep them separate. You can preserve something, and the ease of going backwards can vary from relatively trivial, to pretty complex but doable with better technology, to this is actually fundamentally encrypted and it would take more computing power than exists in the universe to ever get it back, even though technically the information is preserved. And so for the question of cross-linking this biological sample a little harder than is normally done with formaldehyde: it's hard to predict future technology. It is true that with some of the current antibody techniques, there are things you can or can't label right now. It's also true that all the proteins are still there, in relatively the same place as when the tissue was alive.
And so the main question to ask here, as to whether you'd ever be able to access that information, is: is a denser cross-linked network in some way cryptographically secure? Is it a cryptographically secure encryption of the locations of these various proteins? And the answer is no. They're still there. They just have some cross-links around them. So if you want to estimate just how hard it might be to access that information, a good rule of thumb is that if it's physically there, we will be able to physically see it. And in fact, you already can. If you look at the SWITCH protocol, I believe from Murray in Chung's lab, if I remember right, they already showed how you can take densely cross-linked glutaraldehyde tissue and go in and label proteins in it.
But even in spite of that, if we didn't have any immunohistochemistry at all, if antibodies just didn't exist and we weren't very good at laser-targeting proteins, my argument for this preservation just wouldn't change. I would still say, "Look, the proteins are still there where they were, and we will figure out a way to access them if we need that information." The question is just whether you can preserve things or not.
And to a much more sophisticated degree, whether that was somehow encrypting the data or not, which it clearly isn't.
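To make Robert's text-file analogy concrete, here is a minimal sketch in Python (an editor's illustration; the toy XOR cipher and the example strings are assumptions, chosen only to show that encryption keeps distinct inputs distinct and fully recoverable):

# Editor's sketch of the analogy: "preserving" a document by encrypting it.
# A toy repeating-key XOR cipher (illustrative only, not secure) is
# deterministic, so two inputs that differ by a single character yield
# distinct ciphertexts, and the originals are fully recoverable with the key:
# the information is preserved even though reading it back takes extra work.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR; applying it twice with the same key inverts it."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"liquid-nitrogen"
memory_a = b"I saw them at a conference in 2019."
memory_b = b"I saw them at a conference in 2018."  # differs by one character

artifact_a = xor_cipher(memory_a, key)
artifact_b = xor_cipher(memory_b, key)

# The preserved artifacts are still distinguishable...
assert artifact_a != artifact_b
# ...and the original information is fully recoverable.
assert xor_cipher(artifact_a, key) == memory_a
assert xor_cipher(artifact_b, key) == memory_b
print("Distinct inputs stayed distinct; both decrypt back exactly.")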
Matt Angle:
Ken, is that in keeping with your thinking, or…
Kenneth Hayworth:
Yeah, it is. I hear what you're saying. When I talk to cryonicists, at some point they will essentially say, "Well, it doesn't matter if there are a few cracks in the brain. It doesn't matter if it was shattered. It doesn't matter if there were a whole bunch of dead cells, because the future will figure it out." That makes my skin crawl, because I think of the poor person whose job it is to put this together. Maybe Jeremy can speak to this. It's like–
Jeremy Maitin-Shepard:
Oh, man.
Kenneth Hayworth:
It's like, did you not respect us enough to at least use the best technique that you had at the time? We want to get to the resolution that people think is encoding things, and let me be very specific about that. To first order, the connectome: this cell connects to this cell. There are a lot of arguments that say that that is where the main information is. I can point to a bunch of papers that have decoded very simple functions, like visual responses of V1 neurons, from mere connectivity. The next level is: what are the sizes of the synapses? What is the size of the post-synaptic density? What is the branching of the dendritic tree? Clearly, all of this is being preserved with glutaraldehyde, because that's how we image everything. That's what connectomics is doing today. And as Jeremy will say, there have been a lot of advances in actually reading that information off. There's really a proof of concept there. So now, what about the functional connectivity of a synapse?
Well, there are tons of papers showing that the key synapses involved in most memories, perceptual memories, hippocampal-based memories, procedural memories in the striatum, I could go on, are mainly evidenced to be stored in excitatory connections onto dendritic spines. The vast majority of synapses in the cortex are excitatory synapses on dendritic spines, and there are plenty of papers showing that the functional strength of a dendritic spine synapse is proportional to its volume as seen by electron microscopy. So this is real evidence that glutaraldehyde preserves that, and that we can read it off today without any molecular information.
Now, let's go deeper though. If you had a real neuroscientist on, and I'd like to think I am one, they would say the real things driving the functioning of the cells are ion channels and their distribution in the dendritic tree, and receptor proteins in the synapses. What about complex calcium spikes going up the dendritic tree? You might need to know which ion channels are on each dendritic branch. What about the cerebellum, which is known to have some timing information that is not stored in synapses? What about all of those? If you look at all of those examples, what you see are particular proteins that are involved and that are known to be preserved by glutaraldehyde. As Robert was saying, there are several papers. I'm going to bring up nanobodies as one example, because there we can make smaller-than-antibody immunostains that can do a better job of getting into a dense network.
There are two lines of argument you can make. You can make the argument that we don't know anything about what the functional things really are that define how memories are stored and how the function of the brain is set up. I think the better argument is that we know exactly what those are. In general, they're proteins and these larger structures. We know that these are becoming extremely accessible to today's imaging techniques. And so it is not out of the question at all to extrapolate into the future and say that this is not going to be something cryptographic. It's not going to be writing on a stone that you throw into the ocean and can never retrieve. We are already able; we can find papers that show imaging of these things today, even with glutaraldehyde-preserved brain tissue.
Now, let me back up a second here. I'm sorry, I'm talking too much. But there is nothing that says that we have to use glutaraldehyde. This is one thing that, if the neuroscience community were doing its job and being open-minded, it would be asking: what is the best way of preserving for the future? Glutaraldehyde is necessary for electron microscopy because you have to dehydrate, pull the water out in order to put resin in, and that will pull out the lipids and proteins. So you have to tie those proteins down very sturdily, and you have to tie the lipids down with osmium tetroxide very sturdily, so that they don't get ripped away in later steps. What Robert's technique, aldehyde-stabilized cryopreservation, does is stop at the very first step. It preserves with fixation and then stops.
I have not seen any argument that says you couldn't preserve with a less aggressive fixative, maybe a lower percentage of glutaraldehyde and more formaldehyde, or maybe just a paraformaldehyde protocol that's compatible with expansion microscopy: preserve, and then do the cryopreservation. And then 200 years from now, when you rewarm, you can decide if you're going down the electron microscopy route and want to reperfuse with glutaraldehyde and tie everything down, or you want to make it maximally available for immunostaining, or something that we can't even think of yet. There's a bunch of very interesting research around these questions.
Matt Angle:
Okay. Jeremy, I absolutely want to hear from you now. Jeremy, can you quickly say a little bit about your background here?
Jeremy Maitin-Shepard:
Yeah. I'm a software engineer at Google in the connectomics research team. I have a background in machine learning and data storage and visualization. I've created some widely used tools for working with large electron microscopy data sets. And my team has collaborated with a number of outside groups including Janelia on doing some of the largest connectomic reconstructions to date.
Matt Angle:
Awesome. Let's switch gears a little bit and say, all right, we have preservation, and it's not perfect preservation, because nothing in this world is perfect. Based on the work that you and others have done, how do we know how good we need to be in order to do good reconstructions? What kinds of errors are catastrophic? What kinds of errors are minor? What do you see when you're trying to put the data back together?
Jeremy Maitin-Shepard:
Yeah, I can try to speak to that a bit. I certainly agree with Ken's point that it's hard at this point to predict exactly what the necessary imaging techniques will be far in the future. The major cause of reconstruction difficulties is when you have some defect in the tissue or the imaging process where you have maybe a 50 or 100 nanometer gap over a spatial region that's the diameter of a process. It's basically impossible to reconstruct that connection. But of course there's also a lot of redundancy in the brain, so it's hard to say how many of these individual, effectively broken neurons could be tolerated without compromising the whole thing.
Matt Angle:
Do we have a sense of, if we defined a defect unit, let's say arbitrarily, each little unit is 50 by 50 nanometers, or 100 by 100 nanometers: in a cubic millimeter of tissue, how many of those are we willing to tolerate, and what does one glitch cost? How many synapses could that result in misattributing or just missing? I think it'd be interesting. There aren't going to be many people who listen to this podcast who don't know a little bit about neuro, and I think it'd be interesting for people to wrap their heads around the numbers.
Jeremy Maitin-Shepard:
Yeah. Well, an individual tiny defect, a fleck of dirt or something, in the present imaging processes usually doesn't cause a great problem, because it's an uncorrelated defect. Usually there's enough information from nearby locations in the image to reconstruct the processes anyway with fairly high accuracy. It's the cases where you have defects that are spatially correlated, where you have a crack in the tissue. With the current methods that involve cutting, a defect that comes from the cutting process can affect a significant depth of 50 or 100 nanometers over a large cross-sectional area of multiple microns. That's where hundreds or thousands of the reconstructed neurons are going to be broken.
Matt Angle:
If you could ask technologists who are either doing the preservation or doing the cutting or doing the imaging right now, what's the thing that you would like them to stop doing so that you could get better reconstructions?
Jeremy Maitin-Shepard:
Well, I think they're already doing quite an impressive job of preserving the tissue as is. There's already a lot of development in further improving techniques, so I'm not sure I have much to…
Matt Angle:
Where is most of your data coming from right now? I remember I was adjacent to connectomics 10 years ago when Winfried was doing his block-face SEM and Jeff Lichtman was doing the tape-based methods, and they have different pluses and minuses. First of all, maybe Ken, could you just quickly summarize for the audience the different ways that people currently get this data?
Kenneth Hayworth:
Yeah. In the 1960s, there was a manual transmission electron microscopy approach, where people would slice very thin sections, about 50 nanometers, off of a plastic-embedded, heavy-metal-stained block of tissue, float them on a water bath, and then pick them up on a very thin substrate that they could stick into a transmission electron microscope and look at. That technique has been automated with a tape-based approach, where you have a whole reel of tape coming into the bath, picking up sections, and then going out. And the tape has holes in it, it's called grid tape, so the sections can then go through an electron microscope, all with automation.
So the 1 millimeter cube that the Allen Institute did was using that transmission electron microscopy technique, which really stretches back to those early days in the 1960s, just with pure automation. You mentioned Winfried Denk. He came up with the technique where he could take a plastic-embedded, heavy-metal-stained block (all of these are plastic-embedded, heavy-metal-stained tissue), put it directly in an electron microscope, scrape 30 nanometers off the top surface, and then image the block face. Scrape again, image the block face, and you're getting the same level of detail. By the way, a voxel size that you should have in mind is something like 10 by 10 by 10 nanometers, and the structures, the individual processes of axons and so on, might be about 100 nanometers in diameter. Individual synapses might be half a micron in size. These are the scales this imaging needs to get to.
So this block-face technique, imaging the face of the block, that's one. You can do the same thing with a focused ion beam. Jeremy did great work on several fly brain data sets, and one of them was produced with focused ion beam milling, where they took the fly, milled about 10 nanometers off the top surface, imaged that surface, then milled and imaged again. And the new things that are coming out, still in development, are multi-beam techniques, where you've got multiple scanning electron beams imaging a surface. You can either put tape-collected sections into a multi-beam microscope, there are 91 beams nowadays, so they go quite fast, or you can do ion milling on thick sections with multi-beam imaging. That's a technique we're developing right now. That's the landscape from a connectomic electron microscopy point of view.
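To put Ken's numbers in perspective, here is a back-of-envelope sketch in Python (an editor's illustration; the one-byte-per-voxel, uncompressed assumption is ours): a cubic millimeter imaged at 10 nanometer voxels works out to roughly a petabyte of raw data.

# Data volume implied by the numbers Ken quotes: ~10 nm voxels over ~1 mm^3,
# with ~100 nm axonal processes and ~0.5 micron synapses as the structures
# of interest. The 8-bit-grayscale, uncompressed assumption is ours.
voxel_nm = 10          # voxel edge length, in nanometers
cube_mm = 1.0          # imaged cube edge length, in millimeters

voxels_per_edge = cube_mm * 1e6 / voxel_nm    # 1 mm = 1,000,000 nm
total_voxels = voxels_per_edge ** 3           # ~1e15 voxels
raw_bytes = total_voxels * 1                  # one byte per voxel

print(f"Voxels per edge: {voxels_per_edge:,.0f}")
print(f"Total voxels:    {total_voxels:.1e}")
print(f"Raw data:        ~{raw_bytes / 1e15:.1f} petabytes, before compression")

# A 100 nm process spans only ~10 voxels across, which is why a correlated
# defect tens of nanometers deep can sever it entirely.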
Now, there's a whole set of stuff that's going on on the molecular imaging side that will also be able to eventually provide connectomics. The most immediate one is expansion microscopy, which I probably will just throw out there because I don't want to describe it. And another–
Matt Angle:
And Ed Boyden turns the brain into a diaper and then lets it swell up with water.
Kenneth Hayworth:
Exactly. And given the right fluorescent labels in the membrane or other places, you could get connectivity, and you get enough resolution because it's expanded. And there are x-ray techniques, maybe using big ring synchrotrons, that may also provide the resolution without needing to do any sectioning at all. So it's a wonderful array of different techniques to map circuits at this level, and I think we're only just beginning. This is 2022. Why would we think this is the pinnacle of brain imaging?
Matt Angle:
I really want to quickly bring Jeremy back in for a moment. What kind of data have you been looking at? You were describing some of the things that can make reconstruction more difficult; would you be able to summarize for us some of the pluses and minuses of these techniques, insofar as you have looked at them, in terms of reconstruction?
Jeremy Maitin-Shepard:
Yeah. So on a local level, the highest quality imaging certainly comes from focused ion beam scanning electron microscopy. But the challenge there is that, since it only currently works on relatively small blocks, you then have these seams between blocks, which tend to be where the most defects are. On the other hand, with these tape-based methods, you have larger defects from the cutting process, and there's also more chance for alignment issues or other defects from the imaging process, or potentially loss of sections during tape collection. But you have the advantage that the imaging itself is non-destructive, so there is a chance to correct for any problems from the imaging process itself. Whereas with destructive imaging methods like focused ion beam scanning electron microscopy, you only have a single chance at imaging. If that process fails, there's no going back. So I would say certainly the highest quality reconstructions so far have come from the focused ion beam scanning electron microscopy data sets, but that technique itself is not likely to scale to enormous volumes.
Robert McIntyre:
What I'd add here is that while this stuff is interesting from the perspective of imagining how an uploading technology might work, it basically doesn't matter at all for the practice of preservation. The analogy here would be, again, we're in 1972 and…
Matt Angle:
Although I think it's worth noting from an information standpoint: similarly, you take a strand of DNA, you freeze it at some temperature, and you leave it for some amount of time, and there is still some statistical process acting on that DNA. Some small fraction of bases will become corrupted, and then there's a question of whether we understand the code well enough that, with a certain number of defects, we can still get the genome. And I think that's something I'm interested in exploring with Jeremy, understanding our error tolerance.
Jeremy Maitin-Shepard:
I think if Robert's shop is located in a geologically active area and there's an earthquake, it seems like there is a... I'm not super familiar with how robust these glutaraldehyde-fixed, liquid-nitrogen-frozen brains are, but I could imagine that they might crack or something, and that would potentially be pretty problematic.
Robert McIntyre:
There are two classes of things that can generate errors. One of them matters, and one of them is totally irrelevant. All of the stuff that happens after warming up a brain and then processing it, which you currently have to deal with if you're trying to image samples, doesn't matter as much. If our current techniques of resin embedding lose information, that's not relevant to the question of preservation, because you'd use a different resin embedding technique in the future to process samples. What does matter is if information is lost in the process of doing the preservation and stabilization of the brain today, that is, chemical fixation and then vitrification. Those are the things that matter.
So for example, you spoke of cutting errors, cutting the resin. But we don't cut brains. The brain is intact. The entire body is intact. And you would never process a sample that's been preserved unless you already had a validated uploading technique that had been used on thousands of contemporary humans who could consent to the process, and that before that had a multi-decade tradition of being used successfully on animals. The same way that if I had a precious DNA sample from a species that went extinct in the '70s, I don't toss it into some experimental machine that's as liable to destroy it as it is to read it. I wait for the first-generation Illumina machine to use on my precious sample.
Jeremy Maitin-Shepard:
You raise an interesting point, but why do you think that the contemporary humans of 200 years from now are going to be more willing to make themselves experimental subjects, rather than looking at the back catalog of all of these guys from a few hundred years ago who have funds for–
Robert McIntyre:
Because they saw that the animals have been uploaded successfully using the technique. The same exact way we got space flight: we sent animals into space, and then we sent people into space who were extraordinarily brave and knew very well that they had a very good chance of dying in the process, but they wanted to be the first ones. So there you go. The technology that would be used is just not going to look like modern electron microscopy sample preparation circa 2022. It's not going to have slicing problems. The problems it might have would be if information is lost during the fixation stage.
Matt Angle:
Do you agree with that statement? Because on one hand, there's this, I don't know, Silicon Valley optimism that everything gets better every two years. On the other hand, there's the physical approach to the brain, and I actually think the way people are looking at brains right now is a good way to look at brains. And I wonder, do we think there's going to be a fundamentally different way of looking at the tissue that we fix today? Will we be looking at it in a fundamentally different way 100 years from now, or are we just going to get better at the way we're doing it now?
Kenneth Hayworth:
The two big split points, I think, are whether you do it fundamentally destructively. By destructively, I mean whether it's a block-face approach, or whether you section it and the sections aren't destroyed but you're looking at them. What you're really doing is removing all of the material on top so that you can get your instrument right in there to get high resolution. We have to do everything that way today. But a lot of people talk about nanotechnology, the type of nanotechnology that really could get in and behave more like bacteria, going around and sensing directly. Nobody has anything like that today, and to my knowledge nobody has any real idea of how to make it, but I wouldn't want to dismiss it. I could imagine that might be a better technique. I think that answers the question.
Let me throw one thing out, though, that I think we might be missing. Jeremy hit upon this; I just want to highlight it. If there is damage from anything that loses the information of where a tiny axon, 100 nanometers in diameter, is going, then you don't just lose that information. You lose the information of all of the synapses, maybe 10,000 of them, on the other side of that axon. And so this is the fundamental fragility of connectomics today. I have heard some people say that there might be a way around that with the endogenous molecules that neurons use for distinguishing self from other; there's a class of molecules that might essentially barcode cells endogenously. But with what we assume right now, there is this tremendous fragility of the connectivity.
When Robert talks about aldehyde fixation and aldehyde-stabilized cryopreservation and so on, he is painting a picture of the best-case scenario, the one I think Robert and I are demanding that the world get to. We need to be able to get to that best-case scenario. But there are a lot of people out there saying, "Hey, who cares? Go and preserve after you're dead. Go and preserve with a technique that might not get adequate cryoprotectant everywhere." So there might be a few ice crystals or cracks somewhere, and all of a sudden that fragility at the preservation step really does matter a great deal. I think we shouldn't lose the big picture: if the NIH wanted to preserve human brains at the level where Jeremy would look at them and say, "Oh, that is awesome connectomics data, easily traced," et cetera, and somebody else might say, "Oh, and those molecules are preserved where they should be," et cetera, et cetera: if the NIH wanted to do that, it could do it tomorrow, and I think it should. So we can talk about it, but–
Matt Angle:
Jeremy, I want to put forth an analogy to population imaging and electrophysiology of large neuronal ensembles. One of the things that people doing electrophysiology have noticed is that you may record from a large number of neurons, but often the system dynamics are describable using a much smaller number of variables; the number of latent variables needed to describe most of the power in the system is smaller than the number of neurons. We're all hopeful that the preservation will go well and the reconstruction will go well, and that once we understand the system, we could potentially have even a sparse connectome or an error-contaminated brain sample that would still be salvageable. Help us understand: in connectomics, are people discovering latent variables? Are there better ways to describe these densely connected networks of neurons computationally?
Jeremy Maitin-Shepard:
I think it's reasonable to assume that you could lose some. If you lose one neuron, you're still going to be fine.
Robert McIntyre:
We lose 10,000. We lose 100,000 a day, by the way. Just while we've been talking, you've had a few die every few seconds, and with each of them go about 10,000 synapses.
Jeremy Maitin-Shepard:
Half of the neurons, though... I think if you lost half of the neurons, a random half, there would probably be a lot of trouble with the reconstruction, certainly at the level of individual synapses. There seems to be–
Matt Angle:
Well, the worst thing, as you said earlier, is correlated errors. So if you actually lost a random half of the neurons, you might not have the biggest problem. But if you lost a meaningful cluster of neurons, say a couple of columns from a particular cortical area, that would probably matter more than if you spread it out. Do we even have the language? Do we have the models to describe this? If we have a cubic millimeter of tissue, have we been able to do experiments where we drop data, contaminate it, and still reconstruct the original?
Jeremy Maitin-Shepard:
Well, in the fruit fly you have cases where there are only two of a given neuron in the entire brain, and so certainly if you lost one of those, you're losing quite a bit. It's hard to say in humans. But yeah, if you lost half of all neurons, I think it's plausible that the reconstruction would be poor in that case.
Robert McIntyre:
Aging alone, if you just look at brain mass at 25 versus at 80, you are already in some cases losing half of your brain mass. It's amazing how much you can lose through that process and still not have it be catastrophic. If you deleted 10% of the transistors in a CPU, it would just fail to do anything. It wouldn't merely decline; it would be catastrophically ruined.
Kenneth Hayworth:
I think we could look at a lot of different neural network models of biological systems, models of the hippocampus, models of the visual cortex, et cetera, and you would get real answers to that, and they would make sense. For example, I read a paper recently where they literally chopped off dendritic branches with a laser microdissector in a live mouse and recorded how the tuning properties of that visual neuron responded when they chopped off 30% of its inputs. And not much changed. So there's redundancy at all sorts of different levels. But I think we might be missing the point a little bit. Jeremy, if instead of missing half the synapses or half the neurons, you lost only 1% of the volume, but you pulled that 1% out in random 1 micron cubes, which might happen from, I don't know, little ice crystals forming because of a bad preservation process, would that, from a connectomics point of view, completely destroy the connectivity?
Jeremy Maitin-Shepard:
Right, yeah. So that sort of thing would definitely essentially disconnect all of the processes, and that would be catastrophic for the reconstruction.
Kenneth Hayworth:
And that's the kind of standard I think we should demand. Anybody who might be offering preservation to somebody needs to at least be able to meet the brain preservation prize criteria. They need to show tissue that Jeremy would work on without just throwing up his hands and saying, "This is hopeless. I'm not even going to try."
Matt Angle:
Jeremy, if you could give a billion dollar prize to the brain preservation world, the connectomics world, the dense reconstruction world, the sectioning world, anyone you want, what would you give the billion dollar prize to? What's the problem in reconstruction that you most need to go away?
Jeremy Maitin-Shepard:
The interesting thing is that, yeah, there's been a lot of development in microscopy techniques. We currently have a lot of engineering challenges in producing a high-fidelity reconstruction: we can do the imaging, but we can't yet completely automatically produce a high-fidelity reconstruction. But that seems to be heading in the right direction. The key challenge that is still in its infancy is actually turning whatever we've taken from the tissue into a simulatable model. Until we've done that, we really have no validation of the entire process. That's where I imagine a billion dollar prize potentially being relevant: saying we want to be able to train a mouse, or a collection of mice, on different behaviors, and then image them, reconstruct them, simulate them, and observe those behaviors and distinguish those mice. A challenge along those lines, I think, would really get to the heart of whether we've made significant progress on this whole brain preservation issue.
Matt Angle:
Jeremy, I've heard you say that before and I think that that is a really brilliant idea, but could you unpack that a little bit more for people who are listening? Give an example of what could happen that would really convince you that you had done a good reconstruction.
Jeremy Maitin-Shepard:
Okay, sure. As a concrete example, you could imagine training some group of mice, each on separate complex behaviors that would ideally involve several of the senses and different types of planning and use of memory. Then, after preserving these mouse brains, reconstructing them, and simulating them, you would want to be able to observe the same behaviors that you trained. You could imagine it being done in a blinded way, where you have some expert mouse trainer who invents novel trained behaviors for each of these mice that are unknown to the people performing the reconstruction. If you can distinguish the mice and observe the trained behaviors in each of them, that really would indicate that you've gone a long way toward being able to do this complete preservation and reanimation.
I think even if you accomplished that, it may be that you've still only simulated the mouse at a rather coarse level in order to observe these trained behaviors, and that if you then tried to do the same with a human, you would find that it's not a very faithful reproduction of the person, even though some coarse behaviors are correctly preserved. But it would still certainly be a pretty substantial validation of the approach and its feasibility.
Matt Angle:
Is anyone doing this in a lower organism right now where we can actually reconstruct the organism?
Kenneth Hayworth:
Robert, you should address this, but let me say something beforehand. What Jeremy just laid out would be a fantastic experiment, but it is far beyond what we can do today. I think at one point the suggestion was to reconstruct the connectome of a mouse; that alone is already far beyond what we can do today. There are things we can do today that are not as complete. Let me give you one example. Tony Zador's group trained mice, I think it was mice, to lick in one direction if they heard a high tone and lick in a different direction if they heard a low tone. They could tell, after the mouse was dead and they took slices out, which direction it had learned. It was electrophysiology, but really what they were doing was counting synaptic connections into one area of the striatum versus a different area. And they could separate the mice into those that learned one direction versus those that learned the other.
It's a trivial memory, but it shows, and it's one of several examples of showing that yes, you can read off a learned behavior from connectivity. Have they done this with full animals? I think Robert has some other examples of C. elegans, perhaps?
Robert McIntyre:
Backing up one step here, what is the question again?
Matt Angle:
I was asking Jeremy what would be useful for him in doing his job. Initially I thought he was going to tell me, "I need better data. I need the knife to stop chattering when it's cutting, or stop making strips." And he turned it around on me and said that it's actually not the front end that's limiting right now, but the fact that there's very little ability for him to validate whether his reconstruction is useful or not. He was actually throwing down the gauntlet for a pairing of behavior and reconstruction. I do see that as materially different. Obviously, I'm sure you can get some immediate early gene expression changes; you could lateralize gene expression left or right, or something like that. The Tony Zador example is cute, but it's not going to help Jeremy figure out if his reconstruction is good.
Kenneth Hayworth:
Well, I've been doing a deep dive into the literature on this, and the way I look at it is that we have different models for how memories and learned behaviors are stored in the brain. What we should be looking for is experiments that get to the heart of whether those models are correct. We don't have to reconstruct a whole mouse and simulate its brain, and only have faith in our models if it runs the simulated maze correctly. We get evidence based on indirect evidence. And so it's not thinking through the problem clearly enough to say, "Gee, can we get a really high-level behavior from this?" What we should be asking is: what is the evidence that our models of how memory is encoded are even in the right ballpark? Like I said, I have slews of papers from great labs that have done fundamental work on trying to prove or disprove our current theories of memory.
Matt Angle:
Ken, what would you ask the community for? It sounds like one of your asks is for NIH to put money into this. Okay. But is there a particular... If there were one technical advance that you're not in control of that someone else in a different discipline could do, what would you want?
Kenneth Hayworth:
I have two hats here. One hat is that I'm like every other human being: we're going to die, and I would prefer a chance of waking up in a future that might be a better place. The other hat is that I'm really, really, really interested in how the brain works. I spend my free time reading neuroscience papers. So I'm going to assume you're asking on that side. From a techniques perspective, let me answer what you asked Jeremy, because our team sends him some data, and I feel horrible that there are things that are horrible about the data: things that are missing, gaps between sections, and that it's such a small dataset and the resolution is limited.
So I think one of the things that would be great is to be able to have 10 by 10 by 10 nanometer electron microscopy datasets, without any serious breaks in them, over volumes on the order of a few cubic millimeters. If you get to that kind of data, the artifacts being handed to Jeremy would not be bad enough to require a lot of human proofreading, and the volumes would be large enough to see connectivity within a region. Let me give you a concrete example. The hippocampal region and the entorhinal region have these things called grid cells and place cells, and there are beautiful theories and beautiful electrophysiology showing that these are mapping out arbitrary spaces. It's really getting at a high-level cognitive problem. And there are theories that say exactly how that should be connected. There was a recent paper saying that if you were to look at the connectivity, you should see a toroidal attractor network, although it gets much more complicated than even that.
If we had the ability to do connectomics at the several cubic millimeter scale in a way that didn't require a million hours of proofreading, then we could test that theory. If that theory turned out to be right, "Oh, that really complicated cognitive function, spatial maps that were learned, is readable off the connectome in the way the theory suggests," that would go a long way toward the whole prospect of mind uploading hundreds of years in the future.
Matt Angle:
Robert, what's the one thing that would give you confidence that your perfectly preserved brains will be well reconstructed in the future? Could you pinpoint something that you think people should work on first?
Robert McIntyre:
My perspective is from an archival point of view. I'm confident that if the information is preserved, we'll be able to access it in the future. I'm not as concerned with the way technology works right now. I'm more concerned with the fundamental assumptions that I'm making in doing preservations. Because glutaraldehyde fixation is not literally stopping time for a brain. It does cause some transformations to happen.
There's a social part to this, and then there's a practical, technical part. The social part is that I wish neuroscientists in general would actually take the idea seriously. My assertion is that engram preservation is logically equivalent to neuroscience's idea of the synaptic basis of memory. To say that you can't preserve engrams is to propose an alternative theory of long-term memory that is wildly non-standard compared to what current neuroscience says is the reality. I would like to have a debate around this idea, to at least have neuroscientists acknowledge that that is the case. If people seriously think there's some alternative form of memory that is able to survive all of the many, many things that brains already go through without losing long-term memory, but that is for some reason catastrophically disrupted by glutaraldehyde fixation, I'd like to see serious proposals, or I'd like to see neuroscientists just shut up about it and say that it works.
Matt Angle:
Exactly. Wanting to give–
Robert McIntyre:
And argue about the technology never coming around. But there you go.
Matt Angle:
Let me give, Jeremy, I want to give you the final word because I feel like we haven't heard enough from you. What should we think about this?
Jeremy Maitin-Shepard:
One of the things that always comes to mind when we think about brain preservation is that the archival issue is one thing, but you aren't really bringing a person back to life when you talk about simulation. You have this stored representation, but there's not really... The concept of a person is tied to the fact that a person exists as only a single entity in one place. But when you talk about simulation, there are no constraints anymore on what actually is being done on this computer system. You can run multiple copies. The whole concept of a person doesn't really apply anymore. So I think that's one of the other challenges, an ethical and philosophical challenge, that comes with this.
Robert McIntyre:
So if it's ever possible to do that in the future, then humans already are the type of entity of which that's true. Creating the uploading technology does not somehow redefine personhood; we just misunderstood personhood all along up to that point. And all of the things that we understand about empiricism and materialism say that people happen to be the type of entity that you could copy in principle. We have to address this already.
Me personally, I think of it as being related more to causality than to material. If you have the same causal connection to something that you have between going to sleep and waking up, I believe the entity that wakes up tomorrow is me, because I cause its behaviors. On that view, it's perfectly fine to have a simulation. It's a debate, but it's in no sense settled, and it's not obvious to me that that's how persons are.
Matt Angle:
Jeremy?
Robert McIntyre:
And it's not obvious to philosophers in general either.
Matt Angle:
Jeremy?
Jeremy Maitin-Shepard:
You could just say that a lot of our current ethical framework, like the idea that killing a person is bad, is based on the physical constraints of being a person. And once you take those away, is stopping the simulation an ethical problem? Is modifying the parameters? And what's the value of running the simulation at all? You have a server room in the basement with a bunch of simulations running: is that better than just having a pile of rocks in there, especially if they aren't connected to any other computer systems? So I feel like you do run into these questions. It's true that we can ask them even without the technology being in place. We can also ignore them, given that the technology is not in place.
Matt Angle:
Robert, Ken, Jeremy, thank you so much for your time.
Kenneth Hayworth:
Thank you.
Matt Angle:
Thank you for doing this. I really think this is going to be interesting to people.
Kenneth Hayworth:
Oh, thank you. Thank you for–
Robert McIntyre:
Nice being here.
Kenneth Hayworth:
And I want to say, Jeremy, I think that last point you made was extremely deep, and I had to restrain myself from... But you said it great. If you take this stuff seriously, then you have to open your mind to a larger understanding of what it is to be a person.
Matt Angle:
Or better yet, try not to think about it and just enjoy your coffee.
Kenneth Hayworth:
Yes.
Jeremy Maitin-Shepard:
Thanks, Matt.
Kenneth Hayworth:
I really enjoyed it.
Matt Angle:
Yeah, thank you. Thank you.