Episode 4

Dimensionality Reduction for Neural Recordings

February 18, 2021

In this episode, friend of the podcast Vikash Gilja reprises his role as Vikash Gilja. We are also joined by Konrad Kording, Chethan Pandarinath, and Carsen Stringer. We talk about how dimensionality reduction is used to better understand large-scale neural recordings. This episode is fairly technical, but it contains many great references if you are interested in learning more. We open with a brief explainer video by Paradromics’ own Aditya Singh.

00:40 | Dimensionality Intro

04:42 | Podcast Start

07:50 | Janelia Research Campus

08:56 | Translational Neuroengineering Lab

09:35 | Stanford Neural Prosthetics Translational Lab

10:10 | Shenoy Lab

12:00 | Deep Brain Stimulation

12:57 | Chethan’s work on retinal prosthetics

15:00 | Immunology

15:20 | Jonathan Ruben

15:30 | Byron Yu

15:41 | Gatsby Computational Neuroscience Unit

18:00 | Joshua Tenenbaum

18:30 | Kording Lab at UPenn

18:46 | Neuromatch Academy

19:47 | Neuromatch Academy Q&A

21:21 | Dimensionality reduction for neural recordings

26:22 | The Curse of Dimensionality

30:11 | Principal Component Analysis

32:20 | Neural Firing as a Poisson Process

33:13 | Shared Variance Component Analysis

35:18 | Cross-validation in large-scale recording

38:29 | A theory of multineuronal dimensionality

39:10 | Random projections explained with visuals

42:24 | Correcting a reductionist bias

48:30 | Noise Correlations

49:35 | More on Noise Correlations

57:40 | LFADS

01:01:51 | What is a stationary process?

01:06:02 | Inferring single-trial neural population dynamics

01:06:46 | Task Specificity

01:07:28 | Lee Miller

01:08:18 | “I don’t know, I might be wrong”

01:13:16 | Neural Constraints on Learning

01:15:00 | A recent exciting paper from Yu and Batista Labs

01:19:01 | Hume on Causation
