Sloan-Swartz Seminar in Theoretical Neurobiology
In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing millions of behaviorally relevant neurons. Dimensionality reduction has often shown that such datasets are strikingly simple: they can be described using far fewer dimensions (principal components) than the number of recorded neurons, and the projections onto these components yield a remarkably insightful dynamical portrait of circuit computation. This ubiquitous simplicity raises several profound and timely conceptual questions. What is the origin of this simplicity, and what does it imply about the complexity of brain dynamics? Would neuronal datasets become more complex if we recorded more neurons? How and when can we trust dynamical portraits obtained from only hundreds of neurons in circuits containing millions? We present a theory that answers these questions and test it using data from monkeys performing reaching movements. Overall, this theory yields a picture of the neural measurement process as a random projection of neural dynamics, conceptual insights into how we can reliably recover dynamical portraits in such severely under-sampled measurement regimes, and quantitative guidelines for the design of future large-scale experiments.
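The following is a minimal sketch, not taken from the talk, of the intuition that recording a few hundred neurons acts like a random projection of low-dimensional circuit dynamics. All sizes, the latent trajectories, and the noise level are illustrative assumptions, and PCA is done here via a plain NumPy SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed): a circuit of many neurons whose
# trial-averaged activity is driven by only a few latent dynamical modes.
n_circuit = 1_000_000   # behaviorally relevant neurons in the circuit
n_recorded = 200        # neurons we actually record
n_latent = 4            # true dimensionality of the dynamics
n_time = 500            # time points in the trial-averaged response

# Smooth low-dimensional latent trajectories (e.g., oscillatory modes).
t = np.linspace(0, 2 * np.pi, n_time)
latents = np.stack([np.sin((k + 1) * t + rng.uniform(0, 2 * np.pi))
                    for k in range(n_latent)], axis=1)        # (n_time, n_latent)

# Each circuit neuron mixes the latents with random weights, so recording a
# random subset of neurons is itself (approximately) a random projection.
mixing = rng.normal(size=(n_circuit, n_latent)) / np.sqrt(n_latent)
recorded_idx = rng.choice(n_circuit, size=n_recorded, replace=False)
rates = latents @ mixing[recorded_idx].T                      # (n_time, n_recorded)
rates += 0.05 * rng.normal(size=rates.shape)                  # measurement noise

# PCA on the recorded population: variance concentrates in ~n_latent components,
# and the leading projections recover the shape of the latent trajectories.
rates_centered = rates - rates.mean(axis=0)
U, S, Vt = np.linalg.svd(rates_centered, full_matrices=False)
explained = S**2 / np.sum(S**2)
print("variance explained by first 4 PCs:", np.round(explained[:4], 3))
portrait = rates_centered @ Vt[:n_latent].T                   # low-D dynamical portrait
```

In this toy setting only a handful of principal components carry essentially all of the variance, even though only 200 of a million neurons were "recorded", which is the kind of under-sampled regime the abstract describes.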
Refreshments will be served at 3:45 pm outside BBB 24.