Using Theory to Reveal Function in Large Brain Circuits
Wed, May 03, 2017 @ 10:00 AM - 12:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Friedrich Sommer, UC Berkeley
Talk Title: Using Theory to Reveal Function in Large Brain Circuits
Abstract: Current technology provides a virtual deluge of information about brain structure and physiology. Our laboratory focuses on developing new theoretical frameworks and analytical methods that take advantage of this accelerated rate of data influx to address central problems in neuroscience. I will discuss three different projects.
High-density multi-electrode recordings monitor the spike trains of individual neurons with unparalleled temporal accuracy and also provide spatially distributed information about local field potentials (LFPs), slow signals generated by groups of neurons. In hippocampus, the relative timing between the spikes of a certain class of neurons (place cells) and a 10 Hz signal present in the LFP (the theta wave) carries information about the animal's position in the environment. Using data obtained in the Buzsaki laboratory, we developed a novel approach to decode the animal's position precisely from the LFP alone. Further, we were able to extract LFP place components that, like place cells, neatly tile the spatial environment. The LFP is far simpler to record than spike trains and is feasible to obtain from human patients. Thus, our results can be leveraged to build robust brain-computer interfaces.
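For illustration only (this is not the speaker's published method), a minimal sketch of how position might be decoded from LFP-derived place components: assume each component's amplitude is tuned to position, fit a spatial tuning curve per component, and read out position by matching an observed amplitude profile to the fitted curves. All function names and the tuning-curve model are assumptions.

import numpy as np

def fit_tuning_curves(component_power, positions, bins):
    """Mean power of each LFP component in each spatial bin (an LFP 'place field')."""
    idx = np.digitize(positions, bins) - 1
    curves = np.zeros((len(bins) - 1, component_power.shape[1]))
    for b in range(len(bins) - 1):
        curves[b] = component_power[idx == b].mean(axis=0)
    return curves

def decode_position(power_sample, curves, bins):
    """Return the center of the spatial bin whose tuning profile best matches the sample."""
    err = ((curves - power_sample) ** 2).sum(axis=1)
    best = int(np.argmin(err))
    return 0.5 * (bins[best] + bins[best + 1])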
Integration of information across regions and modalities is a fundamental working principle of the brain. We developed a novel method to estimate integrated information. The method can be applied to recordings with large numbers (thousands) of channels. We recently provided the first estimate of integrated information in a whole animal, the behaving nematode C. elegans. Further, we found that the mesoscopic mouse connectome integrates significantly more information than other network architectures, suggesting that integrated information is a plausible driving force in evolution.
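As a generic illustration of the kind of quantity involved (not the speaker's estimator), the sketch below computes a simple "whole minus sum of parts" integrated-information measure under a Gaussian approximation: the predictive information of the whole system minus that of its two halves, minimized over bipartitions. The Gaussian assumption and variable names are illustrative, and the brute-force search over bipartitions would not scale to thousands of channels.

import numpy as np
from itertools import combinations

def gaussian_predictive_info(x_past, x_future):
    """0.5 * log(|C_past| |C_future| / |C_joint|) for jointly Gaussian data (T x n arrays)."""
    joint = np.cov(np.hstack([x_past, x_future]).T)
    n = x_past.shape[1]
    _, logdet_joint = np.linalg.slogdet(joint)
    _, logdet_p = np.linalg.slogdet(joint[:n, :n])
    _, logdet_f = np.linalg.slogdet(joint[n:, n:])
    return 0.5 * (logdet_p + logdet_f - logdet_joint)

def phi_whole_minus_sum(data, lag=1):
    """Whole-system predictive information minus the sum over the two halves of the
    weakest bipartition; data is a T x n array of channels."""
    past, future = data[:-lag], data[lag:]
    n = data.shape[1]
    whole = gaussian_predictive_info(past, future)
    phi = np.inf
    for k in range(1, n // 2 + 1):
        for part in combinations(range(n), k):
            a = list(part)
            b = [i for i in range(n) if i not in part]
            parts = (gaussian_predictive_info(past[:, a], future[:, a]) +
                     gaussian_predictive_info(past[:, b], future[:, b]))
            phi = min(phi, whole - parts)
    return phi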
Theoretical principles, such as Hebbian plasticity, error-based learning, and reward-based learning, give insight into how the brain forms sensory codes and object categories and develops strategies to obtain rewards. However, we lack principles to understand how the brain guides the body to explore the environment efficiently, so that models of the world can be formed from small numbers of observations. We proposed a novel principle that selects actions leading to the sensory observations that best improve the current model of the environment. This principle can be cast in a formal framework based on defining the information gain of the model. The resulting algorithm builds models of novel environments with greater speed than previously achieved. On the one hand, the new principle generates testable predictions about how brains control action-perception loops; on the other, it has technical applications in robotics and artificial intelligence.
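A minimal sketch of information-gain-driven action selection, under toy assumptions that are not the speaker's algorithm: the agent's model of each action's outcomes is a Dirichlet belief, and it picks the action whose next observation is expected to change that belief the most (the expected KL divergence between updated and current posterior, averaged over the model's predictive distribution).

import numpy as np
from scipy.special import digamma, gammaln

def kl_dirichlet(a, b):
    """KL divergence between Dirichlet(a) and Dirichlet(b)."""
    return (gammaln(a.sum()) - gammaln(b.sum())
            - (gammaln(a) - gammaln(b)).sum()
            + ((a - b) * (digamma(a) - digamma(a.sum()))).sum())

def expected_information_gain(alpha):
    """Expected KL(updated posterior || current posterior) after one more observation,
    averaged over the predictive distribution of the next outcome."""
    predictive = alpha / alpha.sum()
    gain = 0.0
    for k, p_k in enumerate(predictive):
        updated = alpha.copy()
        updated[k] += 1.0  # posterior after hypothetically observing outcome k
        gain += p_k * kl_dirichlet(updated, alpha)
    return gain

def choose_action(beliefs):
    """Select the action whose expected observation is most informative about the model."""
    return int(np.argmax([expected_information_gain(a) for a in beliefs]))

# Example: three actions over four outcomes; the least-explored action (uniform counts
# of 1) yields the largest expected information gain and is selected.
beliefs = [np.array([5., 5., 5., 5.]), np.array([1., 1., 1., 1.]), np.array([2., 8., 1., 1.])]
print(choose_action(beliefs))  # -> 1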
Biography: Friedrich T. Sommer holds a Ph.D. in Physics from the University of Düsseldorf and a habilitation in Computer Science from the University of Ulm. After completing postdoctoral work at MIT and the University of Tübingen, he joined the Department of Computer Science at the University of Ulm in 1998 as an Assistant Professor. He became a Principal Investigator at the Redwood Neuroscience Institute in Menlo Park in 2003 before joining the University of California, Berkeley in 2005, where he is an Adjunct Professor at the Redwood Center for Theoretical Neuroscience and the Helen Wills Neuroscience Institute.
Host: Shrikanth Narayanan & Richard Leahy
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132
Audiences: Everyone Is Invited
Contact: Tanya Acevedo-Lam/EE-Systems