Laboratory of Computational Neuroscience
Fields of expertise
Mission
As director of the Laboratory of Computational Neuroscience (LCN) at EPFL, Wulfram Gerstner conducts research in computational neuroscience, with special emphasis on models of spiking neurons, spike-timing-dependent plasticity, and reward-based learning in spiking neurons. These questions on learning in spiking neurons are linked to the problem of neuronal coding in single neurons and populations. His teaching concentrates on learning in formal models and biological systems.
Biography
Wulfram Gerstner is Director of the Laboratory of Computational Neuroscience (LCN) at EPFL. His research in computational neuroscience concentrates on models of spiking neurons and spike-timing-dependent plasticity, on the problem of neuronal coding in single neurons and populations, as well as on the link between biologically plausible learning rules and behavioral manifestations of learning.
He teaches courses for physicists, computer scientists, mathematicians, and life scientists at EPFL.
After studying physics in Tübingen and at the Ludwig-Maximilians-University Munich (Master's degree, 1989), Wulfram Gerstner spent a year as a visiting researcher in Berkeley. He received his PhD in theoretical physics from the Technical University of Munich in 1993 with a thesis on associative memory and dynamics in networks of spiking neurons. After short postdoctoral stays at Brandeis University and the Technical University of Munich, he joined EPFL in 1996 as an assistant professor. Promoted to associate professor with tenure in February 2001, he has been a full professor since August 2006, with a double appointment in the School of Computer and Communication Sciences and the School of Life Sciences. Wulfram Gerstner has been an invited speaker at numerous international conferences and workshops. He has served on the editorial boards of the Journal of Neuroscience, Network: Computation in Neural Systems, the Journal of Computational Neuroscience, and Science.
F. Zenke, E.J. Agnes and W. Gerstner, "Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks", Nature Communications, 6:6922, 2015.
G. Hennequin, T. Vogels and W. Gerstner, "Optimal Control of Transient Dynamics in Balanced Networks Supports Generation of Complex Movements".
D.J. Rezende and W. Gerstner, "Stochastic variational learning in recurrent spiking networks", Frontiers in Computational Neuroscience, 8:38, 2014.
T. Vogels, H. Sprekeler, F. Zenke, C. Clopath and W. Gerstner, "Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks", Science, 334:1569-1573, 2011.
C. Clopath, L. Busing, E. Vasilaki and W. Gerstner, "Connectivity reflects coding: a model of voltage-based STDP with homeostasis", Nature Neuroscience, 13:344-352, 2010.
Full list of publications (1992 - )
Teaching & PhD
Life Sciences Engineering
PhD Students
Barry Martin Louis Lucien Rémy, Becker Sophia, Colombo Florian François, Gastaldi Chiara, Iatropoulos Georgios, Illing Bernd Albert, Liakoni Vasiliki, Modirshanechi Alireza, Schmutz Valentin Marc, Simsek Berfin, Sourmpis Christos, Stanojevic Ana
Past EPFL PhD Students
Arleo Angelo, Badel Laurent, Bathellier Brice, Borer Silvio, Chavarriaga Lozano Ricardo Andres, Clopath Claudia, Corneil Dane Sterling, Faraji Mohammadjavad, Frémaux Nicolas, Gerhard Joao Emanuel Felipe, Gers Felix, Gozel Olivia, Hennequin Guillaume, Herrmann Scheurer Alix, Jimenez Rezende Danilo, Jolivet Renaud, Lehmann Marco Philipp, Luksys Gediminas, Marcille Nicolas, Mayor Julien, Mensi Skander, Meylan Laurence, Moerland Perry, Muscinelli Samuel Pavio, Naud Richard, Pfister Jean-Pascal Théodor, Pozzorini Christian Antonio, Seeholzer Alexander Kevin, Setareh Hesam, Sheynikhovich Denis, Smeraldi Fabrizio, Sottas Pierre-Edouard, Spiridon Paltani Mona, Stein Naves de Brito Carlos, Strösslin Thomas, Tomm Christian, Zenke Friedemann, Ziegler Lorric
- Simple perceptrons for classification
- Reinforcement Learning 1: Bellman equation and SARSA
- Reinforcement Learning 2: variants of SARSA, Q-learning, n-step-TD learning
- Reinforcement Learning 3: Policy gradient
- Deep Networks 1: BackProp and Multilayer Perceptrons
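To illustrate two of the algorithms named in the reinforcement-learning lectures above, here is a minimal sketch of the tabular SARSA (on-policy) and Q-learning (off-policy) updates. The environment, function names, and parameter values are illustrative assumptions, not taken from the course material: a hypothetical 5-state chain in which the agent starts at state 0 and receives reward 1 on reaching state 4.

```python
import random

N_STATES = 5  # chain: states 0..4, state 4 is terminal (assumed toy MDP)
GAMMA = 0.9   # discount factor
ALPHA = 0.1   # learning rate
EPS = 0.1     # epsilon-greedy exploration rate

def step(s, a):
    """Environment transition: action 0 = left, 1 = right.
    Returns (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

def eps_greedy(Q, s, rng):
    """Pick a random action with probability EPS, else the greedy one."""
    if rng.random() < EPS:
        return rng.randrange(2)
    return max(range(2), key=lambda a: Q[(s, a)])

def sarsa(n_episodes=2000, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in range(2)}
    for _ in range(n_episodes):
        s = 0
        a = eps_greedy(Q, s, rng)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = eps_greedy(Q, s2, rng)
            # SARSA: bootstrap on the action actually taken next (on-policy)
            target = r + (0.0 if done else GAMMA * Q[(s2, a2)])
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s, a = s2, a2
    return Q

def q_learning(n_episodes=2000, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in range(2)}
    for _ in range(n_episodes):
        s = 0
        done = False
        while not done:
            a = eps_greedy(Q, s, rng)
            s2, r, done = step(s, a)
            # Q-learning: bootstrap on the greedy next action (off-policy)
            target = r + (0.0 if done else GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]))
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2
    return Q

if __name__ == "__main__":
    Q = q_learning()
    # Greedy policy from every non-terminal state: 1 means "move right"
    print([max(range(2), key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The only difference between the two updates is the bootstrap term, which is exactly the on-policy/off-policy distinction the lecture titles refer to; both targets are sample-based versions of the Bellman equation.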