Biography

Martin's research focuses on a computational understanding of the neural mechanisms underlying natural intelligence in vision and language. To achieve this goal, he bridges Deep Learning, Neuroscience, and Cognitive Science, building artificial neural network models that match the brain's neural representations in their internal processing and are aligned with human behavior in their outputs.
He completed his PhD in the MIT Brain and Cognitive Sciences department, advised by Jim DiCarlo and in collaboration with Ev Fedorenko and Josh Tenenbaum, following Bachelor's and Master's degrees in computer science at TUM, LMU, and UNA. His previous work includes research on human-like vision at Harvard with Gabriel Kreiman, natural language processing and reinforcement learning at Salesforce with Richard Socher, and several other projects in industry. Martin also co-founded two startups. His work has been featured in the news by Science magazine, MIT News, and Scientific American, among others.
Martin's work has been published in top journals including Neuron and PNAS, as well as at leading machine learning venues such as NeurIPS and ICLR, where his papers are routinely selected for Oral and Spotlight presentations (0.5% acceptance rate). He has received numerous awards and honors for his research, including the Neuro-Irv and Helga Cooper Open Science Prize, the McGovern fellowship, the Walle Nauta Award for Continuing Dedication in Teaching, the Takeda fellowship in AI Health, the German Federal scholarship, and the MIT Singleton and Shoemaker fellowships. With his startup Integreat, he was a finalist in the Google.org Impact Challenge and won the TUM Social Impact Award and the Council of Europe's Youth Award.
PhD, Brain and Cognitive Sciences, MIT, 2017 - 2022
Master's, Computer Science, TUM & LMU & UNA, 2014 - 2017
Bachelor's, Computer Science, TUM & LMU & UNA, 2011 - 2014
Kubilius, J.; Schrimpf, M.; Hong, H.; Majaj, N. J.; Rajalingham, R.; Issa, E. B.; Kar, K.; Bashivan, P.; Prescott-Roy, J.; Schmidt, K.; Nayebi, A.; Bear, D.; Yamins, D. L. K.; DiCarlo, J. J.
NeurIPS 2019 (oral)
Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs

Schrimpf, M.; Kubilius, J.; Lee, M. J.; Ratan Murty, N. A.; Ajemian, R.; DiCarlo, J. J.
Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence

Schrimpf, M.; Blank, I. A.; Tuckute, G.; Kauf, C.; Hosseini, E. A.; Kanwisher, N.; Tenenbaum, J. B.; Fedorenko, E.
The Neural Architecture of Language: Integrative Modeling Converges on Predictive Processing

Dapello*, J.; Kar*, K.; Schrimpf, M.; Geary, R.; Ferguson, M.; Cox, D. D.; DiCarlo, J. J.
ICLR 2023 (notable top-5%)
Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness

Bagus, A. M. I. G.; Marques, T.; Sanghavi, S.; DiCarlo, J. J.; Schrimpf, M.
NeurIPS SVRHM 2023
Primate Inferotemporal Cortex Neurons Generalize Better to Novel Image Distributions Than Analogous Deep Neural Networks Units

Tuckute, G.; Sathe, A.; Srikant, S.; Taliaferro, M.; Wang, M.; Schrimpf, M.; Kay, K.; Fedorenko, E.
Driving and Suppressing the Human Language Network Using Large Language Models

Dapello*, J.; Marques*, T.; Schrimpf, M.; Geiger, F.; Cox, D. D.; DiCarlo, J. J.
NeurIPS 2020 (spotlight)
Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations