Alexandre Alahi is currently an Assistant Professor at EPFL. He spent five years at Stanford University as a postdoctoral researcher and research scientist after obtaining his Ph.D. from EPFL. His research enables machines to perceive the world and make decisions in the context of transportation problems and smart environments. He has worked on the theoretical challenges and practical applications of socially-aware Artificial Intelligence, i.e., systems equipped with perception and social intelligence. He was awarded the Swiss NSF early and advanced researcher grants for his work on predicting human social behavior. He won the CVPR Open Source Award (2012) for his work on retina-inspired image descriptors, and the ICDSC Challenge Prize (2009) for his sparsity-driven algorithm that has tracked more than 100 million pedestrians to date. His research has been covered by the BBC, ABC, PBS, Euronews, The Wall Street Journal, and other national news outlets around the world. Alexandre has also co-founded multiple startups, such as Visiosafe, and won several startup competitions. He was elected one of the Top 20 Swiss Venture Leaders in 2010.
Fields of expertise
Transportation and Mobility
Socially-aware Artificial Intelligence
Teaching & PhD
- Civil Engineering
- Doctoral Program in Civil and Environmental Engineering
- Doctoral Program in Computer and Communication Sciences
- Doctoral Program in Robotics, Control, and Intelligent Systems
- Doctoral Program in Electrical Engineering
Data science and Artificial Intelligence (AI) are poised to reshape the transportation industry with self-driving cars, delivery robots, self-moving Segways, and smart terminals. In this course, students will learn the fundamentals behind these AI-driven systems.
Research summary
We work on the theoretical challenges and practical applications of socially-aware systems, i.e., machines that can not only perceive human behavior, but also reason with social intelligence in the context of transportation problems and smart spaces.
We envision a future where intelligent machines are ubiquitous, where self-driving cars, delivery robots, and self-moving Segways are part of everyday life. Beyond embodied agents, we will also see our living spaces – our homes, buildings, and cities – become equipped with ambient intelligence that can sense and respond to human behavior. However, to realize this future, intelligent machines need to develop social intelligence and the ability to make safe and consistent decisions in unconstrained, crowded social scenes. Self-driving vehicles must learn social etiquette in order to navigate cities like Paris or Naples. Social robots need to comply with social conventions and obey (unwritten) common-sense rules to operate effectively in crowded terminals. For instance, they need to respect personal space, yield right-of-way, and “read” the behavior of others to predict future actions.
Our research is centered on understanding and predicting human social behavior from multi-modal visual data. Our work spans multiple aspects of socially-aware systems: (1) collecting multi-modal data at scale, (2) extracting coarse-to-fine-grained behaviors in real time, (3) designing deep learning methods that learn to predict human social behavior in a fully data-driven way, and (4) integrating the developed methods into real-world systems, such as a vehicle or a socially-aware robot that navigates crowded social scenes.
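To make step (3) concrete, below is a minimal sketch of one common data-driven formulation of this problem: a recurrent network that encodes a pedestrian's observed positions and rolls out future ones. This is an illustrative toy example in PyTorch, not one of our published models; the class name, dimensions, and hyperparameters (`TrajectoryPredictor`, a 64-unit LSTM, 8 observed and 12 predicted steps) are assumptions made for the sketch.

```python
# Illustrative sketch only: a toy sequence-to-sequence trajectory predictor.
# All names and hyperparameters here are assumptions, not a published model.
import torch
import torch.nn as nn


class TrajectoryPredictor(nn.Module):
    """Encode an observed 2D trajectory, then roll out future positions."""

    def __init__(self, hidden_size: int = 64):
        super().__init__()
        # Encoder consumes the observed (x, y) sequence.
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        # Decoder predicts one displacement per future time step.
        self.decoder = nn.LSTMCell(input_size=2, hidden_size=hidden_size)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, observed: torch.Tensor, pred_len: int = 12) -> torch.Tensor:
        # observed: (batch, obs_len, 2) past positions in metric coordinates.
        _, (h, c) = self.encoder(observed)
        h, c = h.squeeze(0), c.squeeze(0)
        pos = observed[:, -1, :]  # start from the last observed position
        future = []
        for _ in range(pred_len):
            h, c = self.decoder(pos, (h, c))
            pos = pos + self.head(h)  # predict a displacement and integrate it
            future.append(pos)
        return torch.stack(future, dim=1)  # (batch, pred_len, 2)


# Toy usage: 4 pedestrians, 8 observed steps, 12 predicted steps.
model = TrajectoryPredictor()
past = torch.randn(4, 8, 2)
print(model(past, pred_len=12).shape)  # torch.Size([4, 12, 2])
```

Training such a model end-to-end on large-scale trajectory data is what makes the behavior prediction fully data-driven: no hand-crafted motion rules are specified, only observed past positions and future positions to imitate.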