Nirupam Gupta
EPFL IC IINFCOM DCL
INR 316 (Bâtiment INR)
Station 14
1015 Lausanne
Web site: https://dcl.epfl.ch/
Education
Ph.D.
Mechanical Engr.
University of Maryland - College Park
2013 - 2018
B.Tech.
Electrical Engr.
Indian Institute of Technology - Delhi
2009 - 2013
Research
Byzantine Resilience with Privacy
How can the guarantees of Byzantine resilience be combined with those of Differential Privacy in Distributed Machine Learning? It may sound like a simple matter of plugging the two mechanisms in together. However, the problem is far from that simple: enforcing Differential Privacy breaks the guarantees of existing Byzantine resilience techniques. And so the challenge ensues.

Fault-Tolerant Distributed Optimization
The problem of distributed (or collaborative) optimization in multi-agent systems has gained significant attention in recent years, owing to its wide applicability in distributed learning, the internet-of-things, and cyber-physical systems such as swarm robotics and smart power-grids. In this problem, the system comprises multiple agents (or nodes), each with a local cost (or loss) function. In the fault-free setting, all the agents are non-faulty (or honest), and the goal is to design a distributed algorithm that enables the agents to compute a minimum of the aggregate of their local cost functions. We, however, consider a setting where some agents may be Byzantine faulty. In that case, the objective is to enable all the non-faulty agents to compute a minimum of the aggregate of the non-faulty agents' cost functions.

Accelerating Distributed Gradient-Descent
The gradient-descent method is perhaps the oldest iterative optimization algorithm we know, thanks to Cauchy, who proposed it in 1847. However old it may be, the method is still as relevant as, if not more relevant than, it was in the second half of the 19th century. For instance, modern machine learning and artificially intelligent systems rely heavily on this simple and elegant iterative method. However, its rate of convergence is constrained by the conditioning of the optimization problem being solved. In short, the method requires a large number of iterations to converge (to a solution) if the problem is ill-conditioned. More importantly, implementing gradient descent on a digital machine that can only process and store real values with bounded precision is quite unstable when the problem is ill-conditioned.

The goal of this project is to design mechanisms that alleviate the impact of conditioning on the convergence of the gradient-descent method, and that are applicable to the distributed framework.
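The dependence on conditioning can be seen on a toy problem. The sketch below (an illustrative example, not project code; the function and condition numbers are chosen arbitrarily) runs plain gradient descent on the quadratic f(x) = ½(x₁² + κx₂²), whose condition number is κ, and counts the iterations needed to reach a fixed accuracy:

```python
# Illustrative sketch: plain gradient descent on the ill-conditioned quadratic
# f(x) = 0.5 * (x1^2 + kappa * x2^2), whose condition number is kappa.
# With the optimal fixed step size 2/(1 + kappa), the iterates contract by a
# factor of (kappa - 1)/(kappa + 1) per step, so convergence slows as kappa grows.

def gd_iterations(kappa, tol=1e-6, x0=(1.0, 1.0), max_iter=100_000):
    """Number of gradient-descent steps until ||x|| <= tol."""
    step = 2.0 / (1.0 + kappa)  # optimal fixed step size for this quadratic
    x1, x2 = x0
    for k in range(max_iter):
        if (x1 * x1 + x2 * x2) ** 0.5 <= tol:
            return k
        # gradient of f at (x1, x2) is (x1, kappa * x2)
        x1 -= step * x1
        x2 -= step * kappa * x2
    return max_iter

for kappa in (2.0, 10.0, 100.0):
    print(f"kappa = {kappa:6.1f}  iterations = {gd_iterations(kappa)}")
```

The iteration count grows roughly linearly with κ, which is the classical convergence behaviour of gradient descent; momentum-based variants (heavy-ball, Nesterov acceleration) are one well-known way to weaken this dependence to √κ, and this project seeks mechanisms with a similar effect in the distributed setting.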