CMI Seminar: Guannan Qu
We consider the distributed optimization problem over a network of agents, where the objective is to optimize a global function formed by a sum of local functions, with each agent allowed to use only local information and local communication with neighbors. This problem is of fundamental interest in multi-agent systems and has found applications in sensor networks, multi-robot systems, and distributed machine learning. Various distributed gradient methods have been proposed to solve it, but in this talk we show that a gap in convergence rate exists between existing distributed gradient algorithms and the optimal centralized gradient algorithm. We then develop a gradient tracking technique that yields two algorithms that partially bridge this gap. In particular, one of our algorithms is the first to achieve Nesterov-type acceleration in distributed optimization.
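To make the gradient tracking idea concrete, here is a minimal sketch of a generic gradient-tracking update (an assumed textbook form; the talk does not spell out its exact algorithms). Each agent i keeps a solution estimate x_i and a tracker y_i of the network-average gradient, and mixes both quantities with its neighbors through a doubly stochastic matrix W. The ring network, quadratic local objectives, and step size below are illustrative choices, not details from the talk.

```python
import numpy as np

n, dim = 4, 3
rng = np.random.default_rng(0)
b = rng.standard_normal((n, dim))      # local data: f_i(x) = 0.5*||x - b_i||^2

def local_grad(x, i):
    # Gradient of agent i's local quadratic objective (illustrative choice)
    return x - b[i]

# Doubly stochastic mixing matrix for a 4-agent ring:
# each agent averages with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

eta = 0.1                              # step size (assumed, not from the talk)
x = np.zeros((n, dim))                 # one row per agent
y = np.array([local_grad(x[i], i) for i in range(n)])  # initialize trackers
g_old = y.copy()

for _ in range(500):
    x = W @ x - eta * y                # consensus step + descent along tracker
    g_new = np.array([local_grad(x[i], i) for i in range(n)])
    y = W @ y + g_new - g_old          # y tracks the average of local gradients
    g_old = g_new

x_star = b.mean(axis=0)                # centralized minimizer of sum_i f_i
err = np.max(np.abs(x - x_star))       # all agents should agree on x_star
print(err)
```

The key design point is the second update: because W is doubly stochastic, the average of the y_i stays equal to the average of the current local gradients at every iteration, so each agent descends along an increasingly accurate estimate of the global gradient rather than only its own local gradient.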
Based on joint work with Na Li.