Caltech

Rigorous Systems Research Group (RSRG) Seminar

Monday, February 24, 2020
2:00pm to 3:00pm
Annenberg 121
Distributed decision making in networked systems: from optimization to reinforcement learning
Guannan Qu, Resnick Sustainability Institute Postdoctoral Scholar, Computing & Mathematical Sciences, Caltech

Cyber-physical systems such as the power grid, the Internet of Things (IoT), and transportation systems are incorporating ever-larger numbers of devices with sensing capabilities. The resulting explosion of data calls for a rethinking of traditional control and optimization theory: how to incorporate learning techniques that exploit the data for control and optimization, and how to cope with the challenges posed by the sheer scale of the network.

In the first part of the talk, we focus on distributed optimization, where nodes cooperate to minimize a global loss function that is the sum of their local loss functions, each formed from a local data set. In this classical setting, our results provide the fastest known gradient-based distributed algorithm. In the second part, we go beyond static optimization and investigate how data can be used directly to design control policies. In particular, we study reinforcement learning (RL) for localized control of networked systems. Despite its wide-ranging successes, applying RL to multi-agent systems has proven challenging due to scalability issues. Harnessing the network structure, we develop a Scalable Actor-Critic framework that learns an optimal local policy in a scalable manner. This is the first approach that provably addresses scalability in the context of multi-agent RL.
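To make the distributed-optimization setting concrete, the sketch below runs plain decentralized gradient descent (DGD) on a 4-node ring: each node averages with its neighbors, then takes a gradient step on its own local loss. This is a generic illustration of the problem setup, not the algorithm from the talk; the quadratic losses, mixing weights, and step size are all invented for the example.

```python
# Decentralized gradient descent sketch on a 4-node ring.
# Hypothetical local losses f_i(x) = (x - a_i)^2 / 2, so the
# global loss sum_i f_i(x) is minimized at the mean of the a_i.
a = [1.0, 2.0, 3.0, 4.0]   # each node's local data (scalar target)
n = len(a)

# Doubly stochastic mixing matrix for a ring (lazy Metropolis weights):
# weight 0.5 on self, 0.25 on each of the two ring neighbors.
W = [[0.5 if i == j else (0.25 if (i - j) % n in (1, n - 1) else 0.0)
     for j in range(n)] for i in range(n)]

eta = 0.01                 # small constant step size
x = [0.0] * n              # each node's local estimate of the optimum

for _ in range(1000):
    # Consensus step (average with neighbors) + local gradient step;
    # each node only uses its own data and its neighbors' estimates.
    x = [sum(W[i][j] * x[j] for j in range(n)) - eta * (x[i] - a[i])
         for i in range(n)]

print([round(v, 2) for v in x])  # all estimates land near the optimum 2.5
```

With a constant step size, plain DGD converges only to a small neighborhood of the optimum (the bias shrinks with eta); faster gradient-based methods of the kind the abstract refers to remove this limitation.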

For more information, please contact Yu Su by email at [email protected].