Control Meets Learning Seminar
A major challenge in control is that current actions have an effect on future performance, whereas the future (in terms of measurements, disturbance signals, etc.) is unknown. To date, two major approaches to dealing with future uncertainty have been studied by control theorists: stochastic control (such as LQG), where the statistical properties of the signals are assumed to be known and average future performance is optimized, and robust control (such as H-infinity), where the worst-case future performance is optimized. Stochastic control is known to be sensitive to deviations from the assumed statistical model, and robust control is known to often be too conservative because it safeguards against the worst case. Motivated by learning theory, we propose regret as a criterion for controller design, defined as the difference between the performance of a causal controller (which has access only to past and current disturbances) and that of a clairvoyant controller (which also has access to future disturbances). The resulting controller has the interpretation of guaranteeing the smallest possible regret relative to the best non-causal controller, no matter what the disturbances are. In the full-information LQR setting, we show that the regret-optimal control problem can be reduced to the classical Nehari problem. We obtain explicit formulas for the optimal regret and for the regret-optimal controller, which turns out to be the sum of the classical $H_2$ state-feedback law and an $n$-th order controller (where $n$ is the state dimension of the plant). Simulations over a range of plants demonstrate that the regret-optimal controller interpolates nicely between the $H_2$ and the $H_\infty$ optimal controllers, and generally has $H_2$ and $H_\infty$ costs that are simultaneously close to their optimal values. The regret-optimal controller thus presents itself as a viable option for control system design.
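One natural formalization of the criterion described above is the following (the notation here is ours, not necessarily the speaker's; $J(K; w)$ denotes the LQR cost incurred by controller $K$ under disturbance sequence $w$, and the normalization of $w$ is one common choice):

```latex
% Regret of a causal controller K against the best clairvoyant (noncausal)
% controller, for a fixed disturbance sequence w:
\[
  \mathrm{Regret}(K; w) \;=\; J(K; w) \;-\; \min_{K_{\mathrm{nc}}\ \text{noncausal}} J(K_{\mathrm{nc}}; w),
\]
% The regret-optimal controller guards against the worst disturbance,
% here taken over energy-bounded sequences:
\[
  K^{\star} \;=\; \arg\min_{K\ \text{causal}} \;\sup_{\|w\|_{2}\le 1}\; \mathrm{Regret}(K; w).
\]
```

Because the benchmark is the best noncausal controller for the *realized* disturbance, this criterion holds for every disturbance sequence rather than for an assumed statistical model (as in $H_2$) or only the single worst case (as in $H_\infty$).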
We will also discuss ramifications and generalizations of these results.
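The causal-versus-clairvoyant comparison can be illustrated numerically on a finite-horizon instance. The sketch below is ours, not from the talk: the plant, horizon, and the choice of a finite-horizon LQR state-feedback law as the causal baseline are all illustrative assumptions. The clairvoyant controller sees the whole disturbance sequence, so its cost is a single least-squares minimum, and the regret (cost gap) is always nonnegative.

```python
# Illustrative sketch (hypothetical plant and horizon): regret of a causal
# LQR controller vs. the clairvoyant controller that sees all disturbances.
import numpy as np

rng = np.random.default_rng(0)
T = 20                                        # horizon
A = np.array([[1.0, 0.1], [0.0, 1.0]])        # double-integrator-like plant
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
n, m = 2, 1

# Causal baseline: finite-horizon LQR gains via backward Riccati recursion.
P, gains = Q.copy(), []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()

w = 0.5 * rng.standard_normal((T, n))         # disturbance sequence

# Roll out the causal controller: u_t depends only on x_t (past disturbances).
x, xs, us = np.zeros(n), [], []
for t in range(T):
    u = -(gains[t] @ x)
    x = A @ x + B @ u + w[t]
    xs.append(x)
    us.append(u)
J_causal = sum(x @ Q @ x for x in xs) + sum(u @ R @ u for u in us)

# Clairvoyant controller: with x_0 = 0, the stacked states satisfy
# X = G U + H W, so the optimal noncausal cost is one least-squares solve.
G = np.zeros((T * n, T * m))
H = np.zeros((T * n, T * n))
for t in range(T):                 # block row t holds x_{t+1}
    for s in range(t + 1):         # contribution of inputs/disturbances at s
        M = np.linalg.matrix_power(A, t - s)
        G[t * n:(t + 1) * n, s * m:(s + 1) * m] = M @ B
        H[t * n:(t + 1) * n, s * n:(s + 1) * n] = M
Qbar, Rbar = np.kron(np.eye(T), Q), np.kron(np.eye(T), R)
Wvec = w.reshape(-1)
U = -np.linalg.solve(G.T @ Qbar @ G + Rbar, G.T @ Qbar @ (H @ Wvec))
Xvec = G @ U + H @ Wvec
J_nc = Xvec @ Qbar @ Xvec + U @ Rbar @ U

regret = J_causal - J_nc           # nonnegative: clairvoyance can only help
print(f"causal {J_causal:.3f}, clairvoyant {J_nc:.3f}, regret {regret:.3f}")
```

The regret-optimal controller of the talk would, by construction, make the worst-case value of this gap (over all energy-bounded disturbance sequences) as small as possible.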