Rigorous Systems Research Group (RSRG) Seminar
We live in a prolific age of artificial intelligence and machine learning. These automation technologies underlie physical systems (e.g., robots and self-driving vehicles) and virtual systems (e.g., financial and inventory management). The problem is that many of these autonomous systems have become so intricate and opaque that we hit a complexity roadblock. For example, it can be difficult to tell why a classifier or a recommendation engine based on machine learning works. Moreover, even when the algorithms do work, how can we quantify their limitations, safety, privacy, and performance with guarantees? In this talk, I borrow notions from control theory and information theory to address two challenges in autonomy. The first, motivated by the Mars 2020 project, concerns navigation of an autonomous agent in an uncertain environment (modeled by a Markov decision process) subject to communication and sensing limitations (quantified by transfer entropy) and a high-level mission specification (characterized by linear temporal logic formulae). The second concerns belief verification in autonomous systems (represented by a partially observable Markov decision process), with applications to privacy verification of autonomous systems (e.g., a robot) operating on shared infrastructure, and to machine teaching.
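For readers unfamiliar with the POMDP machinery the abstract mentions, the following is a minimal sketch (not from the talk) of the Bayesian belief update that belief verification reasons about: the agent maintains a probability distribution over hidden states and revises it after each action and observation. The transition and observation matrices here are illustrative assumptions, not models from the speaker's work.

```python
import numpy as np

# Two hidden states, one action, two observations (illustrative only).
T = np.array([[0.9, 0.1],        # T[s, s'] = P(s' | s, a)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],        # O[s', o] = P(o | s')
              [0.3, 0.7]])

def belief_update(b, obs):
    """One step of the POMDP belief filter: predict, then correct."""
    predicted = b @ T                   # prior over the next state
    corrected = predicted * O[:, obs]   # weight by observation likelihood
    return corrected / corrected.sum()  # renormalize to a distribution

b = np.array([0.5, 0.5])                # start maximally uncertain
b = belief_update(b, obs=0)             # observe o = 0, shift mass to state 0
```

Belief verification asks questions about all beliefs reachable by such updates, e.g., whether an outside observer's belief about a robot's location can ever concentrate enough to violate a privacy requirement.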