Resilience and Learning for Large-Scale Multi-Agent Autonomy
Interdisciplinary Areas: Internet of Things and Cyber Physical Systems; Data/Information/Computation; Security and Privacy
Multi-Agent Systems (MAS) are collections of autonomous agents, each with sensing, computation, and decision-making capabilities. By working as a cohesive whole, large-scale MAS can achieve complex missions well beyond the capabilities of any individual system, such as exploring unknown environments, monitoring a large city, and search and rescue. Unlike systems with a centralized coordinator, large-scale MAS operate in a distributed way, achieving global objectives through only local coordination among neighboring agents. On one hand, MAS equipped with such distributed algorithms are inherently robust to individual agent or link failures; on the other hand, this heavy reliance on local coordination raises a major concern: the whole system can be compromised by cyber-attacks on one or more vulnerable agents, especially in adversarial environments. The objective of this project is to establish cutting-edge algorithms and design principles for resilient MAS that can dynamically (in real time) anticipate and react to internal and external threats in order to accomplish the specified mission. In particular, a core component of the research will focus on the use of learning, which enables the system to adapt its actions to environmental and adversarial conditions and improve its performance over time.
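To make the idea of resilient local coordination concrete, the sketch below illustrates one update round of a W-MSR-style resilient consensus rule, in which each agent discards up to F extreme neighbor values before averaging. This is only a minimal illustration of the general class of algorithms the project concerns, not the project's specific method; the function name and uniform-weight choice are illustrative assumptions.

```python
def wmsr_update(own_value, neighbor_values, F):
    """One round of a W-MSR-style resilient consensus update (illustrative
    sketch): discard up to F neighbor values above own_value and up to F
    below it, then average the remainder with own_value (uniform weights)."""
    higher = sorted(v for v in neighbor_values if v > own_value)
    lower = sorted(v for v in neighbor_values if v < own_value)
    equal = [v for v in neighbor_values if v == own_value]
    # Remove the F largest of the higher values and the F smallest of the
    # lower values; if fewer than F exist on a side, remove all of them.
    kept = (lower[F:] if len(lower) > F else []) \
         + (higher[:-F] if len(higher) > F else []) \
         + equal
    values = kept + [own_value]
    return sum(values) / len(values)
```

For example, an agent at value 0.0 whose neighbors report 1.0, 2.0, and 100.0 with F = 1 discards the outlier 100.0 and moves to 1.0, so a single compromised neighbor cannot drag the agent arbitrarily far.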
August 15, 2020
Solid mathematical skills and background in relevant areas such as networks, control, optimization, or learning; passion for solving challenging research problems using methodologies from different areas; good communication and writing skills in English; ability to thrive in a collaborative environment.
School of Aeronautics and Astronautics
X. Wang, S. Mou, S. Sundaram. A resilient convex combination for consensus-based distributed algorithms. Numerical Algebra, Control and Optimization, 9(3), 269-281, 2019.
S. Sundaram, B. Gharesifard. Distributed optimization under adversarial nodes. IEEE Transactions on Automatic Control. 64(3), 1063-1076, 2019.
A. Mitra, S. Sundaram. Byzantine-resilient distributed observers for LTI systems. Automatica, 108, 2019.
A. Mitra, J. A. Richard, S. Bagchi, S. Sundaram. Resilient distributed state estimation with mobile agents: overcoming Byzantine adversaries, communication losses, and intermittent measurements. Autonomous Robots, 43(3), 743-768, 2019.
P. C. Heredia, S. Mou. Distributed multi-agent reinforcement learning by actor-critic method. Proceedings of the 8th IFAC Workshop on Distributed Estimation and Control in Networked Systems, 2019.