Resilience and Learning for Large-Scale Multi-Agent Autonomy

Interdisciplinary Areas: Data and Engineering Applications, Autonomous and Connected Systems, Security and Privacy

Project Description

Multi-Agent Systems (MAS) are collections of autonomous agents, where each agent has its own sensing, computation, and decision-making capabilities. By working as a cohesive whole, a large-scale MAS can achieve complex missions well beyond the capabilities of any individual system, such as exploring unknown environments, monitoring a large city, or conducting search and rescue. In contrast to systems with a centralized coordinator, large-scale MAS achieve global objectives through local coordination among neighboring agents. On one hand, MAS equipped with such distributed algorithms are inherently robust to individual agent or link failures; on the other hand, the dependence on local coordination also raises the possibility that the entire system can be compromised by cyber-attacks on one or more vulnerable agents, especially in adversarial environments.
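As a concrete illustration of the kind of local-coordination rule at issue, the sketch below implements a Weighted Mean-Subsequence-Reduced (W-MSR) style consensus update from the resilient-consensus literature: each normal agent discards the f most extreme neighbor values above and below its own state before averaging. The network (a complete graph), the initial values, and the Byzantine broadcast value are illustrative choices for this toy simulation, not details taken from the project description.

```python
def wmsr_step(own_value, neighbor_values, f):
    """One W-MSR update: ignore up to f neighbor values above own_value
    and up to f below it, then average what remains with own_value."""
    above = sorted((v for v in neighbor_values if v > own_value), reverse=True)
    below = sorted(v for v in neighbor_values if v < own_value)
    equal = [v for v in neighbor_values if v == own_value]
    kept = above[f:] + below[f:] + equal  # drop the f most extreme per side
    vals = kept + [own_value]
    return sum(vals) / len(vals)

def simulate(normal_init, byzantine_value=100.0, f=1, rounds=50):
    """Complete graph: each normal agent hears every other normal agent,
    plus one Byzantine agent that always broadcasts byzantine_value."""
    vals = list(normal_init)
    for _ in range(rounds):
        vals = [wmsr_step(vals[i],
                          [vals[j] for j in range(len(vals)) if j != i]
                          + [byzantine_value], f)
                for i in range(len(vals))]
    return vals
```

Running simulate([1.0, 2.0, 3.0, 4.0, 5.0]) drives the normal agents to agreement on a value inside the range of their initial states, whereas a naive average at every round would be dragged toward the Byzantine value of 100.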
The objective of this project is to establish cutting-edge algorithms and design principles for resilient MAS that can anticipate and react to internal and external threats in real time in order to accomplish the specified mission. In particular, a core component of the research will focus on the use of learning, which enables the system to adapt its actions to environmental and adversarial conditions and to improve its performance over time.
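One simple mechanism by which multi-agent learning can be made robust to adversarial participants is resilient aggregation; the sketch below uses a coordinate-wise trimmed mean to combine gradient reports in a toy distributed optimization problem. All specifics here (the quadratic objective, agent counts, step size, and adversarial report) are hypothetical illustrations, not details of the project.

```python
import numpy as np

def trimmed_mean(reports, f):
    """Coordinate-wise trimmed mean: sort the reported values per
    coordinate, drop the f largest and f smallest, and average the rest."""
    arr = np.sort(np.asarray(reports, dtype=float), axis=0)
    return arr[f:len(reports) - f].mean(axis=0)

def resilient_descent(x0=0.0, steps=200, lr=0.1, f=1):
    """Toy problem: five honest agents report the exact gradient of
    (x - 3)^2, while one compromised agent always reports 1000."""
    x = x0
    for _ in range(steps):
        reports = [2.0 * (x - 3.0)] * 5 + [1000.0]  # last report is adversarial
        x -= lr * trimmed_mean(reports, f)
    return x
```

With the adversarial report trimmed away, the iterate converges to the minimizer x = 3; a plain average of the six reports would instead be dragged far from the minimizer by the compromised agent.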
Start Date
Postdoc Qualifications
Solid mathematical skills and a background in relevant areas such as networks, control, optimization, or learning.
Passion for and interest in solving challenging research problems using methodologies from different areas.
Strong communication and writing skills.
Ability to thrive in a collaborative environment.
Shaoshuai Mou, School of Aeronautics and Astronautics
Shreyas Sundaram, School of Electrical and Computer Engineering
Co-Directors, Center for Innovation in Control, Optimization, and Networks (ICON)
*Y. Xie, S. Mou, and S. Sundaram. Towards Resilience for Multi-Agent QD Learning. arXiv preprint, 2021.
*X. Wang, S. Mou, and S. Sundaram. A Resilient Convex Combination for Consensus-Based Distributed Algorithms. Numerical Algebra, Control and Optimization, 9(3), 269-281, 2019.
*S. Sundaram and B. Gharesifard. Distributed Optimization Under Adversarial Nodes. IEEE Transactions on Automatic Control, 64(3), 1063-1076, 2019.
*A. Mitra, J. A. Richards, S. Bagchi, and S. Sundaram. Resilient Distributed State Estimation with Mobile Agents: Overcoming Byzantine Adversaries, Communication Losses, and Intermittent Measurements. Autonomous Robots, 43(3), 743-768, 2019.
*P. C. Heredia and S. Mou. Distributed Multi-Agent Reinforcement Learning by Actor-Critic Method. Proceedings of the 8th IFAC Workshop on Distributed Estimation and Control in Networked Systems, 2019.