Research

Multi-Agent Control, Optimization and Learning

By working as a cohesive whole, multiple autonomous agents usually offer greater autonomy in complicated missions such as search and rescue, exploration of unknown environments, and large-area monitoring, which are well beyond the capability of individual autonomous systems. The main challenge in coordinating multi-agent systems comes from the absence of a global controller. This gives rise to distributed algorithms, which achieve global objectives through only local coordination among nearby neighboring agents. Along this direction, we develop:

  • Distributed algorithms for multi-agent formation control, including rigid formations (undirected and directed) and flexible formations, in 2D and 3D, with global stability guarantees.
  • Distributed algorithms for solving linear equations (i.e., linear regression), including least-squares solutions and minimum-L1-norm solutions (see the sketch after this list).
  • Distributed algorithms for consensus-based multi-agent optimization with exponential stability.
  • Distributed algorithms for task allocation in complex environments with unknowns and dynamics.
  • Methods for convergence analysis of multi-agent actor-critic reinforcement learning, multi-agent Q-learning, and multi-agent TD learning.
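
To make the second item concrete, the following is a minimal sketch of a consensus-plus-projection scheme for solving Ax = b when each agent knows only one block (A_i, b_i) of the equation: each agent keeps its state consistent with its own block while projecting the disagreement with its neighbors onto the kernel of A_i. The network, step rule, and variable names here are illustrative assumptions, not the exact algorithms in our papers.

    # Minimal sketch (illustrative, not our exact published algorithm):
    # distributed solving of A x = b over an undirected network, where agent i
    # knows only its block (A_i, b_i) and communicates with its neighbors.
    # Each update stays on {x : A_i x = b_i} and projects the consensus
    # correction onto ker(A_i), so all agents converge to a common solution
    # whenever the overall equation is solvable.
    import numpy as np

    def kernel_projector(A_i):
        """Orthogonal projector onto the null space of A_i."""
        return np.eye(A_i.shape[1]) - np.linalg.pinv(A_i) @ A_i

    def distributed_solve(A_blocks, b_blocks, neighbors, iters=2000):
        # Initialize each agent with a solution of its own block A_i x = b_i.
        x = [np.linalg.pinv(A) @ b for A, b in zip(A_blocks, b_blocks)]
        P = [kernel_projector(A) for A in A_blocks]
        for _ in range(iters):
            x_next = []
            for i in range(len(x)):
                avg = np.mean([x[j] for j in neighbors[i]] + [x[i]], axis=0)
                # Move toward the neighborhood average without leaving A_i x = b_i.
                x_next.append(x[i] - P[i] @ (x[i] - avg))
            x = x_next
        return x

    # Toy example: 3 agents on a line graph, each holding one row of a 3x3 system.
    A = np.array([[2., 1., 0.], [0., 1., 1.], [1., 0., 3.]])
    b = A @ np.array([1., -2., 0.5])          # so the true solution is known
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    sols = distributed_solve([A[i:i+1] for i in range(3)],
                             [b[i:i+1] for i in range(3)], neighbors)
    print(np.round(sols[0], 4))               # each agent reaches [1, -2, 0.5]

Because A_i P_i = 0, the constraint A_i x_i = b_i is preserved at every step, and on a connected graph the agents reach agreement on a common solution.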

Keywords: Distributed Optimization; Multi-Agent Reinforcement Learning; Distributed Task Allocation.

Human-Autonomy Teaming

Although autonomous systems are able to learn, make decisions, and act intelligently, human expertise is often valuable in complicated scenarios, and learning from human demonstrations is also an efficient way to program autonomy. The challenges for human-autonomy teaming come from the fact that human inputs are usually sparse, vague, and much slower than those of autonomous systems. In this direction, we develop:

  • Methods based on inverse optimal control (IOC) to learn objective functions from limited human data; combining the learned objectives with optimal control yields an effective predictor of human motion, which matches real human motion data closely (see the sketch after this list).
  • An end-to-end learning framework for autonomy to learn from sparse human demonstrations, validated by guiding a quadrotor through two windows based on sparse waypoints from a human operator.
  • Algorithms for autonomy to interact with humans, in which the human incrementally gives directional corrections only when the autonomy's behavior is unsatisfactory.
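
As a stylized illustration of the IOC idea in the first item (our published methods differ in their details), the sketch below exploits the fact that if a demonstrated action minimizes a weighted sum of known features, the stationarity condition is linear in the unknown weights, so the weights can be recovered, up to scale, from demonstration data via the smallest singular vector. The features, target function, and data here are all hypothetical.

    # Minimal IOC sketch (illustrative, not our exact published method): recover
    # the unknown weights w of a cost J(x, u) = w1*phi1(x, u) + w2*phi2(x, u)
    # from demonstrations, using the stationarity condition grad_u J = 0,
    # which is linear in w.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical features: control effort phi1 = u^2 and tracking error
    # phi2 = (u - target(x))^2, so grad_u = [2u, 2(u - target(x))].
    def target(x):
        return -0.8 * x                     # hypothetical regulation target

    w_true = np.array([1.0, 3.0])           # unknown to the learner

    # "Human demonstrations": the u that minimizes w1*u^2 + w2*(u - target)^2.
    xs = rng.uniform(-2, 2, size=50)
    us = w_true[1] * target(xs) / (w_true[0] + w_true[1])

    # One stationarity row per sample: [2u, 2(u - target(x))] @ w = 0.
    G = np.column_stack([2 * us, 2 * (us - target(xs))])

    # w lies in the null space of G: take the right singular vector that
    # belongs to the smallest singular value.
    w_hat = np.linalg.svd(G)[2][-1]
    w_hat = w_hat / w_hat[0] * w_true[0]    # fix scale and sign for comparison
    print(np.round(w_hat, 3))               # recovers [1.0, 3.0] up to scale

The same principle of fitting weights to the residuals of optimality conditions extends, in principle, to multi-step, constrained optimal control through the KKT conditions.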

Keywords: Inverse Optimal Control, Inverse Reinforcement Learning, Sparse Human Inputs, Human-Machine Interactions.

Resilient Autonomy in Multi-Agent Swarms

Multi-agent swarms equipped with distributed algorithms are inherently robust against individual agent or link failures. However, the dependence on local coordination among nearby neighbors also makes a swarm vulnerable to even a single malicious agent. This has motivated us to pursue resilience, which guarantees system performance even in hostile environments. The challenges for resilience in multi-agent swarms come from the fact that each agent is low-cost, with limited onboard sensing, communication, and processing capability, while cyber-attacks could be intelligent and launched simultaneously at multiple locations. Along this direction, we develop:

  • Systematic approaches to achieve automated resilience against Byzantine attacks without identifying or isolating the attackers, especially for consensus-based distributed algorithms (see the sketch after this list).
  • Methods to achieve resilience for multi-agent QD-learning.
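
As an illustration of the first item, the sketch below implements the W-MSR trimming rule, a standard baseline from the resilient-consensus literature (shown here for illustration; not necessarily our exact approach): each normal agent discards the F largest and F smallest neighbor values relative to its own state before averaging, which tolerates up to F Byzantine neighbors on sufficiently robust graphs without ever identifying or isolating the attackers.

    # Sketch of the W-MSR trimming rule, a standard baseline from the
    # resilient-consensus literature (for illustration; not necessarily our
    # exact approach). Each normal agent drops up to F neighbor values strictly
    # above its own state (the largest ones) and up to F strictly below (the
    # smallest ones), then averages the rest together with its own state.
    # No attacker is identified or isolated.
    import numpy as np

    def wmsr_step(x, neighbors, byzantine, F=1):
        x_new = x.copy()
        for i in range(len(x)):
            if i in byzantine:
                continue                    # attacker values set by the adversary
            vals = [x[j] for j in neighbors[i]]
            higher = sorted(v for v in vals if v > x[i])
            lower = sorted(v for v in vals if v < x[i])
            kept = (higher[:-F] if len(higher) > F else []) \
                 + (lower[F:] if len(lower) > F else []) \
                 + [v for v in vals if v == x[i]]
            x_new[i] = np.mean(kept + [x[i]])
        return x_new

    # Complete graph on 5 agents (3-robust, hence tolerant to F = 1 Byzantine).
    n, byzantine = 5, {4}
    neighbors = {i: [j for j in range(n) if j != i] for i in range(n)}
    x = np.array([1.0, 2.0, 3.0, 4.0, 0.0])
    for t in range(100):
        x = wmsr_step(x, neighbors, byzantine, F=1)
        x[4] = 100.0 * (-1) ** t            # attacker broadcasts wild values
    print(np.round(x[:4], 3))               # normal agents agree, inside [1, 4]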

Experimental Research in Multi-Vehicle Coordination

We have also been interested in implementing advanced control algorithms in multi-vehicle coordination experiments.
