Safe and Resilient Teaming with Human-Aware Autonomy
Apollo 11 Postdoctoral Fellowships at Purdue - Proposal
Inseok Hwang, Professor
Research Goals
Well-designed Human-Autonomy Teaming (HAT) enables effective collaboration between humans and AI-driven autonomy, as opposed to merely allocating tasks according to each entity's capabilities. In such teaming, autonomy participates in a given task as a peer to the human, allowing the team to achieve better performance than either humans or autonomy alone. Despite recent advancements in AI, humans still play an important role in HAT: their superior cognitive abilities can guide autonomy so that the human-autonomy team can perform the given task beyond its designed operating boundaries. Thanks to these advantages, HAT has gained significant attention recently, especially in Advanced Air Mobility (AAM), Urban Air Mobility (UAM), and UAS Traffic Management (UTM) for civil applications and Manned-Unmanned Teaming (MUM-T) for military applications.
The major challenges in designing HAT stem from 1) the complicated interactions between humans and autonomy, which are shaped by intricate human cognitive behaviors, and 2) safe and resilient AI decision-making that accounts for teamwork. For effective HAT, autonomy needs to understand human behavior and infer cognitive states so that it can cooperate closely with human teammates; that is, one needs a human-aware HAT. From a safety perspective, autonomy should also be able to distinguish the different semantic contexts of human behavior depending on the underlying cognitive state. For example, a human might perform a seemingly dangerous action to avoid a greater threat, supported by emotional stability and logical reasoning; conversely, behavior that appears reasonable might result from distraction or error. The same issue arises in the opposite direction: humans must be able to interpret autonomy, and low interpretability and unexpected behavior (e.g., hallucination) can erode trust, preventing effective collaboration. Improving the generalizability of AI is another important factor for HAT. AI should respond to human demands adaptively and flexibly by leveraging its learning capability; in other words, it must be able to account for complex or unknown system dynamics in unseen situations quickly and safely, in accordance with human intent. Several existing approaches to achieving such adaptability, such as knowledge transfer, transfer learning, safe reinforcement learning, and few-shot learning, represent crucial directions for advancing HAT research.
Motivated by the above discussion, this project aims to develop a safe and resilient human-aware HAT framework that combines control-theoretic and AI approaches, thereby fully leveraging the theoretical rigor of control theory as well as the uncertainty resilience and learning capability of AI. The proposed human-aware HAT is expected to enhance team performance and safety by accurately interpreting human intent, inferring cognitive states, and responding accordingly, rather than merely following human commands. Furthermore, human-aware autonomy can share its internal mental model with its human teammates, increasing situation awareness and transparency across the team. The project consists of, but is not limited to, the following three thrusts:
- Human-aware autonomy teammate: We aim to develop autonomy that can accurately account for human decision-making processes and control policies as they are affected by human cognitive states. This will be achieved by constructing a reliable, accurate data-driven model from physiological sensor measurements, which will help identify correlations between human decision-making and cognitive states. The resulting model will then be integrated into the HAT framework, enabling the autonomy counterpart to collaborate effectively with humans by understanding and predicting their behavior (a minimal sketch of such a cognitive-state model follows this list).
- Large Language Model (LLM)-based transparent interaction: LLMs excel at Natural Language Processing (NLP) and have therefore been applied to human-robot communication. However, despite their rapidly improving performance, formally verifiable safety assurance of LLM behavior remains a crucial challenge, especially for safety-critical aerospace systems. The objective of thrust 2 is to establish an architecture that leverages LLMs for transparent interaction in HAT. The postdoctoral fellow is expected to contribute to developing LLM models (or algorithms) that provide formal behavioral guarantees on an LLM's interaction with human counterparts, along with a confidence measure in its own outputs (an illustrative confidence-gated interface is sketched after this list). Consequently, thrust 2 will improve communication transparency by explicitly sharing the risks associated with LLM interactions, thereby preserving trust in autonomy as a teammate in HAT missions.
- Safe and resilient decision-making of AI: Building on thrusts 1 and 2, thrust 3 aims to design a novel architecture for human-aware autonomy that communicates transparently with humans via LLMs and understands humans through the data-driven models from thrust 1, thereby truly acting as a human teammate with a safe and resilient decision-making process (a toy runtime safety filter is sketched below). The architecture also aims to improve generalizability, enabling the system to adapt effectively to unseen or new environments while accounting for human counterparts and their interactions. Accordingly, the developed HAT framework would be resilient to unexpected situations and diverse human demands, realizing genuine teamwork with humans rather than merely functioning as automation.
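For thrust 1, the following is a minimal sketch of how a data-driven cognitive-state model might condition a model of the human's control policy. The feature set (heart rate, pupil diameter, EEG band power), the binary workload label, the synthetic data, and the two policy names are all hypothetical placeholders, not the thrust's actual design; the point is only the pattern of inferring a cognitive state from physiological windows and switching behavior models accordingly.

```python
# Hypothetical sketch: infer a coarse cognitive state (workload level) from
# windowed physiological features, then condition a model of the human's
# control policy on that state. Features, labels, and data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per time window: [mean heart rate, pupil diameter,
# EEG alpha-band power]; label: 0 = nominal, 1 = high workload.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=200) > 0).astype(int)

state_model = make_pipeline(StandardScaler(), LogisticRegression())
state_model.fit(X, y)

def predict_human_behavior_model(phys_window):
    """Toy switch: the inferred cognitive state selects which (hypothetical)
    behavior model the autonomy uses to predict the human's actions."""
    p_high = state_model.predict_proba(phys_window.reshape(1, -1))[0, 1]
    if p_high > 0.5:
        return "high_workload_policy", p_high  # expect simplified, slower responses
    return "nominal_policy", p_high

print(predict_human_behavior_model(rng.normal(size=3)))
```

In a real system, the classifier would be trained on labeled human-subject data and the downstream behavior models identified from recorded control traces; the sketch only fixes the interface between the two.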
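For thrust 2, one plausible realization of a confidence-gated, transparent LLM interface is sketched below: the LLM's candidate message is relayed to the human only if it parses into a pre-verified message schema and its confidence clears a threshold; otherwise the system abstains and says so. The message schema, intent vocabulary, threshold, and `llm_query` stub are assumptions for illustration, not the proposal's verified architecture.

```python
# Illustrative sketch of a confidence-gated LLM interface for HAT.
# Schema, intents, threshold, and the llm_query stub are hypothetical.
from dataclasses import dataclass

ALLOWED_INTENTS = {"status_report", "handoff_request", "hazard_alert"}

@dataclass
class TeamMessage:
    intent: str        # must be in ALLOWED_INTENTS (mechanically checkable)
    text: str          # free-form explanation shown to the human teammate
    confidence: float  # model's confidence in [0, 1]

def llm_query(prompt: str) -> TeamMessage:
    # Placeholder for a real LLM call returning a structured candidate.
    return TeamMessage(intent="hazard_alert",
                       text="Possible conflict with traffic at 2 o'clock.",
                       confidence=0.82)

def gated_interaction(prompt: str, tau: float = 0.7) -> TeamMessage:
    msg = llm_query(prompt)
    schema_ok = msg.intent in ALLOWED_INTENTS and 0.0 <= msg.confidence <= 1.0
    if schema_ok and msg.confidence >= tau:
        return msg  # relay, exposing the confidence measure to the human
    # Transparent fallback: report abstention instead of an unverified claim.
    return TeamMessage(intent="status_report",
                       text="Low confidence; requesting human confirmation.",
                       confidence=msg.confidence)

print(gated_interaction("Summarize current airspace risks."))
```

Restricting outputs to a finite intent vocabulary is what makes the formal-guarantee part tractable: the verifiable claim attaches to the structured channel, while the free-form text remains advisory.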
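For thrust 3, a toy runtime safety filter (a "shield") illustrates one standard way to keep a learned decision-maker inside a safe envelope: the learned policy proposes an action, and a simple model-based check overrides it whenever the one-step prediction would violate a safety invariant. The double-integrator model, bounds, and braking backup are hypothetical stand-ins for the actual human-aware architecture.

```python
# Toy shield around a learned policy: override any action whose one-step
# prediction leaves the safe set. Dynamics and bounds are hypothetical.
import numpy as np

DT, POS_LIMIT, U_MAX = 0.1, 1.0, 2.0

def learned_policy(state):
    # Placeholder for a learned (e.g., RL) policy; here: push right at full force.
    return U_MAX

def step(state, u):
    pos, vel = state
    return np.array([pos + DT * vel, vel + DT * u])

def safe(state):
    pos, vel = state
    # Invariant: inside bounds, and able to brake before reaching the boundary.
    return abs(pos) <= POS_LIMIT and abs(pos + vel * abs(vel) / (2 * U_MAX)) <= POS_LIMIT

def shielded_action(state):
    u = learned_policy(state)
    if safe(step(state, u)):
        return u, False
    # Fall back to maximal braking toward zero velocity (assumed safe backup).
    u_safe = -np.sign(state[1]) * U_MAX if state[1] != 0 else 0.0
    return u_safe, True

state = np.array([0.0, 0.0])
for _ in range(50):
    u, overridden = shielded_action(state)
    state = step(state, u)
print(f"final state: {state}, last action overridden: {overridden}")
```

In the proposed framework, the safe set and backup action would additionally depend on the inferred human state from thrust 1 and the communicated intent from thrust 2, rather than on a fixed kinematic bound.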
Our lab has been dedicated to High Assurance Autonomy for Cyber-Physical Systems (CPS), with applications to safety-critical aerospace systems. I plan to expand this research portfolio to Cyber-Physical-Human Systems (CPHS) by integrating control theory, human cognitive engineering, and AI, leading to safe and resilient human-aware HAT. The postdoctoral fellow is expected to develop formally verifiable, safety-assured AI for a safe and resilient HAT framework that understands humans more deeply than ever before. This will yield novel research outcomes with potential applications not only to AAM/UAM/UAS but also to the military and space domains, enabling safer and more effective HAT. Furthermore, such contributions would generalize to related fields such as autonomous cars and industrial robots, spreading the benefits and impact of human-aware HAT across broader communities.
Expected Deliverables
The postdoctoral fellow is expected to develop a unique, cutting-edge research capability in safe and resilient human-aware HAT that contributes to effective teamwork between humans and autonomy. Based on this, the postdoctoral fellow is expected to produce major publications in related top-tier journals, such as the AIAA Journal of Guidance, Control, and Dynamics (JGCD) and IEEE Transactions on Aerospace and Electronic Systems (TAES) in aerospace engineering, as well as IEEE Transactions on Robotics (T-RO) and IEEE Robotics and Automation Letters (RA-L) in robotics.
Additionally, the postdoctoral fellow is expected to present at conferences, such as AIAA SciTech, AIAA Aviation, AIAA/IEEE Digital Avionics Systems Conference (DASC), IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Conference on Neural Information Processing Systems (NeurIPS), and International Conference on Machine Learning (ICML).
The postdoctoral fellow is also expected to develop a numerical simulation platform, which could potentially be extended to human-subject experiments, along with open-source software that helps other colleagues contribute to related research. Such contributions could facilitate new funding opportunities in the emerging, interdisciplinary research area of human-aware HAT. In addition, the postdoctoral fellow is expected to contribute to the development of new research proposals.
Affiliated Faculty
Paul Stanley Professor Inseok Hwang
School of Aeronautics and Astronautics, Purdue University