Personalized, Learning-Enabled Autonomy for Human-Robot Collaboration

Interdisciplinary Areas: Autonomous and Connected Systems, Human-Machine/Computer Interaction, Human Factors, Human-Centered Design

Project Description

Rapid advances in AI algorithms and technologies are creating immense opportunities for robots, and autonomous agents more broadly, to interact and collaborate with humans in a variety of applications, including healthcare, transportation, and disaster response. Yet current learning-based algorithms for sequential decision-making often require vast amounts of data and long training times to learn even a single task. Having humans in the loop, interacting with the agent at runtime with unknown preferences and behavioral patterns, exacerbates the challenges of data inefficiency and poor generalizability while underscoring the need for performance guarantees.

To tackle these challenges, this project aims to develop personalized, learning-enabled, adaptive sequential decision-making algorithms that account for diverse human preferences and adapt to them online in human-robot collaborative settings. The algorithms will leverage online reinforcement learning, informed by human factors, to continually learn a model of human behavior and adapt to it. Their theoretical and empirical performance will be rigorously analyzed under various conditions and across different robotics settings.
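As a concrete illustration of the online-adaptation idea, the sketch below maintains a Bayesian posterior over a small set of candidate human preference vectors and updates it from observed human choices under a Boltzmann-rational choice model; this is a minimal toy example, and the feature matrix, candidate set, and rationality coefficient are illustrative assumptions rather than part of the project plan.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: the human's utility over 3 joint actions is linear in an
# unknown preference vector theta. The robot keeps a posterior over a discrete
# grid of candidate thetas and updates it online from the human's choices.
features = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])  # per-action features
candidates = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.8, 0.2]])
posterior = np.full(len(candidates), 1.0 / len(candidates))
true_theta = np.array([0.8, 0.2])  # ground truth, unknown to the robot
beta = 5.0  # rationality coefficient of the Boltzmann choice model

def boltzmann(theta):
    """Choice distribution of a Boltzmann-rational human with preferences theta."""
    u = beta * features @ theta
    p = np.exp(u - u.max())
    return p / p.sum()

for _ in range(50):
    # Observe one human choice, sampled from the true preference model.
    choice = rng.choice(len(features), p=boltzmann(true_theta))
    # Bayesian update: reweight each candidate by the likelihood of that choice.
    posterior *= np.array([boltzmann(th)[choice] for th in candidates])
    posterior /= posterior.sum()

# The robot personalizes by acting under the posterior-mean preference estimate.
theta_hat = posterior @ candidates
best_action = int(np.argmax(features @ theta_hat))
```

After a few dozen observed choices, the posterior concentrates near the true preferences, so the robot's chosen action aligns with what the human actually values; richer versions of this loop (continuous preference spaces, learned behavioral models, regret guarantees) are what the project will develop.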

Start Date

Spring/Summer/Fall 2025 

Post-doc Qualifications

- Deep interest and strong background in reinforcement learning and robotics
- Familiarity with game theory, online learning, and human factors
- Ph.D. degree in Electrical and Computer Engineering, Computer Science, or related fields 

Co-Advisors

Mahsa Ghasemi, School of Electrical and Computer Engineering, mahsa@purdue.edu, https://mahsaghasemi.github.io/

Aniket Bera, Department of Computer Science, aniketbera@purdue.edu, https://www.cs.purdue.edu/homes/ab/

Bibliography

- C. Wirth, R. Akrour, G. Neumann, and J. Fürnkranz, "A Survey of Preference-Based Reinforcement Learning Methods," Journal of Machine Learning Research (JMLR), 2017.

- P. Jutras-Dubé, R. Zhang, and A. Bera, "Adaptive Planning with Generative Models under Uncertainty," International Conference on Intelligent Robots and Systems (IROS), 2024.

- M. Ahmed and M. Ghasemi, "Privacy-Preserving Decentralized Actor-Critic for Cooperative Multi-Agent Reinforcement Learning," International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.

- R. Bhaskara, H. Viswanath, and A. Bera, "Trajectory Prediction for Robot Navigation using Flow-Guided Markov Neural Operator," IEEE International Conference on Robotics and Automation (ICRA), 2024.

- B. He, M. Ghasemi, U. Topcu, and L. Sentis, "A Barrier Pair Method for Safe Human-Robot Shared Autonomy," IEEE Conference on Decision and Control (CDC), 2021.