Collaborative Research: Synthesis of User Interfaces for Collaborative Systems in Uncertain Environments

Project Objective:

Problems in human-automation interaction have contributed to major failures in expensive, high-risk, and safety-critical systems, including aircraft and other transportation incidents and accidents, overdoses delivered by biomedical devices, and failures in nuclear power generation. A key element of collaborative systems is the user interface, a reduced representation of the physical system and its mode logic. The user interface provides a means for the user to observe the underlying automated system and to apply inputs to it. As collaborative systems become pervasive and more complex, user interfaces will inevitably become too complex for intuitive design, and formal methods and tools to aid in user interface design will become essential. The proposed research focuses on the development of methods and tools to design correct user interfaces for complex, collaborative systems such as air traffic control and aircraft flight management systems. The user interface will be designed to be correct with respect to the actual behavior of the underlying system and robust with respect to the identification of faults. That is, we will design interfaces that are not only correct under standard operation but also effective under abnormal operating conditions, so that the pilot or air traffic controller can detect faults or other inconsistencies.

Research Objective:

The incorporation of a human in the loop introduces considerable difficulty from a control-theoretic point of view; yet effective collaboration between the human and the automation can enable performance simply not possible under fully automated or fully human control, especially when the environment is dynamic or uncertain. The main contributions of the proposed research are the development of: a) a computationally efficient formal framework and estimation technique for fault detection (for operation under abnormal conditions, e.g., mode confusion), based on both deterministic and stochastic hybrid-system models of the pilot and the automation; and b) abstractions for fault-free user interface design (which treat the user interface as a reduced representation of the actual underlying system under standard and abnormal conditions). The proposed research will develop general control algorithms and interface design aids with the potential to improve the effectiveness of high-impact collaborative systems: aircraft flight management systems, air traffic control, robot platforms for de-mining, search and rescue operations, and others.

a) Mode Confusion Detection:

As modern commercial aircraft become more complex, their operation relies increasingly on automation for improved accuracy and efficiency. However, because environments are dynamic and uncertain and high-level decision making is still required, the pilot remains either in the loop or on the loop when interacting with the automation. Developing a high-assurance pilot-automation system is vital to guarantee the safe operation of commercial aircraft. Many aviation incidents/accidents reported in the NASA Aviation Safety Reporting System (ASRS), including the recent Asiana crash in San Francisco, can be attributed to emergent pilot-automation interaction issues such as lack of mode awareness or mode confusion. Mode confusion occurs when the pilot loses situational awareness about the current and future behavior of the aircraft, resulting in a loss of shared perception with the automation. In this project, we focus on the systematic investigation of the emergent issues that arise in human-machine systems (HMS). In particular, we have proposed a formal framework for verification of flight deck mode confusion under both deterministic and probabilistic conditions of the automation and pilot. Formal verification enables systematic exploration of all possible system states and inputs to prove whether or not a risk of mode confusion exists. To facilitate this, we have studied mathematical modeling methods for the automation and pilot and proposed a computationally efficient abstraction procedure that makes verification feasible, so that safe flight operations can be ensured.

The basic idea of our work is as follows: emergent pilot-automation interaction safety issues such as mode confusion typically arise when there is a conflict between the pilot's perceptions/goals about the aircraft's expected behavior and the goal of the automation from which the aircraft's actual behavior (as designed) is generated. This is therefore a problem in the perception/intention domain. To capture this characteristic of mode confusion, an intent-based mode confusion detection framework is proposed. A flight intent is an abstract state that captures the causal relationship between the aircraft's observed behavior and the automation's or the pilot's control commands. By inferring and comparing the intents of the automation and the pilot using heterogeneous information, such as continuous flight data (e.g., altitude and speed), discrete flight data (e.g., flight modes such as VNAV and V/S), ATC advisories, and the flight plan, safety (equivalently, mode confusion detection) is addressed at a higher level (the flight intent) than the underlying flight modes and continuous states of the aircraft. This maps the infinite-dimensional continuous state space into a finite intent state space of much smaller cardinality, making efficient detection of mode confusion possible. Below we describe our proposed approach for verification of the flight deck for mode confusion detection under both deterministic and probabilistic conditions.

1st Year: Deterministic formal verification for mode confusion detection: in this research, we have proposed deterministic formal models of the automation (aircraft) and the pilot: a deterministic linear hybrid system, with interacting logical behavior (e.g., flight mode transitions) and physical behavior (e.g., changes in the aircraft's continuous state such as altitude), and a finite state machine (FSM), respectively. To address the computationally formidable challenge of verifying hybrid systems, we have proposed an intent abstraction procedure that yields an intent-based FSM model of the automation while preserving the information critical for mode confusion detection, as shown in Figure 1. The concept of intent is introduced to capture the causal relationship between the aircraft's observed behavior and the factors affecting it, i.e., to infer the reason behind the automation's or pilot's control commands, for mode confusion detection purposes. The intent set is constructed as a 3-tuple: a speed intent drawn from {Accelerate, Decelerate, Constant Speed}, a lateral intent drawn from {Turn Left, Turn Right, Constant Heading}, and a vertical intent drawn from {Climb, Descend, Constant Altitude}. Each intent is an abstraction that partitions the continuous subspace into three mutually exclusive discrete domains based on the sign of the aircraft's continuous state rates. For example, the rate of change of the aircraft's speed is abstracted into > 0 (Accelerate), < 0 (Decelerate), and = 0 (Constant Speed), and the speed intents are defined accordingly; the vertical and lateral dimensions are treated similarly. Through the composition of the three intent sets, complex aircraft behavior can be succinctly described using a set of at most 27 3-tuple intents.
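The sign-based intent abstraction described above can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation; the rate variable names, the sign convention for turns, and the zero tolerance are all illustrative assumptions.

```python
def intent_abstraction(speed_rate, heading_rate, altitude_rate, eps=0.0):
    """Map continuous state rates to a 3-tuple flight intent.

    Illustrative sketch of the sign-based abstraction: each rate is
    classified as positive, negative, or (approximately) zero. The
    tolerance `eps` and the positive-rate-means-right-turn convention
    are assumptions, not taken from the actual framework.
    """
    def sign_label(rate, pos, neg, zero):
        if rate > eps:
            return pos
        if rate < -eps:
            return neg
        return zero

    speed_intent = sign_label(speed_rate, "Accelerate", "Decelerate", "Constant Speed")
    lateral_intent = sign_label(heading_rate, "Turn Right", "Turn Left", "Constant Heading")
    vertical_intent = sign_label(altitude_rate, "Climb", "Descend", "Constant Altitude")
    return (speed_intent, lateral_intent, vertical_intent)
```

With three values per dimension, the composed intent space has at most 3 x 3 x 3 = 27 elements, regardless of the dimension of the underlying continuous state space.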
On the other hand, the pilot's decision-making behavior is modeled directly in the intent space as an intent-based FSM. Intent inference for the automation and the pilot uses a hybrid combination of information: the aircraft's continuous states (e.g., altitude, speed), flight modes (e.g., V/S, ALT HLD), automation events (e.g., the current altitude reaching the target altitude), and pilot inputs (e.g., turning down the ALT knob on the MCP). The intent-based FSMs for the automation and the pilot are then synchronously composed and formally verified for mode confusion, expressed in Action Computation Tree Logic (ACTL), using the NuSMV model checker. The proposed intent-based abstraction framework is demonstrated on two real mode confusion cases: the Kill-the-Capture incident and the Bangalore accident.
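The synchronous composition step can be illustrated with a toy event-driven model. In the actual work the composed model and an ACTL property are checked with NuSMV; the sketch below only conveys the idea that mode confusion corresponds to a reachable composed state where the two intents diverge. The FSM encoding and the event name are hypothetical.

```python
def compose_and_check(auto_fsm, pilot_fsm, auto_init, pilot_init, events):
    """Synchronously compose two intent FSMs and flag composed states in
    which the automation's and pilot's intents diverge (a mode-confusion
    witness). FSMs are dicts mapping (state, event) -> next state; a
    component stays in its current state if it does not handle an event.
    Toy sketch only; the real verification is done with NuSMV.
    """
    state = (auto_init, pilot_init)
    trace = [state]
    for e in events:
        a = auto_fsm.get((state[0], e), state[0])
        p = pilot_fsm.get((state[1], e), state[1])
        state = (a, p)
        trace.append(state)
    confusion = [s for s in trace if s[0] != s[1]]
    return trace, confusion
```

In a Kill-the-Capture-like scenario, an altitude-capture event silently changes the automation's intent while the pilot's intent model, unaware of the capture logic, keeps expecting a climb; the composition exposes the divergent state.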


<Figure 1: Schematic of the proposed abstraction for the deterministic linear hybrid model>


2nd Year: Probabilistic formal verification for mode confusion detection: building upon the above research, we have proposed an intent-based formal verification framework for flight deck mode confusion under probabilistic conditions, as shown in Figure 2; this problem has not been well studied. Formal verification requires precise mathematical models of the automation and the pilot, which in general are not straightforward to define under stochastic conditions. Our research explicitly accounts for the interacting continuous and logical dynamics of the aircraft and captures both the continuous and discrete uncertainties inherent in the automation by modeling it as a stochastic linear hybrid system (SLHS). To facilitate efficient model checking, the SLHS is abstracted through an intent abstraction (using the same flight intent set as before, but with a small positive threshold added to the inequalities to account for uncertainty when determining the sign of the continuous state rates) to obtain an intent-based Markov Decision Process (MDP) that preserves the information crucial for mode confusion detection. This abstraction is more involved than in the deterministic case because the intent transition probability matrix (which depends on the continuous and discrete uncertainties) must be computed to completely describe the evolution of the automation's intent. This is addressed by propagating the joint probability distribution of the aircraft's continuous and discrete states, subject to mode-specific continuous dynamics, under a Multiple Model propagation framework such as the Interacting Multiple Model (IMM) or the State Dependent Transition Hybrid Model (SDTHM).
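The thresholded probabilistic abstraction can be illustrated in a single dimension. Assuming the propagated estimate of a continuous state rate is Gaussian (the full framework propagates the joint continuous/discrete distribution via IMM or SDTHM), the probability mass of each intent is obtained by integrating the density over the corresponding region; all numbers below are illustrative.

```python
import math

def normal_cdf(x, mean, std):
    # Standard-normal CDF shifted and scaled, via the error function.
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def intent_probabilities(rate_mean, rate_std, eps):
    """Probability of each vertical intent given a Gaussian estimate of
    the altitude rate, with a small threshold `eps` replacing the crisp
    sign test of the deterministic abstraction. Single-dimension sketch
    only; the actual framework builds a full intent transition
    probability matrix from the propagated joint distribution.
    """
    p_descend = normal_cdf(-eps, rate_mean, rate_std)      # P(rate < -eps)
    p_climb = 1.0 - normal_cdf(eps, rate_mean, rate_std)   # P(rate > +eps)
    p_constant = 1.0 - p_descend - p_climb                 # P(|rate| <= eps)
    return {"Climb": p_climb, "Descend": p_descend, "Constant Altitude": p_constant}
```

A rate estimate well above the threshold yields a Climb probability near one, recovering the deterministic abstraction as uncertainty shrinks.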
Further, to capture both the probabilistic and nondeterministic uncertainties in the pilot's decision-making behavior, the pilot is modeled as an intent-based MDP whose intent transition probability matrix describes the evolution of the pilot's intent. This matrix captures the uncertainties in the pilot's inputs and in his/her knowledge of the automation's functioning. This is unique in the sense that most existing probabilistic methods model only the pilot's or driver's probabilistic response behavior using a Markov chain. The probabilistic uncertainties typically stem from the pilot's expertise and delayed responses, while the nondeterminism arises from control inputs influenced by environmental conditions. The pilot and automation MDPs, with flight intents as their discrete states, are then synchronously composed and formally verified for mode confusion, expressed in Probabilistic Computation Tree Logic (PCTL), using a probabilistic model checker such as PRISM. The proposed framework is demonstrated by determining, both qualitatively and quantitatively, whether the benchmark Kill-the-Capture mode confusion incident occurs with probability exceeding a set threshold under the continuous and discrete uncertainties of the automation's and pilot's behaviors. We are currently applying this probabilistic framework to several other mode confusion test cases.
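The quantitative question checked by the probabilistic model checker is, roughly, whether the probability of eventually reaching a confusion state exceeds a threshold. Once nondeterminism is resolved by a fixed policy, the composed MDP reduces to a Markov chain, and that reachability probability can be computed by fixed-point iteration. The toy chain below is an assumption for illustration; the actual verification uses PRISM on the composed pilot/automation MDP.

```python
def reach_probability(chain, start, target, iters=1000):
    """Probability of eventually reaching `target` in a finite Markov
    chain, by fixed-point iteration -- the core computation behind
    checking a PCTL reachability property such as P<p [ F confusion ]
    on a policy-fixed MDP. `chain[s]` maps successor states to
    probabilities. Toy sketch only.
    """
    prob = {s: 1.0 if s == target else 0.0 for s in chain}
    for _ in range(iters):
        nxt = {}
        for s, succ in chain.items():
            nxt[s] = 1.0 if s == target else sum(p * prob[t] for t, p in succ.items())
        prob = nxt
    return prob[start]
```

For a state that loops with probability 0.7, reaches confusion with 0.1, and reaches a safe absorbing state with 0.2, the confusion-reachability probability satisfies p = 0.7p + 0.1, i.e., p = 1/3.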


<Figure 2: Probabilistic mode confusion detection framework for feasible formal verification>


3rd Year: Learning-based approach to mode confusion detection: 

In the current year, we have investigated the same human-machine interaction mode confusion detection problem under a different set of conditions. This was motivated by the observation that obtaining detailed mathematical models of the autopilot and flight management system, with a priori knowledge of system parameters (e.g., capture rates), automation logic, and sensor noise statistics, is extremely difficult due to their proprietary nature. To relax this assumption, we have proposed a learning-based method for automation and pilot intent inference that works directly with flight data. The method was tested and demonstrated with data generated by the Multi Aircraft Control System (MACS). In this approach, the automation is modeled using a Generalized Fuzzy Hidden Markov Model (GFHMM), which requires no mathematical model information, and the hidden states (the automation intents, in our case) are inferred from the observations, as shown in Figure 3.

<Figure 3: An input-output model for the automation intent inference using GFHMM>

A GFHMM is an extension of the classical Hidden Markov Model (HMM) to the fuzzy domain. The parameters of such a model are its transition and emission probability matrices, which can be learned from flight data using a Viterbi-like algorithm. The reason to introduce fuzzy logic into the intent inference process is two-fold: 1) to map the hybrid (i.e., continuous and discrete) state and input information into discrete flight intents; and 2) to handle the fact that, in real applications, it is hard to delineate a crisp boundary between the different inferred flight intents, since they typically depend on heuristic thresholds. By working in the fuzzy domain, the vagueness of automation intents can be formally incorporated into the intent inference logic through membership functions.
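The decoding step can be conveyed with a classical Viterbi recursion in which per-step membership values play the role of emission scores. This is a simplified stand-in for the fuzzy Viterbi algorithm of the GFHMM, whose fuzzy operations differ in detail; state names and all numbers are illustrative.

```python
def viterbi(memberships, trans, init):
    """Most likely hidden intent sequence given per-step scores.

    Classical Viterbi sketch: memberships[t][s] plays the role of the
    emission score of intent s at step t (in the GFHMM these are fuzzy
    membership values rather than crisp likelihoods); trans[s][s2] is
    the intent transition probability and init[s] the initial weight.
    """
    states = list(init)
    # delta[s]: best score of any path ending in s; psi holds backpointers.
    delta = {s: init[s] * memberships[0][s] for s in states}
    psi = []
    for obs in memberships[1:]:
        step, back = {}, {}
        for s2 in states:
            best_s, best_v = max(((s, delta[s] * trans[s][s2]) for s in states),
                                 key=lambda kv: kv[1])
            step[s2] = best_v * obs[s2]
            back[s2] = best_s
        delta = step
        psi.append(back)
    # Backtrack from the best final state.
    last = max(delta, key=delta.get)
    path = [last]
    for back in reversed(psi):
        path.append(back[path[-1]])
    return list(reversed(path))
```

With sticky transitions, the decoder smooths over an ambiguous middle observation and only switches intent when the evidence is strong.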

In automation intent inference, when mapping a continuous observation (e.g., vertical speed) to the discrete intent states, the grade of certainty of each specific intent (e.g., "Climb fast") is determined by the corresponding membership function. In this work, Gaussian-shaped membership functions are used. The shape of a Gaussian membership function is determined by two parameters, its mean and covariance, whose values are estimated using a data-driven clustering technique such as fuzzy C-means clustering. After applying the membership functions in each of the three dimensions to all observations, membership value matrices are formed. These are then used to estimate the transition and emission probabilities of the GFHMM via maximum likelihood, using a fuzzy Viterbi algorithm on the flight data set. In pilot intent inference, an intent-based FSM model is used, with the pilot's thresholds learned from flight data. A statistical method sets these thresholds based on the distribution of the difference between the actual continuous state and the target state when the flight modes are in their HOLD settings (e.g., ALT HLD, HDG HLD, SPD HLD). The proposed learning-based pilot-automation interaction (PAI) issue detection algorithm has been tested on flight data generated by MACS containing a variety of pilot-automation interaction issues: the pilot may turn the MCP knob too far, with a delay, or even in the wrong direction, or may forget the details of the automation logic, as in the "Kill-the-Capture" incident.
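A Gaussian membership function and its use on a single observation can be sketched as follows. In the actual method the mean and spread of each function come from fuzzy C-means clustering of flight data; the vertical-speed value and the cluster parameters below are purely illustrative assumptions.

```python
import math

def gaussian_membership(x, mean, sigma):
    """Grade of membership of observation x in a fuzzy intent set,
    using a Gaussian-shaped membership function (peak value 1 at the
    mean). The parameters would be estimated by data-driven clustering
    such as fuzzy C-means; the numbers used here are illustrative.
    """
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Membership grades of a vertical-speed observation in hypothetical
# intent sets (vs in ft/min; means and spreads are assumed values).
vs = 1500.0
grades = {
    "Climb": gaussian_membership(vs, 1800.0, 600.0),
    "Constant Altitude": gaussian_membership(vs, 0.0, 200.0),
    "Descend": gaussian_membership(vs, -1800.0, 600.0),
}
```

Unlike a crisp threshold, the observation receives a nonzero grade in every intent set, and the grades (rather than a single hard label) feed the membership value matrices used to train the GFHMM.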

The next step in our research is to integrate the proposed intent-based pilot-automation interaction issue detection method with the user interface (UI) model toward adaptive UI design, in order to minimize the probability of potential mode confusion instances and provide improved situational awareness to the pilot. In this way, the safety of complex, collaborative systems such as aircraft can be greatly improved.


References:

1. J.S. Nandiganahalli, S. Lee, and I. Hwang, "Intent-based Abstraction for Formal Verification of Flight Deck Mode Confusion," AIAA SciTech 2016: AIAA Infotech@Aerospace, San Diego, CA, January 2016 (Best Student Paper Award for "Intelligent Systems: Human-Machine Interaction" Student Paper Competition).

2. J.S. Nandiganahalli, S. Lee, and I. Hwang, "Formal Verification for Mode Confusion in the Flight Deck Using Intent-Based Abstraction," Journal of Aerospace Information Systems (JAIS), Vol. 13, No. 9 (2016), pp. 343-356.

3. J.S. Nandiganahalli, S. Lee, and I. Hwang, "Flight Deck Mode Confusion Detection using Intent-based Probabilistic Model Checking," AIAA SciTech 2017: AIAA Infotech@Aerospace, Grapevine, TX, January 2017 (Finalist for the Best Student Paper Award for "Intelligent Systems: Human-Machine Interaction" Student Paper Competition - 3rd place).

4. H. Lyu, J. S. Nandiganahalli, and I. Hwang, "Human Automation Interaction Issue Detection Using a Generalized Fuzzy Hidden Markov Model," AIAA SciTech 2017: AIAA Infotech@Aerospace, Grapevine, TX, January 2017 (Finalist for the Best Student Paper Award for "Intelligent Systems: Human-Machine Interaction" Student Paper Competition - 2nd place).

5. H. Lyu, J. S. Nandiganahalli, and I. Hwang, "Intent-based Learning Method for Flight Deck Human-Automation Interaction Issue Detection," Accepted to the Journal of Aerospace Information Systems (JAIS), 2017.