
How To Make Robots Behave More Like People — And Why It Matters


Last year, a friend of mine invited me to take his Tesla for a spin to try its self-driving features, called Autopilot. My friend knew I research the ways that humans come to trust — and distrust — automation. Of course, I said yes!

As we navigated on Autopilot, I quickly caught on to the Tesla’s ability to recognize a stopped car ahead of us and began trusting it to brake on its own. I trusted it so much that, at one point, my friend suddenly had to alert me to stop as we approached a red light. The Tesla would not automatically brake in this situation, he explained. The car could detect a physical object in front of it but was not programmed to stop at red lights.

I didn’t know the car’s limitations until then. And the car had no way of “knowing” what I was thinking or realizing that I wasn’t preparing to stop. Had my friend not been there to explain the automation, I very likely would have gotten into an accident — in a car designed to keep me safer. Fortunately, Tesla released its “Traffic Light and Stop Sign Control” feature earlier this year to address this gap.

Experiences like these are a humbling reminder that even though it seems we’re on the cusp of a fully automated, robot-driven future, we still have a considerable way to go. My incident took place behind a steering wheel, but this kind of scenario can also happen on a factory floor, at home, in a hospital, or anywhere else where humans increasingly rely on automated systems. For engineers, it certainly might be simpler to not have to factor in the complexities of human behavior when designing autonomous systems. However, for automation to successfully interact with humans, ignorance is not bliss; both humans and autonomous systems need to develop accurate “mental models” of how the other operates.

Creating a Mental Model

Humans instinctively create mental models for everything and everyone we interact with. When we drive a car, for instance, we develop a feel for how much we have to turn the wheel to get around a corner or how much weight we need to put on the accelerator to get up to a particular speed. When we collaborate with a teammate, we assess what motivates them, how they like to get things done and when to step in or back off. Importantly, we’re also naturally inclined to constantly tweak these models as we gain more experience with another person, situation or system. Our mental models may not always be accurate, but they are always there, informing our decisions and actions.

While our human teammates learn about us too, autonomous cars, robots and other systems typically don’t have the capacity to “sense” how we operate, or determine what a person may be thinking or how they may respond, let alone discern the individual tendencies of different people. But to effectively work with humans and keep them engaged, robots and other autonomous systems do need to have their own kind of mental model. Without one, even someone like me — a mechanical engineer who designs autonomous systems — may expect a machine to behave in a way that is actually beyond its capability, with little to no recourse to correct what’s gone wrong. Flashing lights, loud sounds and strong vibrations — such as on the dashboard or steering wheel — can reengage someone, but they still don’t cater to specific users or capitalize on their individual strengths, abilities or knowledge.

Learning from Human Teammates

Constructive human-human partnerships are often characterized by one or both individuals exhibiting a high level of emotional intelligence, or, in simpler terms, being a “people person.” A people person is good at connecting with and interpreting those they interact with; as a result, they communicate and collaborate more effectively and can get the most out of the people around them. So, autonomous systems need their own version of emotional intelligence to do the same for the humans they interact with. This may include, for example, discerning behaviors such as overtrust and disengagement and then taking action to reengage humans, or sensing fear and anxiety and taking action to reassure them. There is no one-size-fits-all solution with humans, and our newer technology needs to reflect that.
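
To make the idea concrete, here is a minimal sketch, in Python, of what a machine’s “mental model” of its operator might look like in software: the system keeps a running, per-driver estimate of engagement and escalates its response as that estimate drops. Every signal, threshold and action name here is a hypothetical illustration of the concept, not a description of any real product’s logic.

```python
# Hypothetical sketch: a simple per-driver engagement model that escalates
# interventions as estimated attention drops. Signals, thresholds and actions
# are illustrative assumptions, not any vendor's actual implementation.

from dataclasses import dataclass

@dataclass
class DriverModel:
    """The system's running estimate of one specific driver."""
    engagement: float = 1.0         # 1.0 = fully attentive, 0.0 = disengaged
    alert_sensitivity: float = 0.5  # learned per driver: how easily they re-engage

    def update(self, eyes_on_road: bool, hands_on_wheel: bool) -> None:
        # Decay engagement when attention cues are absent; recover it when present.
        signal = 0.5 * eyes_on_road + 0.5 * hands_on_wheel
        self.engagement = 0.8 * self.engagement + 0.2 * signal

    def choose_action(self) -> str:
        # Escalate interventions as the estimated engagement drops.
        if self.engagement > 0.7:
            return "none"
        if self.engagement > 0.4:
            return "visual_alert"           # e.g., dashboard icon
        if self.engagement > 0.4 * self.alert_sensitivity:
            return "audio_alert"
        return "request_takeover"           # hand control back to the human

# Example: a driver who gradually stops watching the road.
model = DriverModel()
for eyes, hands in [(True, True), (True, False), (False, False), (False, False)]:
    model.update(eyes_on_road=eyes, hands_on_wheel=hands)
    print(f"engagement={model.engagement:.2f} -> action={model.choose_action()}")
```

Even a toy model like this captures the key point of the paragraph above: the system maintains an explicit, individual estimate of the human it is working with and tailors its behavior accordingly, rather than firing the same generic alert at everyone.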

Researchers in the broader fields of human-robot and human-machine interaction are studying many facets of this problem and have developed some possible solutions — such as reading human emotions based on tone of voice or facial expression — but much remains to be done. If innovators don’t properly account for this important need in automation, the consequences can be detrimental not only to human productivity, but also to human safety and purpose. No matter how autonomous a given technology is, humans must remain in the loop in order for both human and machine to work to their fullest potential. It’s in our own interest — and safety — to be aware of this as we continue to design and use autonomous systems moving forward.
