Human Or Automation — Who Is The Expert?


I distinctly remember the first time I used MapQuest. I was in high school, and after years of watching my dad plan family road trips using our trusted Rand McNally Road Atlas, MapQuest was a revelation. All I had to do was type in my destination and print out the turn-by-turn directions, down to the tenth of a mile. No more having to look for random landmarks or ask strangers for directions. My route was automatically prepared for me.

As revolutionary as it was, though, MapQuest still forced me to exercise my navigational skills and problem-solve along the way. If a road was blocked because of construction, my computer wasn’t in the car with me to recalculate the route. I had to figure out how to continue the rest of the way without getting lost. If traffic slowed things down, I was stuck with being late.

That’s a far cry from how we interact with today’s GPS apps and devices. With the Google Maps app, for instance, I have an automated decision aid — a form of automation that is capable of real-time adaptation — at my fingertips. Even though humans don’t trust automation to unilaterally make every important decision, we are very willing to incorporate these types of aids into our lives. This is partly because of how helpful they are, but also because automated decision aids only provide guidance or recommendations that we as users can choose to follow; this keeps us in control.

Even so, this less intelligent form of automation carries the same pitfall as any human interaction with automation: a growing dependence that can weaken our abilities and skills. As we embrace autonomous systems, we must ensure we don’t lose human expertise in the process, so that we can still exercise it in times of need.

With an automated decision aid serving as a teacher or guide, there’s no denying humans learn and solve problems more quickly. Humans can navigate without Google Maps giving us every turn, but we’re able to get where we’re going more efficiently and easily with it. Similarly, intelligent tutoring systems can calibrate to a student’s proficiency by providing customizable lessons, making them more challenging as the student demonstrates mastery or revisiting areas where the student is struggling.

But, with a constantly present and knowledgeable guide, users have a tendency to stop thinking for themselves. Google Maps makes adjustments so quickly that when I am in an unfamiliar environment, I (admittedly) just follow its instructions. I still need to drive my car and be mindful of my surroundings when using the app, but I don’t have to take on the responsibility of actually knowing how I’m getting where I’m going. If Google Maps disappeared tomorrow, could I navigate using a map, or my dad’s Rand McNally Road Atlas? I would figure it out, but I would definitely be rusty.

This tendency towards complacency isn’t limited to navigation. An operator working on a power line who blindly relies on an automated diagnostic tool may not learn how to identify faults themselves in the event that the diagnostic tool fails. Similarly, studies have found that many aircraft pilots have become so reliant on automation that there is actually a need to improve the pilots’ manual flying skills. In both of these cases, the user has become ill-equipped to handle a situation when something goes wrong. Making matters worse, automated systems are only as “smart” as what they’ve been programmed to know and do. If something happens that falls outside the automation’s programming, there’s no guarantee it can offer any helpful advice, potentially leaving the user stranded.

How, then, do we balance our intelligence with automation? By understanding the trap of overreliance that humans can inadvertently fall into, we can be smarter about how we engage with autonomous systems. A strength of humans has always been our ability to draw on myriad learned experiences — whether from a specific course of training or over the course of our lives — that we can then apply to new or critical situations. The rapid adoption of automation in society, then, presents an opportunity for innovation in education to maintain — and advance — human expertise for performance in highly uncertain situations or environments.

Moving forward, professional training for various skillsets should shift to specifically focus on preparing humans to perform when automation falls short, rather than to excel at repeatable or routine tasks. Revamping and customizing our training practices to reflect the current workplace will better ensure humans are gaining the necessary expertise to succeed. If done effectively, we will be positioned to leverage our unique abilities as humans along with those of automation. As engineers create the next generation of autonomous systems, we would be wise to remember that automation benefits us most when it supports and enhances our capabilities without weakening them.
