Automation Transformed How Pilots Fly Planes. Now the Same Must Happen With Cars

Illustration: Elena Scotti/GMG

The future of driving is supposed to feel like flying. The names some car companies give their newest technology—Autopilot, Pilot Assist, Super Cruise, Pro Pilot—are all aviation-inspired terms for their semi-autonomous systems. Just like with an airplane’s autopilot, the thinking goes, the driver pushes a button and the thing flies itself. Except even that interpretation is wildly wrong.

Airplane autopilot systems, and what pilots must do while those systems are engaged, are much more complicated than that. Until the automotive industry and regulators reconcile a cartoonish version of semi-autonomous features with the reality of how to use them safely, the future may not be nearly as safe as one might hope.

And time is running out. More automated safety features than ever will be standard equipment on lower-priced models this year, bringing what were once expensive luxury features to the masses.

In the next few years, it seems plausible, or even likely, that many humans and machines will be partners in driving. And like any relationship, these partnerships can get complicated.

“What we’re going to see in the future is a general decrease in crashes, we’ll see improvements in safety across the board,” predicted Michael Manser, a researcher at the Human Factors Program at Texas A&M University with almost two decades of experience studying how human driver behavior changes with new technology. “But you’re going to start to see a secondary layer of problems start to crop up. And I think a big part of these are going to relate to these breakdowns in the partnership between the system and the driver.”

The big question facing the automotive industry, one nobody has the answer to, is whether this secondary layer of problems will, over time, end up causing more crashes than the automation prevents.

Luckily, there is another mode of transportation that has gone through a very similar technological transformation: airplanes. The 40 years of research and real-world challenges in the airline industry are replete with lessons, including how to best prepare drivers for this transformation.

In talking with several researchers who have studied so-called “human factor” issues extensively, as well as pilots about their experience in the aviation industry, the word I heard most often about the auto industry’s approach is “concerning.”

Almost to a person, they’re afraid of the cavalier attitude car companies—some more than others—are taking towards semi-autonomous features without grappling with how these features change human behavior. They’re worried that drivers, inadequately trained on these new features, will glean their capabilities from brand names, resulting in a misunderstanding of what driver assist programs actually do. And as has had to happen in the world of aviation, the future of driving may involve re-training humans entirely on what tasks they’re actually doing.

“We really put that work into it, that training,” says NASA researcher Steve Casner about the aviation industry’s long road to automation. “But I don’t see that happening in cars. I see assumptions.”

As with so many things, the question is not whether history can be any guide, but whether we bother to listen. And, in this case, whether we change how we train and educate drivers as this technology moves forward.

Semi-Automation Is Here For A While

For several years, the common belief has been that semi-autonomous features were merely a bridge to driverless cars. For example, the National Highway Traffic Safety Administration has an entire page dedicated to the topic, which begins:

The continuing evolution of automotive technology aims to deliver even greater safety benefits and Automated Driving Systems (ADS) that — one day — can handle the whole task of driving when we don’t want to or can’t do it ourselves. Fully automated cars and trucks that drive us, instead of us driving them, will become a reality.

That same NHTSA page estimates that the new era of “fully automated safety features” will begin in 2025, a blink of an eye in regulatory terms. As a result, few dwelled on what this partnership between computer and human would look like.

Partly, this was because the features that rolled out first were so-called passive monitoring systems, like lane-departure warnings that beep if you stray from your lane on the highway, or backup cameras.

While both useful and problematic in their own ways, these features didn’t inspire visions of cars operating with minds of their own, careening into solid objects as the driver helplessly braces for impact.

Instead, they altered our behavior in more subtle ways. For example, David Kidd, a senior research scientist at the Highway Loss Data Institute, found in a series of studies that people using backup cameras don’t look over their shoulders or check their mirrors as often. They have much greater rear visibility due to the camera, but worse lateral visibility because they’re not checking multiple perspectives as often. They’re also much more susceptible to hitting objects in shadows, such as those cast by overhanging trees, because the dynamic range of backup cameras is quite poor.

But two developments, one gradual, the other sudden, have recently brought those concerns outside of the realm of academic study and into the mainstream, because the consequences could be far greater than backing up into an unsuspecting light pole.

The gradual development has been the creeping realization that these driver-assist features are going to be around for a long time. They’re not a mere temporary bridge to driverless cars, as many previously assumed.

Driverless cars feel about as far away from mass-market implementation as they did five years ago, and even industry leaders like Waymo’s CEO John Krafcik make it sound like autonomous vehicles will, at best, be confined to cities with year-round nice weather for the foreseeable future.

And then something very high-profile occurred recently that’s drawn direct parallels to autonomous vehicle technology: the Boeing MAX 8 scandal, in which one of the world’s leading airplane manufacturers released a model with a faulty automatic safety feature that many reports indicate was responsible for two major crashes in five months.

In the second crash, pilots reportedly spent several minutes wrestling with the computers, unable to disable the feature that allegedly plunged the plane into the earth. A total of 346 people died in those crashes.

Although some generalist science writers, such as The New Yorker’s Maria Konnikova, have been writing about these types of automation issues for years, the Boeing scandal seems to have served as a wake-up call of sorts for, unexpectedly, the car industry.

After all, if even the airline industry—which has spent the better part of 40 years figuring out how humans and computers can best work together to fly planes, and whose autopilot is the namesake of Tesla’s driver assist program—is still figuring things out, what does that mean for cars, where these features will be deployed to tens if not hundreds of millions more people in far less controlled environments?

Think of the computer like a co-driver, handing off certain tasks from one to the other. If each co-driver isn’t on the same page about what the other is doing—say, the computer’s ability to stay in a lane even when the lane markers are faded or when a mattress on the roof obscures cameras—then it could result in instances where, in effect, nobody is driving the car.

The consequences of these gaps could be dire. Kidd warned that when semi-autonomous features fail, they “don’t fail gracefully. Basically, they throw the driver immediately, and typically unexpectedly, into a state where they have to recover from a serious error.” For example, the NHTSA is investigating a potential issue with the Nissan Rogue’s automatic emergency braking system engaging when it shouldn’t (a Nissan North America spokesperson told Jalopnik the issue was corrected in a software update earlier this year).

The technology can work perfectly the vast majority of the time, lulling humans into a false sense of security as they gain confidence in the system, only to be woefully unprepared for the critical moment of failure.

In airplanes, pilots typically have some time, perhaps as much as a few minutes, to troubleshoot these failures. But any driver experiencing one of these failures would be incredibly fortunate to have three seconds. Most wouldn’t have any time to react at all until it’s far too late, and they won’t be doing so in empty air.

Look to the Airplanes

The solution, the researchers stress, is better driver training—any driver training, really—that recognizes the fundamental truth that the driving task is changing.

This is precisely the subject of a recent paper by Casner and University of California, San Diego professor Edwin Hutchins. Both Casner and Hutchins spent decades studying human factor problems in aviation before switching to cars, anticipating the industry shift. Coincidentally, their paper, titled “What Do We Tell the Drivers?”, was published two days before the second Boeing MAX 8 crash that resulted in the model being grounded.

Casner told me that the difference in how the two industries approach safety is massive (and, to use that word again, concerning). Thanks to studies conducted by NASA and the military—not to mention independent researchers—as far back as the 1970s, the aviation industry learned early on that humans react to automation in unpredictable ways and need extensive training to counterbalance those tendencies.

Computer programs that fly planes—or drive cars—are immensely complicated, running millions of lines of code in complex environments. This leads to computers doing things humans don’t expect, or not doing things humans do expect. Such disconnects, known as “automation surprises,” happen so frequently that they spawned an entire field of study.

One example is what’s called Primary-Secondary Task Inversion: when, for instance, a pilot stops watching the altimeter to determine whether he or she has hit the appropriate altitude and instead waits for the alarm that signals the altitude is approaching. In other words, the task of flying the plane is replaced with the task of minding the alarm.

This may not sound like a big difference, but it turns out to introduce all kinds of new problems. Here’s one example, from the Casner and Hutchins paper:

In 1988, a flight crew chatted about a non-flight-related topic just before takeoff. Little did they know, they had forgotten to set the airplane’s wing flaps before they departed. The airplane stalled and crashed moments after takeoff, killing 2 crew members and 12 passengers. The airplane had a warning system onboard designed to automatically detect and call out mis-set flaps. The problem: the system wasn’t working that day.

While errors seldom result in fatalities in aviation, they do happen frequently. In fact, a 1995 study conducted by NASA found that U.S. pilots self-reported altitude deviations due to automation surprises at the alarming rate of one per hour. The title of the study, taken from the words of a pilot who overshot his prescribed altitude, is “Oops, It Didn’t Arm.”

Researchers also found that pilots—and indeed, most humans—aren’t very good at staring at a computer for hours waiting for it to screw up. In another study, Casner put 18 pilots in a Boeing 747 simulator and regularly asked them what they were thinking to determine how focused they were on the task at hand. Pilots frequently reported their minds wandering when they were supposed to be monitoring the system.

Casner concluded that automation wasn’t freeing up the pilots’ mental energy to focus on flying better; it was freeing them up to think and talk about other stuff. In other words, they got distracted.

As Casner and Hutchins make clear, the airline industry didn’t wait around to solve automation with more automation. Among other things, researchers and pilots worked together—and, over time, many pilots like Casner became researchers themselves—to deepen their understanding of the problem and figure out solutions.

What helped most, Casner told me, was not adding thousands of pages to airplane manuals (which manufacturers did) or adding yet more automation in an attempt to automate away the automation surprises (which they also did).

Instead, it was training pilots to be aware of what their new job entailed, and how it was different from flying the planes they used to fly.

The training ensures pilots understand the right state of mind to be in and the technology itself, and that they gain a firm grasp of their role versus that of the autopilot. One such innovation was identifying pilots who were prone to trusting what the computer told them over their own observations.

To break this behavior, those pilots received special courses to demonstrate how even the most sophisticated autopilot programs can be wrong sometimes. These and other “human factor initiatives,” Casner and Hutchins argue in their paper, resulted in historically low crash rates beginning in the 1990s.

Back to the Future

Turning their attention to cars, Casner and Hutchins write that “From the perspective of the aviation industry, it’s the 1980s all over again.” As in, the years before airlines structured training programs specifically to address these so-called human factor problems and crash rates fell.

When a driver buys a car with any of these features, the most training they could possibly get is a brief introduction to the technology at the dealership—often as much a sales pitch as a safety procedure—sandwiched somewhere between how to adjust your seat and how to pair your phone with the infotainment system.

The public face of this has been, for good and for bad, Tesla. An early leader in active semi-autonomous driving systems beyond just passive ones, the company named its driver assist feature “Autopilot,” conjuring images of pilots flipping a switch and letting the plane fly itself, even though Tesla’s feature encourages drivers to keep their hands on the wheel at all times.

The system that checks for hands on the wheel, though, is easily circumvented by well-placed water bottles, oranges, or, as one Tesla salesperson told me is his preferred technique, the driver’s knee, all of which fool the car into thinking the driver’s hands are on the wheel. There are entire threads on Tesla owner forums dedicated to how best to trick Autopilot. Users posted photos of their “setups”; one involved a nylon wrap around the wheel, a fishing hook through the wrap, and a weight tied to the end of the hook.

When asked about these obvious violations of the company’s policy for Autopilot use, a Tesla spokesperson referred Jalopnik to the company’s website, which explains that Autopilot measures the amount of torque applied to the wheel to detect if the driver’s hands are on the wheel as required:

“The system’s hands-on reminders and alerts are delivered based on each unique driving scenario, depending on numerous factors including speed, acceleration, road conditions, presence of other vehicles, obstacles detected, lane geometry, and other sensor inputs,” the website explains. “If the driver repeatedly ignores those warnings, they will be locked out from using Autopilot during that trip. Additionally, if a driver tries to engage Autopilot when it is not available, they will be prevented from doing so.”

The spokesperson further stated that the Tesla sales team is trained on how to demonstrate Autopilot, including keeping their hands on the wheel at all times.
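To see why a torque-only check is so easy to fool, consider the toy sketch below. It is emphatically not Tesla’s code; it’s a minimal illustration that assumes a hypothetical system inferring “hands on wheel” from steering torque, with made-up threshold, timing, and lockout numbers.

```python
# Illustrative sketch only -- not Tesla's implementation. It assumes a
# hypothetical system that treats steering torque above a small threshold
# as "hands on wheel" and escalates warnings when it sees none.

HANDS_ON_TORQUE_NM = 0.3    # assumed minimum torque that counts as a hand on the wheel
WARNING_AFTER_S = 30        # assumed seconds of "hands off" before an alert
LOCKOUT_AFTER_WARNINGS = 3  # assumed number of ignored alerts before lockout

def monitor(torque_samples_nm):
    """Walk through one torque sample per second and report warnings or lockout."""
    hands_off_s, warnings = 0, 0
    for torque in torque_samples_nm:
        if abs(torque) >= HANDS_ON_TORQUE_NM:
            hands_off_s = 0             # torque above the threshold resets the timer
        else:
            hands_off_s += 1
            if hands_off_s >= WARNING_AFTER_S:
                warnings += 1
                hands_off_s = 0
                if warnings >= LOCKOUT_AFTER_WARNINGS:
                    return "Autopilot locked out for this trip"
    return f"{warnings} warning(s) issued"

# A lightly resting hand and a weight hanging on the wheel rim both apply a
# small, constant torque -- the sensor alone can't tell them apart.
print(monitor([0.5] * 120))  # constant 0.5 Nm: no warnings, "hands" detected
print(monitor([0.0] * 120))  # truly hands-free: warnings pile up, then lockout
```

In a scheme like this, the steady pull of a hanging weight or a wedged water bottle looks identical to a resting hand, which is one reason a camera that watches the driver’s eyes, like the one Cadillac uses for Super Cruise, is considered harder to cheat.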

Regardless, a number of Autopilot crashes have happened, and some of them illustrate the precise “human factor” issues these researchers warn about.

In the fatal crash of Joshua Brown, which was the subject of a National Transportation Safety Board investigation, the NTSB found the driver over-relied on the automation and demonstrated a “lack of understanding of the system’s limitations,” while also concluding Tesla didn’t go far enough to ensure drivers remained alert. The Tesla spokesperson, speaking generally about Autopilot, defended the company’s approach by pointing out any car could be misused.

In other words, the Joshua Brown case is one perfectly in line with decades of human factor research. It is both a technical problem and a behavioral one.

Not every company is as cavalier about its driver assist programs. Cadillac’s Super Cruise system is generally regarded as one of the most responsible semi-automation systems in the industry because it uses a camera mounted on the steering column to ensure the driver’s eyes are on the road, though its rollout has been slow. To date it’s available on only one car, the Cadillac CT6 sedan, and not even on the rest of the Cadillac lineup. That may speak to GM’s more conservative approach.

Lisa Talarico, who works as a lead on driver monitoring system performance for Super Cruise, said dealerships are “well trained” on the feature, but drivers need only watch a short video about the system that she described as a “very general overview.”

But she added that in small test studies they’ve found “only a small percentage increase” in driver off-road glances when Super Cruise is activated versus manual driving. (Several other manufacturers, including Nissan and Volvo, declined to be interviewed for this story.)

Talarico also said Super Cruise’s technical specifications are detailed in the owner’s manual. Indeed, every car’s manual will likely have ample descriptions of any automated systems the car includes.

But Manser, the Texas A&M University human factors researcher, warns that even if someone actually reads the damn thing, an owner’s manual typically focuses on what you shouldn’t do, absolving the manufacturer of legal liability. Proper human factor training is about a whole lot more than that.

What this training ought to include, according to the researchers I spoke to, is a deep understanding of how the technology works, so humans can anticipate problems before they occur. You didn’t really need to know how a car worked in order to drive one in the manual age, but now you probably should understand what your car’s cameras can (and can’t) detect, and how that information is interpreted, in order to spot dangers on the road.

“I think that these new cars, oddly, so paradoxically, so ironically, in doing more for us, they don’t require us to know less,” Casner noted. “They require us to know more.”

In much the same way pilots had to gain a deeper understanding of automated systems—indeed, that very lack of understanding may have contributed to the Boeing crashes as the pilots couldn’t figure out how to disengage the automated system that was plunging them towards the ground—these experts say drivers now have to as well.

That’s a tall task for a country that has largely given a driver’s license to damn near anyone who wants one. In the 1970s, 95 percent of eligible students received driver’s education through public schools, but thanks in part to a 1983 NHTSA study that concluded Georgia teenagers who took driver’s ed were not better drivers, public schools—often faced with budget cuts—largely did away with such programs. (A recent and more comprehensive eight-year study in Nebraska found that, in fact, driver’s ed classes do make better teenage drivers.) Additionally, many schools cut behind-the-wheel training over liability concerns.

In conjunction with the widespread use of largely unregulated online driving courses, it’s far too easy for teenagers to plunk down a few hundred dollars of their (or their parents’) cash and get a license without learning much about operating a car safely. And of course, the driving privileges you earn at 17 generally last the rest of your life.

It’s hard to imagine many customers sitting through such an extensive tutorial or presentation before buying a car, given that such a program may be more extensive than what they had to do to get their license in the first place. If you found out you had to, say, take even just a one-hour class at the dealership in order to buy a car, would you do it? Or would you buy a different car?

It’s All About the Benjamins (And Also Liability)

What ultimately may prevent the auto industry from taking the same training steps as the airline industry is the issue of liability. Manufacturers and possibly airlines are liable if a plane crashes due to computer error—Boeing has already been sued for the Ethiopian Airlines crash—because customers are paying, in part, for a safe journey. The automation hand-off on planes is between computers and pilots, who are employees of the airlines. In either case, the customer is not taking control of anything.

With semi-autonomous cars, the legal picture is much murkier, as has been discussed in countless legal reviews and academic journals. For one, drivers are simultaneously the “co-driver” and the customer. On top of that, the current legal framework basically makes the driver responsible for what the car does, except in the case of an extreme manufacturing defect, tampering, or other edge scenarios.

“Ultimately, in our society, the driver will always be responsible for what happens with that car,” Manser says. “If you’re under, say, [Level] 3 or [Level] 4 automation and you’re looking at your phone real briefly and you get in a crash, it’s going to come back to you as the driver. You’re responsible for whatever that car does.”

Consider, for example, when Uber’s self-driving test car struck and killed a pedestrian while in fully autonomous mode. Prosecutors declined to hold Uber accountable even though it was the company’s car, equipment, and algorithm that functionally killed this person.

Prosecutors have not yet determined whether to press charges against the safety driver, who, by allegedly streaming a TV show on her phone at the time of the crash, was exhibiting the very boredom and thought creep that reams of literature on the airline industry warn about (Uber quickly settled a civil suit with the victim’s family).

At the very least, the legal framework for both criminal and civil cases is nowhere near close to addressing questions such as: was the failed hand-off between computer and human the algorithm’s fault or the human’s fault?

As a result, the automotive industry has no obvious incentive right now to train drivers properly. Instead, it has every incentive to let misconceptions that exaggerate a car’s capabilities linger, because they may boost sales. To this end, one Tesla salesman told me that for some customers he takes test cars on a road near the store with a sharp turn to demonstrate how well Autopilot works, even though the road is not a highway, the only place Autopilot is supposed to be used.

Little wonder, between stunts like this, tweets from Elon Musk that mischaracterize Autopilot’s capabilities, or Musk himself misusing the feature on 60 Minutes, that Tesla owners feel empowered to use the feature where it’s not intended.

Even Cadillac, the most cautious of the manufacturers, debuted Super Cruise with an ad spot featuring the tagline “it’s only when you let go that you begin to dare.”

Manser told me he feels sorry for drivers because they’re being thrown into what he calls an “untenable position.” As he described it to me, it sounded almost like a trap.

Drivers are being sold features that delegate more and more of the responsibility for actually driving the car to computer programs created by the manufacturer. And the manufacturer—plus the dealers—gives drivers only the briefest tutorial on the system, hands them the keys, and tells them to have fun.

What could possibly go wrong?

Is It Actually Safer?

The counterargument to all this, one I heard several times in these interviews, is that surely these driver assistance features will be safer than humans alone, who are by and large terrible at driving. In 2017, the latest year for which data is available, 37,133 people died in motor vehicle crashes, according to the IIHS. How could computers be any worse?

The researchers I spoke to who are concerned about the lack of driver training happily acknowledge semi-autonomous driving features have the potential to make roads safer if done right. But as of now, these features are for use on highways. And highway driving is one thing humans actually do pretty well.

According to Kidd, the Highway Loss Data Institute researcher, about a third of all miles traveled in the U.S. are on highways, but only nine percent of crashes happen there, meaning highway travel is significantly under-represented in terms of where we’re crashing our cars.

“People are pretty good at traveling on the highways,” Kidd says, “so to what extent will it actually lead to a marked improvement in safety? That question is still out there.”
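To put rough numbers on that, here’s a quick back-of-the-envelope calculation. It simply takes Kidd’s one-third and nine-percent figures at face value; it’s a sketch, not a rigorous exposure analysis.

```python
# Rough math from Kidd's figures: ~1/3 of U.S. miles are on highways,
# but only ~9% of crashes happen there. (Illustrative only -- real
# exposure data are messier than two round numbers.)
highway_miles, highway_crashes = 1 / 3, 0.09
other_miles, other_crashes = 1 - highway_miles, 1 - 0.09

highway_rate = highway_crashes / highway_miles  # crashes per unit of travel
other_rate = other_crashes / other_miles

print(f"Highway crash rate (relative to average): {highway_rate:.2f}")      # ~0.27
print(f"Non-highway crash rate (relative to average): {other_rate:.2f}")    # ~1.36
print(f"Non-highway miles are ~{other_rate / highway_rate:.1f}x riskier")   # ~5.1x
```

By that math, a non-highway mile is roughly five times as likely to involve a crash as a highway mile, which is the baseline any highway-only driver assist system has to improve on.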

Part of the reason the question is still out there is that we don’t have good data on, well, anything related to semi-autonomous features. Kidd says there are three reasons for this. First, the technology is still too new. Second, there is no good public information on which vehicles on the road actually have semi-autonomous capabilities. And third, independent researchers don’t know when drivers are actually using those features.

(Tesla does issue quarterly safety reports, but they only include accidents or near-misses per mile with Autopilot engaged versus without, a somewhat misleading comparison because Autopilot is, per company policy, exclusively used on highways, which, as noted above, have a much lower crash-per-mile rate for human drivers, too. In a recent earnings call, Musk said the company won’t be releasing more data because he believes people would “sort of like data mine the situation and try to turn a positive into negative.”)

Kidd says the best solution here would be some kind of data sharing agreement between manufacturers and the government, akin to what the Federal Aviation Administration has in place to detect issues early and present industry-wide solutions. But the government would probably have to regulate that into existence, which could take years—the law always lags behind technological advancement, as has become painfully obvious in recent years.

Recently, GM, Ford, and Toyota announced they would launch a consortium for autonomous vehicle standards, but the details are still murky, and it isn’t clear if such a group would delve into semi-autonomous features. Nor is it clear how transparent the group will be with its data.

Even so, Kidd is skeptical an industry-wide data sharing program will happen any time soon because manufacturers are racing to the trillion-dollar goal of fully self-driving cars and don’t want to risk sharing their secret sauce with competitors. Indeed, car companies have ample incentive to hide as much information about their proprietary semi-autonomous systems as possible.

The flip side is that, because of the way liability for car crashes works, car companies also have very little incentive to enter the kinds of safety-focused data sharing agreements Kidd advocates for. After all, car companies are only liable for deaths or injuries on the road in cases of manufacturing defects.

But people crashing their cars because they don’t understand the technological or engineering limitations of their machines? Hell, that’s driving.

Please, Sir, May I Have Some Training

My last question for the researchers was what any individual consumer should do if they buy a car with semi-autonomous features. How can they train themselves? How can they be safe?

I expected their answers to make me feel slightly better about the whole situation. Instead, they only made me feel worse.

Casner’s message, short of re-tooling the entire driver’s education system, including bringing it back to high schools, was, essentially: be really careful.

“My biggest message is that there is so much more going on here than meets the eye … more than what your common sense is telling you,” he said. “And more than what the car companies are telling you. There is more to this than pushing a few buttons, sitting back and relaxing, and enjoying the ride. There are things every driver ought to know about the car, about themselves, and about this new weird driving task they’re about to undertake.”

Manser, using the careful language of an academic who has spent years researching a subject, advises people to “understand every operational aspect.” When the system turns itself on and off. How to turn it on or off yourself if the system doesn’t. What it’s going to do in emergencies. What it’s going to do in “normal” situations.

“They need to form an accurate mental model of how that system operates,” Manser continued. “Because if your model and the system’s model are different, there’s going to be conflict.”

Where can people find that information if not in the owner’s manual or the dealerships? I told Manser it sounded like he was saying people need training that doesn’t exist yet.

“Yeah,” he conceded. “That’s probably right.”