Autopilot

Thanks for joining me for another edition of the SerenityThroughSweat blog. I’m working my way through Influence: The Psychology of Persuasion by Robert Cialdini, and he made an interesting connection that I wanted to explore here.

The book covers seven levers of influence that work on a subconscious psychological level to run and manage our behavior.  Each lever of influence is discussed in depth with fascinating experimental examples to illustrate not only how powerful these effects are but also how much we underestimate their importance.

I highly recommend it as a read, both for an overall enhanced perspective and for an increased ability to recognize and combat what he calls “click, run” behavior: behavior that can be triggered by a lever of influence (click) and then, like a computer program, run almost without our knowing.

In the chapter on social proof, the lever of influence that says we are more inclined to do what everyone else is doing, Cialdini likens the neural response to an autopilot.

“the evidence it offers is valuable, with it we can sail confidently through countless decisions without having to investigate the pros and cons of each.  In this sense, the principle equips us with a wonderful kind of autopilot device not unlike that aboard most aircraft. Yet there are occasional, but real, problems with autopilots.  Those problems appear whenever the flight information locked into the control mechanism is wrong.”

I know a thing or two about autopilot usage. Training on modern aircraft is essentially broken down into two parts: the aircraft systems, and the FMS, or flight management system, which comprises the flight computers and autopilots.

These flight management systems have become so complex, and so integral to aircraft operation, that learning how to manipulate the system is just as important as being able to manipulate the aircraft itself.

In a statistically invalid survey of my aviator friends, as well as my own observations, most commercial flights are controlled by the autopilot for upwards of 95% of the flight. This makes sense: the autopilot doesn’t fatigue, is more fuel efficient, and is reliable and consistent. It is also dumb.

By this I mean the autopilot is very good at doing what it is told, even if what it is told will not produce a desirable outcome. It cannot think; it can only execute. This is one of the most common issues in autopilot-related incidents: not that the autopilot malfunctions per se, but that in some way it is not doing what the pilot wants it to.

This generally happens for a few reasons: the pilot puts an incorrect input into the system, the pilot wants to change an input in the system and fails to do so, or one input conflicts with another and the computer “chooses” which one to follow based on its programming.

Some more concrete examples of this improper pilot-autopilot interface: a pilot setting the altitude to 10,000 feet when they really meant to set 12,000; a pilot trying to depress the button that initiates a descent but failing to depress it fully, so the descent mode never engages; and a pilot inputting 10,000 feet and engaging the descent mode but failing to realize they also entered a constraint at 12,000 feet where the autopilot will stop the descent.
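That last example is easy to sketch in code. The snippet below is a toy model, nothing like a real FMS; the function name and inputs are made up purely for illustration. The point is only that the automation obeys exactly the inputs it was given, with no sense of what the pilot actually meant.

```python
# Toy illustration (hypothetical, not a real FMS interface):
# the autopilot levels off at a forgotten 12,000 ft constraint
# even though the pilot's intended target is 10,000 ft.

def level_off_altitude(current_ft, target_ft, constraints_ft):
    """Return the altitude the autopilot will actually stop the descent at.

    It simply obeys its inputs: it stops at the highest constraint that
    sits between the current altitude and the target. It cannot know
    what the pilot *meant* to program.
    """
    blocking = [c for c in constraints_ft if target_ft < c < current_ft]
    return max(blocking) if blocking else target_ft

# Pilot intends to descend from 35,000 ft down to 10,000 ft,
# but a 12,000 ft constraint is still programmed in the box.
print(level_off_altitude(current_ft=35_000, target_ft=10_000, constraints_ft=[12_000]))
# 12000 -- consistent, accurate, and not what the pilot wanted
```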

In each of these examples, the autopilot can fly the aircraft with a higher level of consistency, accuracy, and reliability than the pilot, and it will do so to the wrong altitude. As pilots, when we interface improperly with our autopilots, bad things tend to happen. The same thing can be said, and is by Cialdini, about the autopilot systems of our brain.

The dilemma with all of the levers of influence, as presented by Cialdini, is that they are mostly beneficial to our lives. The sheer volume of information that we process every day can be overwhelming, and these levers of influence offer real-life neural shortcuts, evolutionarily proven methods of making the better decision without the costly investment in analysis. “Because the autopilot afforded by the principle of social proof is more often an ally than an antagonist, we can’t be expected to want to simply disconnect it.”

A beautiful morning for a run in Green Bay

It is when we have an improper input into the system, fail to engage the system in the way we truly desire, or have conflicting system inputs, that our neural autopilot causes us problems. As pilots, we develop procedures and checklists to help avoid these errors, and when all else fails we disconnect the automation and fly the airplane.

That is the core message at the end of every chapter from Cialdini: all of these levers of influence are based on automatic systems in the brain, and we need to understand how to manage the inputs and, when all else fails, to disconnect the system and fly ourselves.

I know I am often a slave to my routine, and while that makes some of my decision making easier, it also leaves me susceptible to those same autopilot mistakes. Sometimes it is refreshing to click off the automation and re-experience the beauty of operating the machine, whether it is a Boeing or a human athlete.

Thanks for joining me, stay safe and stay sweaty my friends.