Crew Resource Management (Plan B)
Having started my professional pilot career in the days before Cockpit (and then Crew) Resource Management, I immediately realized that CRM was a good thing. But over the years I think we’ve watered the concept down, had a few detours, and now seem to talk a good game without really embracing the original concepts. After several generations of CRM, we still witness accident after accident where crew leadership and followership could have prevented the loss of life or aircraft, but failed. The science of CRM has become art: we know good CRM when we see it, but are hard pressed to quantify it in a way that allows us to teach it to the next generation of aviators.
From Sky Gods: The Fall of Pan Am . . .
"How do you think you'll like having a seniority number and taking orders from more senior crew members?"
The interviewers lean forward to hear the answer. This is the nitty gritty. The interviewee clears his throat.
"I'm very conscious of the chain of command, sir. I have no problem taking orders. I know that at Pan Am the captain is absolutely in command."
I think you can trace the Evolution of Crew Resource Management (CRM) by looking at just about any pilot from my era. I started flying crew aircraft in 1980 when the captain's word was final and the crew dared not challenge that word. Then Cockpit Resource Management came about to teach captains to listen and the crew to speak up. Then the crew behind the cockpit door was added and we got Crew Resource Management. Then a bunch of academics got involved and pretty much diluted everything to its current state of meaninglessness.
Meaninglessness? Perhaps that's too strong a word. The problem, as I see it, is that we first tried to teach CRM in the classroom and failed; it isn't an academic exercise. So we abandoned that idea and said everything in the simulator was an exercise in CRM. But that doesn't work either. Our simulator events are really exercises in Kabuki Theater. Everyone has a role to play and you rarely see the actors behaving as they do in the real world. Sure, we can detect good CRM in the box now and then, given the right circumstances. But when we detect bad CRM, there are usually other challenges to tackle first. It's time to think about another way.
It may be useful to examine the Traits of Successful Crews to see effective CRM in action. I think if you do that, you will see they employ a variation of the Latest Generation of CRM without knowing it. And maybe that is the key. Let's stop calling it CRM and focus on what it is these crews know that others don't.
The Evolution of Crew Resource Management (CRM) Through 1999
I started flying aircraft with more than just one pilot in 1980. The venerable KC-135A had two pilots, a navigator, and an air refueling boom operator. In a few years I traded the boom operator in for a flight engineer and radio operator. A few years after that I got rid of the navigator. My Air Force career spanned from 1979 to 1999, years that coincided with the evolution of Crew Resource Management through its first five generations.
Photo: Air Florida 90, from The Washington Post.
The first generation of CRM
If you had never experienced it, you would have a hard time believing it. But aviation culture back then gave the captain total and absolute power. The crew had none. For a good example, see the case of: Pan American World Airways 6005.
[Cortés, Cusick, pp. 128-129]
- In 1972, a Lockheed L-1011 operating as Eastern Airlines Flight 401 flew into the Everglades in Florida as all members of the flight deck crew were focused on a burned-out light bulb. During the troubleshooting, no pilot had been assigned the task of flying the aircraft and, although an altitude discrepancy was noticed by air traffic control, the flight gently descended without the crew noticing the critical controlled flight into terrain (CFIT) problem.
- [In 1978] A Douglas DC-8 operating as United Airlines Flight 173 crashed near Portland, Oregon, after running out of fuel, killing 10 occupants. The accident resulted from the captain focusing too heavily on preparing the cabin for an emergency landing due to a gear malfunction, while neglecting both the fuel state and the increasing concerns of the other flight crewmembers who were rightfully worried about running out of gas.
- NASA decided to host a series of conferences in 1979 out of which the CRM concept was officially born. At that time the term stood for Cockpit Resource Management and was narrowly focused on flight deck crewmembers, who in those days were often two pilots and a flight engineer.
For more about this accident, see: Eastern Air Lines 401.
For more about this accident, see: United Airlines 173.
This first generation of CRM focused on changing individual behavior, primarily that of the captain, so that input would be incorporated from other flight deck crewmembers when making decisions.
[Crew Resource Management (Kanki, Helmreich, Anca), §1.4.4] A 1979 NASA study placed 18 airline crews in a Boeing 747 simulator to experience multiple emergencies. The study showed a remarkable amount of variability in the effectiveness with which crews handled the situation. Some crews managed the problems very well, while others committed a large number of operationally serious errors. The primary conclusion drawn from the study was that most problems and errors were introduced by breakdowns in crew coordination rather than by deficits in technical knowledge or skills. The findings were clear: crews who communicated more overall tended to perform better and, in particular, those who exchanged more information about flight status committed fewer errors in the handling of engines and hydraulic and fuel systems and the reading and setting of instruments.
You might say NASA created CRM (Cockpit Resource Management) in 1979, but it was up to the airlines, the military, and everyone else to embrace it.
The second generation of CRM
[Cortés, Cusick, pp. 129-130]
- As airline accidents with CRM components continued to happen, such as the very dramatic crash of a Boeing 737 operated as Air Florida flight 90 during a winter storm in Washington, D.C. in 1982, the industry and government continued to cooperate to shape and evolve CRM.
- As a result of these efforts, around 1984 the second generation of CRM took shape. Instead of changing individual behaviors, CRM now went deeper in an attempt to change attitudes and focus more on decision making as a group.
- Special emphasis was placed on briefing strategies and the development of realistic simulator training profiles known as Line-Oriented Flight Training (LOFT).
- By 1985 only four air carriers in the United States had full CRM programs: United, Continental, Pan Am, and People Express. American and the U.S. Air Force Military Airlift Command soon introduced CRM programs, and the U.S. Navy and Marine Corps were on the verge of starting their own.
- In 1989, a very serious accident happened that provided irrefutable proof that CRM principles worked. A DC-10 operating as United Airlines Flight 232 experienced the uncontained failure of its #2 engine, resulting in a loss of normal flight controls. The captain ably coordinated flight deck and cabin resources to perform a controlled crash of the aircraft at Sioux City, Iowa. The resulting crash killed 111, but there were 185 survivors who likely would not have survived at all had it not been for the CRM prowess of the crew.
For more about this accident, see: Air Florida 90.
I went to the United Airlines Boeing 747 Captain's course in 1986 and "CRM" was a brand new concept for me. It was obvious that my UAL classmates had already been introduced to the concept, but were skeptical. The average age in this captain's class had plummeted from 58 to 55 because of the addition of recently acquired Pan Am aircraft, routes, and pilots. These old, crusty captains were reluctant CRM participants. But they did participate.
For more about this accident, see: United Airlines 232.
The United States military and most commercial airline businesses embraced Cockpit Resource Management early and the improvements were dramatic. As more and more of the world embraced CRM, the accident rates fell accordingly. One need only examine the holdouts to fully understand just how far we've come. Examine Korean Airlines for a thirty-seven-year look at a country's airlines ignoring the call for CRM, being forced into it, and then working their way around it. It isn't an Asian culture thing. The Japanese realized early on that crashing airplanes and killing passengers was a bad business model.
The third generation of CRM
[Cortés, Cusick, p. 131]
- In the early 1990s the third generation of CRM took hold, which deepened the notion that CRM extended beyond the flight deck door. That generation saw the start of joint training for flight deck and cabin crewmembers, such as for emergency evacuations.
- Around 1992 CRM also saw itself being exported to the medical community to address similar group dynamics events associated with medical error in both routine and emergency care at hospitals.
[Crew Resource Management (Kanki, Helmreich, Anca), §6.1] The term non-technical skills (NTS) is used by a range of technical professions (e.g. geoscientists) to describe what they sometimes refer to as "soft" skills. In aviation, the term was first used by the European Joint Aviation Authorities (JAA) to refer to CRM skills and was defined as "the cognitive and social skills, of flight crewmembers in the cockpit, not directly related to aircraft control, systems management, and standard operating procedures." They complement workers' technical skills and should reduce errors, increase the capture of errors and help to mitigate when an operational problem occurs.
[Crew Resource Management (Kanki, Helmreich, Anca), §6.2] As concern about the rates of adverse events to patients by medical error grew, medical professionals began to look at safety management techniques used in industry. One technique that has attracted their interest is the training and assessment of non-technical skills.
Much of the rest of the world, especially that which deals with life and death issues, adopted CRM (or NTS) and has had to do the hard sell to obtain "buy in" from its professionals. Doctors, for example, may have reacted just as early airline captains did thirty to forty years ago. Meanwhile, we in the aviation world have seen decades of progress and it is a rare crewmember these days who attempts to push back.
At this point I had been flying the Boeing 747 for four or five years with a very large cockpit and cabin crew. The benefits of CRM, Crew Resource Management, were obvious and I was "all in." It had become an ingrained part of aviation for me. But that wasn't going to last long . . .
The fourth generation of CRM
[Cortés, Cusick, p. 131]
- Around the mid-1990s the fourth generation of CRM was introduced, which promoted the FAA's voluntary Advanced Qualification Program (AQP) as a means for "custom tailoring" CRM to the specific needs of each airline and stressed the use of Line Oriented Evaluations (LOEs).
Most of the United States Air Force was on board with CRM at this point, though we were generations behind. (And that might be a good thing.) There was a move to add ground team members (maintenance, air traffic control, the weather forecasters, etc.) but I think saner heads realized this would only dilute our focus. By now I had been a crew commander in the Boeing 707, Boeing 747, and Gulfstream III. I had settled on a style of crew management that seemed to work and started to ignore the more recent CRM innovations as "eye wash." (Military jargon decoder: eye wash, something with the sole purpose of looking good, not necessarily being good.)
The fifth generation of CRM
[Cortés, Cusick, p. 131]
- Around 1999 the fifth generation of CRM took hold, which reframed the safety effort under the umbrella of "error management," modified the initiatives so as to be more readily accepted by non-Western cultures, and placed even more emphasis on automation and, specifically, automation monitoring. Three lines of defense were promoted against error: the avoidance of error in the first place, the trapping of errors that occur so that they are limited in the damage they create, and the mitigation of consequences when errors cannot be trapped.
"Error Management" as a concept was completely alien to me. I didn't want to manage errors, I wanted to avoid them. The terminology, combined with a growing list of textbooks designed to teach CRM, soured me on the entire idea. I knew how to run a crew, albeit an Air Force crew. I thought there wasn't much ground left to cover. I became a civilian pilot in the year 2000 and started to notice that most of my corporate aviation peers were not "all in" when it came to CRM, some were not even on board with the first generation version of it, and a spate of accidents served to show more work was left to be done.
Traits of Successful Crews
I spent a fair amount of time on dysfunctional cockpit crews. I also got a jumpseat view of a few more as an instructor and examiner pilot. But I've been fortunate in that for most of my career, most of the crews I've flown with seemed to have "gotten it." The "it" was how to work together as a team before, during, and after moments when things didn't go as planned. So here are the traits I've noticed on all successful crews.
- Effective briefings focus on likely threats
- Crewmembers are willing to point out their own errors and to accept critique
- When time permits, decision making is shared
- When time is critical, experts are recognized and solicited; but the decision remains with the captain
- Post flight critiques are treated as learning exercises
- Problem solving and "chair flying" exercises are highly valued
- More experienced and professional pilots emphasize prior planning
In my first crew aircraft, the KC-135A, we were consumed with our wartime mission and briefings tended to focus on those things. We also had a heavy dose of bureaucracy so an approach briefing, for example, got into the minutiae. None of it helped us for the task at hand: flying an unreliable airplane that tended to do things we weren't expecting.
In my next crew aircraft, the Boeing 707 (EC-135J), we briefed every Immediate Action item prior to takeoff. At first, this seemed like a waste of time. The aircraft was much more reliable than the KC-135A and things hardly ever went wrong. At first. But as our budget declined and spare parts evaporated, engines started failing. We had massive fuel leaks. We had cabin fires. Hydraulic systems simply quit. Those briefings started to pay dividends.
Over the years, I've come to appreciate pre-departure briefings that focus on what can go wrong, and I've extended that "what can hurt me?" focus to other briefings as well.
A threat focused briefing is nothing new; see: Briefing Better, by the Royal Aeronautical Society. I like their idea of "threat forward" briefings but am not sure about the need for briefing cards and specifying a structure of who talks and when. These are things that are probably best left to each operator. I would also caution against ditching anything that can kill you if forgotten.
When you start flying, you might do well to approach everything with a fair amount of humility. You are learning to defy gravity, after all. But for many of us, especially those of us with Air Force or Navy training, the personality type needed is of a more "aggressive" nature. That is certainly needed in some respects, but it can be damaging in others. Pilots who are unwilling to face their own mistakes are doomed to repeat them.
When you become a captain in a multi-person crew you may feel the pressure to once again be the never-failing crew commander who never shows any weaknesses. But you will make mistakes. Whether or not these mistakes are detected by the crew, denying, excusing, or ignoring the mistakes will not serve you well.
I found early on that captains who don't confront their own weaknesses tend to have a lot of them. Captains who openly point out their mistakes seem to be working harder to learn from them. But this openness has another effect: it encourages the crew to do the same and telegraphs to them that they can correct the captain when that becomes necessary.
Nothing will shut down a crew faster than a captain making a quick, wrong decision when time would have allowed crew input. And it gets worse if, after making this quick, wrong decision, the captain refuses to listen to a better idea.
On the other hand, nothing builds crew cohesiveness better than a captain who takes a breath and asks for the crew's input, even when the captain knows the right answer without that input.
I noticed this was a problem trait of many of the crew commanders I flew with way back in 1980, but something that has become a bit of a rarity since then. I guess all this talk about CRM has had an impact. But the problem hasn't gone away. It seems there are personality types predisposed to this kind of behavior.
We rarely find ourselves in situations where a decision has to be made instantly, but when this happens you can't very well go through a long and drawn out discussion about what to do. But the captain should also recognize that sometimes the right answer is sitting in the right seat.
Our Hawaii Boeing 707s had very rudimentary fire detection systems that simply lit up if the sensor cable around the "cold section" of the engine detected heat. They couldn't tell the difference between a broken sensor cable and a fire, and they tended to over-report. But we had four engines and if the sensor detected a fire, we had to shut the engine down. One day the sensor reported a fire and the captain decided to press on from California to Hawaii. It took the copilot an hour to convince the captain to turn around. The engine was on fire and they landed with it moments away from blowing itself right off the wing. I was the squadron's safety officer and when the squadron commander nominated the captain for an award, I insisted the copilot get the award instead.
But the crew rarely has the level of experience of the captain and sometimes the right answer is for the captain to go with his or her initial thought. I had an engine fail at V1 while taking off from Dallas Love Field (KDAL) during my last year with that squadron. The airplane had been through extensive maintenance and this was its first flight in nearly a year. Love Field's runway was short for us and when the engine quit I only had a second or two to glance at the instruments. After we were airborne I asked the copilot, "What do we have?" He said it was just a failed number three engine and recommended a holding pattern to sort things out. But he had failed to notice what I caught during my quick glance at the gauges: none of them lined up. With three of our four engines running, you would expect the EPRs, RPMs, EGTs, fuel flows, and oil pressures to be in the same ballpark. One of the advantages of a four-engine aircraft is that you have lots of things to crosscheck. I couldn't articulate in the moment why I felt an urgency to get the aircraft on the ground, but I did feel the need to land as soon as possible. So I pointed the airplane at a nearby runway, Carswell Air Force Base, and landed. The entire flight took six minutes. As it turned out, the fuel and oil lines on all four engines had only been hand tightened and we were moments away from losing them all. Sometimes the captain has to overrule the crew and act.
One of the many unusual aspects of military flying was that every flight was treated as a training event, even those that were operational. Even if your mission was a simple one — fly a load of passengers from Point A to Point B — you never failed to end the day with a recap of what you did and what you could have done better.
Another aspect we had in most of my Air Force squadrons is very similar to life as a corporate pilot. When you show up for a new job, you bring a unique set of experiences and will share a cockpit with other crewmembers who bring their own personalized skill sets. You learn to respect the fact that someone else has done it before, and that no matter their age, your crewmate may have the right answer.
But I think many of us civilians divide our training and operational lives into two very neat segments. We go to initial and recurrent training in the learn mode, but back at home we are simply in the "do mode." When the trip is over, we are reluctant to hang around and discuss the mechanics of the trip and what may have gone well or not so well. The problem with this bifurcated method is that the best training opportunity may very well be in our cockpits, not our simulators.
Flying an airplane that is more computer than cables and pulleys, most of the problems we see on the road have little to do with the syllabus pushed on us during formal training. And some of the threats out there that can kill us are completely unknown to the instructor behind the curtain in the "box." See G450 Vertical Mode Trap for an example.
Not too long ago, if you were flying an older aircraft, chances are there were very few unknowns and most of the "gotchas" had already been found. That isn't to say there were no surprises. By the time I got to the T-37, for example, it had been around for nearly 20 years. But during the three years I flew it, we discovered a fuel system problem and an electrical system problem, each of which required an immediate ejection. But for the most part, back in those days, most problems had already been discovered and you only needed to read the manual or find an "old head" to figure things out.
These days, even if you are flying a "classic," you will have to adapt to an endless slew of avionics upgrades and aging aircraft requirements. For example, even if you have twenty years flying the old reliable Gulfstream IV and spent most of that flying the North Atlantic Tracks under the Minimum Navigation Performance Specifications (MNPS) with your good ol' reliable HF, everything has changed on you several times over. MNPS has been replaced by the North Atlantic High Level Area and while your High Frequency radio is still required, you will need to learn Data Link to fly most of it these days.
It is my experience that the worst crews are those that consider their profession a job and that if it is something worth learning, it has to be taught by an official training vendor. The best crews, on the other hand, are those that consider every facet of aviation to be part of the learning exercise, and that every crewmember is an instructor. They never miss an opportunity to think things through before the flight. "Chair flying" is something we learned to do flying the T-37B; we sat at our desks and visualized the flight to come, trying to imagine the aircraft's flying characteristics before actually going out to fly. The technique was even more important for formation flying. Having to track four aircraft in a mock dog fight required us to think in three dimensions; chair flying helped immensely. These days, flying "single ship" we can still benefit from thinking each procedure through. Good crews will do this as crews, trying to anticipate the surprises to come.
[Lutat, pp. 51-52] We are confident in our assessment of how to plan as an expert does when operating complex aircraft, but in this case we are assisted by a solid body of research that demonstrates how important it is to adopt the skills of the best performers to improve our own. As the adoption of advanced aircraft was well established and accelerating in the late 1990s, veteran researchers Carolyn Prince and Eduardo Salas discovered two critical differences between a study group of pilots that included less-experienced, general aviation (GA) pilots with an average experience level of 720 hours, more-experienced airline pilots with an average level of experience of 6036 hours, and check airmen, considered to be among the best commercial pilots with an average experience level of 12,370 hours. First, as experience increased, so did preflight preparation: the best pilots spent more time and obtained better information through planning specific to the upcoming flight. They gathered as much information as possible during the time devoted to this phase of flight, including details specific to flight conditions, conditions of the aircraft, and made provisions for contingencies that the less-experienced pilots did not. Second, the check airmen were more likely than the other two groups to focus on what researchers classify as "level 3 SA" (a mental model of future states based on a projection of current conditions and its dynamics), by developing a pattern of proactive planning, organizing and understanding a larger amount of information, and understanding the relationships among various factors affecting the outcome of the flight or mission. By doing so, these pilots pulled away from their less-expert colleagues (GA pilots and average airline pilots), increasing the safety margin of their crew and the organizations they fly for. 
Clearly, the lever that level 3 SA provides should be reason enough to engage in planning that seeks better, more-detailed information that can be used to reliably shape favorable outcomes. In the United States in 2009, ten years after the findings of this research were published, GA pilots accounted for 474 fatalities compared to 50 fatalities among scheduled airlines, and 7.2 accidents per 100,000 flight hours, nearly 50 times the accident rate (0.149 per 100,000 flight hours) of the more experienced crews of scheduled airlines.
"Level 3 SA" was defined as the highest level of "Situation Awareness." Aviation "situational" awareness predates this academic study, but the researchers insist on "situation" awareness.
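The arithmetic behind the "nearly 50 times" figure is easy to verify from the two rates quoted above:

```python
# Sanity check on the accident-rate comparison quoted above (US, 2009).
ga_rate = 7.2         # GA accidents per 100,000 flight hours
airline_rate = 0.149  # scheduled-airline accidents per 100,000 flight hours

ratio = ga_rate / airline_rate
print(f"GA rate is {ratio:.1f} times the scheduled-airline rate")  # 48.3
```

which rounds up to the text's "nearly 50 times."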
Latest Generation of CRM: Threat and Error Management (TEM)
During one of my jobs as a check airman for a 14 CFR 135 management company, I tended to get the problem crews for evaluation. My instructions were to evaluate, train if possible, and recommend dismissal otherwise. So I got to see more of the worst than the best. I knew intuitively that the best crews plan for the worst case scenario, even when the worst case almost never happened. But I soon got to see crews who always hoped for the best and ignored any thoughts of what could go wrong. I came to realize these pilots didn't believe there were threats out there trying to kill them. They lacked threat awareness.
Since I had long ago started tuning out the alphabet soup that the academics were preaching in the latest generations of CRM, I missed out on early discussion about "Threat and Error Management." But, as it turns out, TEM was singing my song.
The sixth generation of CRM
[Cortés, Cusick, pp. 131-132]
- At the start of the 21st century, the sixth generation of CRM was formed, which introduced the Threat and Error Management (TEM) framework as a formalized approach for identifying sources of threats and preventing them from impacting safety at the earliest possible time.
- Threats can be any condition that makes a task more complicated, such as rain during ramp operations or fatigue during overnight maintenance. They can be external or internal. External threats are outside the aviation professional's control and could include weather, a late gate change, or not having the correct tool for a job. Internal threats are something that is within the worker's control, such as stress, time pressure, or loss of situational awareness.
- Errors come in the form of noncompliance, procedural, communication, proficiency, or operational decisions.
- To assess the Threat and Error Management aspects of a situation, aviation professionals should:
- Identify threats, errors, and error outcomes.
- Identify "Resolve and Resist" strategies and counter measures already in place.
- Recognize human factors aspects that affect behavior choices and decision making.
- Recommend solutions for changes that lead to a higher level of safety awareness.
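The four assessment steps above can be sketched as a simple checklist structure. This is purely an illustration; all class and field names here are my own invention, since TEM is a framework rather than a data format:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    description: str
    external: bool  # True: weather, gate change; False: stress, fatigue

@dataclass
class TEMAssessment:
    # Step 1: identify threats, errors, and error outcomes
    threats: list[Threat] = field(default_factory=list)
    # Step 2: "Resolve and Resist" strategies already in place (threat -> defense)
    countermeasures: dict[str, str] = field(default_factory=dict)
    # Step 3: human factors affecting behavior choices and decision making
    human_factors: list[str] = field(default_factory=list)
    # Step 4: recommended changes for a higher level of safety awareness
    recommendations: list[str] = field(default_factory=list)

    def unmitigated(self) -> list[Threat]:
        """Threats with no countermeasure in place; feed these into step 4."""
        return [t for t in self.threats if t.description not in self.countermeasures]

# Example: rain on the ramp has a countermeasure in place, fatigue does not.
tem = TEMAssessment(
    threats=[Threat("rain during ramp ops", external=True),
             Threat("crew fatigue", external=False)],
    countermeasures={"rain during ramp ops": "covered loading procedure"},
)
print([t.description for t in tem.unmitigated()])  # ['crew fatigue']
```

The point of the sketch is the gap analysis: a threat with no countermeasure against it is exactly what step 4 asks you to fix.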
Now the academic speak might be clouding the good ideas here, so let's look at TEM in these steps, but translated into pilot speak:
Identify threats, errors, and error outcomes
- CRM skills provide a primary line of defense against the threats to safety that abound in the aviation system and against human error and its consequences. Today’s CRM training is based on accurate data about the strengths and weaknesses of an organization. Building on detailed knowledge of current safety issues, organizations can take appropriate proactive or remedial actions, which include topics in CRM. There are five critical sources of data, each of which illuminates a different aspect of flight operations. They are:
- Formal evaluations of performance in training and on the line;
- Incident reports;
- Surveys of flightcrew perceptions of safety and human factors;
- Flight Operations Quality Assurance (FOQA) programs using flight data recorders to provide information on parameters of flight. (It should be noted that FOQA data provide a reliable indication of what happens but not why things happen.); and
- Line Operations Safety Audits (LOSA).
As operators, we can add one more source of data to this list: personal experience. In order to do that, however, we need to be honest with ourselves and our observations.
[Helmreich] Operationally, flightcrew error is defined as crew action or inaction that leads to deviation from crew or organizational intentions or expectations. Our definition classifies five types of error:
- Intentional noncompliance errors are conscious violations of SOPs or regulations;
- Procedural errors include slips, lapses, or mistakes in the execution of regulations or procedure. The intention is correct but the execution flawed;
- Communication errors occur when information is incorrectly transmitted or interpreted within the cockpit crew or between the cockpit crew and external sources such as ATC;
- Proficiency errors indicate a lack of knowledge or stick and rudder skill; and
- Operational decision errors are discretionary decisions not covered by regulation and procedure that unnecessarily increase risk.
You can identify threats well before a flight using the many reports generated after previous flights and training events. But some threats do not show up explicitly in these reports, and it will be up to you to spot the traits that lead to intentional noncompliance, procedural errors, a lack of proficiency, or poorly designed Standard Operating Procedures. You will also have to be on the lookout for real-time operational decision errors. In short, you need to pay attention if you hope to identify threats.
Identify procedures already in place
[Helmreich] Three responses to crew error are identified:
- Trap – the error is detected and managed before it becomes consequential;
- Exacerbate – the error is detected but the crew’s action or inaction leads to a negative outcome;
- Fail to respond – the crew fails to react to the error either because it is undetected or ignored.
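Helmreich's five error types (listed earlier) and these three responses can be sketched together. The `outcome` decision rule below is my own simplification for illustration, not Helmreich's formal definition:

```python
from enum import Enum

class ErrorType(Enum):
    INTENTIONAL_NONCOMPLIANCE = "conscious violation of SOPs or regulations"
    PROCEDURAL = "correct intention, flawed execution (slip, lapse, mistake)"
    COMMUNICATION = "information incorrectly transmitted or interpreted"
    PROFICIENCY = "lack of knowledge or stick-and-rudder skill"
    OPERATIONAL_DECISION = "discretionary decision that unnecessarily adds risk"

class CrewResponse(Enum):
    TRAP = "detected and managed before it becomes consequential"
    EXACERBATE = "detected, but action or inaction leads to a negative outcome"
    FAIL_TO_RESPOND = "undetected or ignored"

def outcome(detected: bool, acted: bool, outcome_ok: bool) -> CrewResponse:
    """Illustrative decision rule: classify a crew's response to an error."""
    if not detected or not acted:
        return CrewResponse.FAIL_TO_RESPOND
    return CrewResponse.TRAP if outcome_ok else CrewResponse.EXACERBATE

print(outcome(detected=True, acted=True, outcome_ok=True).name)    # TRAP
print(outcome(detected=False, acted=False, outcome_ok=False).name) # FAIL_TO_RESPOND
```

The taxonomy is the useful part; in a real flight department these labels would come from LOSA observations or debrief notes, not from code.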
You should already have in place technological, procedural, and technique-based answers to all known threats. TCAS and EGPWS, for example, are technological solutions. I often get questions about something an airplane is doing (or failing to do) that is answered in the AFM or AOM; don't ignore the obvious sources of solutions. Finally, there may be a great technique out there that someone else is using that you've never heard of. You should search the many online sources as well as user groups. If you have a great technique that can prevent an accident, you should document it and take steps to spread the word.
- Resilience is the ability to recognize, absorb and adapt to disruptions that fall outside a system’s design base, where the design base incorporates all the soft and hard bits that went into putting the system together (e.g. equipment, people, training, procedures). Resilience is about enhancing people’s adaptive capacity so that they can counter unanticipated threats.
- Safety is not about what a system has, but about what a system does: it emerges from activities directed at recognizing and resisting, or adapting to, harmful influences. It requires crews to both recognize the emerging shortfall in their system’s existing expertise, and to develop subsequent strategies to deal with the problem.
- As for crewmembers, they should not be afraid to make mistakes. Rather, they should be afraid of not learning from the ones that they do make. Self-criticism (as expressed in, e.g., debriefings) is strongly encouraged and expected of crew members in the learning role. Everybody can make mistakes, and they can generally be managed. Denial or defensive posturing instead squelches such learning, and in a sense allows the trainee to delegitimize the mistake by turning it into something shameful that should be repudiated, or into something irrelevant that should be ignored. Denying that a failure has occurred is inconsistent with the idea that such failures are an inevitable by-product of training.
- Resilient crews:
- . . . are able to take small losses in order to invest in larger margins (e.g. exiting the take-off queue to go for another dose of de-icing fluid).
- . . . realize that just because a past chain of events came out okay doesn't mean it will the next time.
- . . . keep a discussion about risk alive even when everything looks safe.
- . . . study accident reports of other aircraft, even if a dissimilar aircraft type or operation, realizing the lessons learned can be applicable.
- . . . realize information must be shared because each individual may only have a fraction of the total information needed.
- . . . are open to generating and accepting fresh perspectives on a problem.
Come up with a Plan B
- We cannot prepare our crews for every possible situation.
- Most of our training puts a focus on technical skills.
- There will always be something that we have not prepared crews for.
- At some point, crews will have to improvise "outside the margins."
Rather, when we train new crewmembers, we need to gain confidence that they will be able to meet the problems that may come their way—even if we do not yet know exactly what those problems will be.
These training programs aim to build up an inventory of techniques for the operation of an aircraft, or—nowadays—a set of competencies to that end.
This is because formal mechanisms of safety regulation and auditing (through e.g. design requirements, procedures, instructions, policies, training programs, line checks) will always somehow, somewhere fall short in foreseeing and meeting the shifting demands posed by a world of limited resources, uncertainty and multiple conflicting goals.
It is at these edges that the skills bred for meeting standard threats need to be transposed to counter threats not foreseen by anybody. The flight of United Airlines 232 is an extreme example. The DC-10 lost all hydraulic power when its tail engine ruptured in mid-flight, debris ripping through the hydraulic lines that ran through the nearby tailplane. The crew figured out how to use differential power on the two remaining engines (slung under the wings, below the aircraft's center of gravity) and steered the craft to an attempted landing at Sioux City, Iowa, which a large number of the passengers (and the crew) survived.
The Threat and Error Management (TEM) Model Reconstructed
The Threat and Error Management (TEM) Model, as constructed by researchers at the University of Texas at Austin, is quite useful, but suffers in its complexity and language. Let's take a look at it and see what we can come up with that is more useful for pilots.
Photo: Error management model, from Helmreich, figure 2.
- Intentional non-compliance errors should signal the need for action since no organization can function safely with widespread disregard for its rules and procedures. One implication of violations is a culture of complacency and disregard for rules, which calls for strong leadership and positive role models. Another possibility is that procedures themselves are poorly designed and inappropriate, which signals the need for review and revision. More likely, both conditions prevail and require multiple solutions. One carrier participating in LOSA has addressed both with considerable success.
- Procedural errors may reflect inadequately designed SOPs or the failure to employ basic CRM behaviors such as monitoring and cross checking as countermeasures against error. The data themselves can help make the case for the value of CRM. Similarly, many communications errors can be traced to inadequate practice of CRM, for example in failing to share mental models or to verify information exchanged.
- Proficiency errors can indicate the need for more extensive training before pilots are released to the line. LOSA thus provides another checkpoint for the training department in calibrating its programs by showing issues that may not have generalized from the training setting to the line.
- Operational decision errors also signal inadequate CRM as crews may have failed to exchange and evaluate perceptions of threat in the operating environment. They may also be a result of the failure to revisit and review decisions made.
Error Responses and Outcomes
- The response by the crew to recognized external threat or error might be an error, leading to a cycle of error detection and response. In addition, crews themselves may err in the absence of any external precipitating factor. Again CRM behaviors stand as the last line of defense. If the defenses are successful, error is managed and there is recovery to a safe flight. If the defenses are breached, they may result in additional error or an accident or incident.
Undesired State Responses
[Helmreich] Undesired states can be:
- exacerbated, or
- met with a failure to respond.
Undesired State Outcomes
[Helmreich] There are three possible resolutions of the undesired aircraft state:
- Recovery – an outcome that indicates the risk has been eliminated;
- Additional error – the actions initiate a new cycle of error and management; and
- Crew-based incident or accident.
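The cycle Helmreich describes—error, response, outcome, possibly looping through additional errors—can be sketched as a small state loop. This is my own illustration, not code from the TEM literature; all names and the demo scenarios are invented for the example.

```python
# A minimal, illustrative sketch of the Helmreich error-management cycle:
# each crew response to an error either traps it (recovery), exacerbates it
# (a new error to manage), or fails to respond (incident/accident).
# All names here are invented for illustration.

TRAP, EXACERBATE, FAIL_TO_RESPOND = "trap", "exacerbate", "fail to respond"
RECOVERY, ADDITIONAL_ERROR, INCIDENT = "recovery", "additional error", "incident"

def manage_error(error, respond, max_cycles=10):
    """Run the detect-and-respond cycle until a terminal outcome."""
    for cycle in range(1, max_cycles + 1):
        response = respond(error)
        if response == TRAP:
            return RECOVERY, cycle    # managed before it became consequential
        if response == FAIL_TO_RESPOND:
            return INCIDENT, cycle    # undetected or ignored
        # EXACERBATE: the crew's action becomes a new error --
        # the "additional error" outcome loops the cycle.
        error = f"consequence of ({error})"
    return INCIDENT, max_cycles

# A crew that traps the error recovers on the first cycle.
outcome, cycles = manage_error("altitude deviation", lambda e: TRAP)

# A crew that exacerbates once, then traps, recovers on the second cycle.
responses = iter([EXACERBATE, TRAP])
outcome2, cycles2 = manage_error("unstable approach", lambda e: next(responses))
```

The point of the sketch is the loop itself: an exacerbated error does not end the story, it starts a new cycle of detection and response, which is exactly where CRM behaviors act as the last line of defense.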
I am a big fan of the original Cockpit Resource Management concept, and maybe even some of the next iterations of Crew Resource Management. But then, after the academics put in their $0.02, I started to lose interest. The University of Texas work changed my view of the academicians, however, and I think their TEM model has promise. But I do have a few complaints:
- I'm not sure the differentiation between threats and errors is significant; I think the errors themselves are threats.
- The TEM process, as given, delays mitigation until after an "Undesired Aircraft State" (UAS), when in fact the mitigation can be a step to prevent the UAS in the first place.
- The model seems to ignore the learning process; you can improve future prospects by learning from the current threat or error.
I think we can take this model, refocus, and come up with something really useful . . .
Successful crews think about the aircraft, the environment, and everything to do with the act of aviation, and try to envision what the threats to safe operations are while coming up with countermeasures to address these threats. These countermeasures become "Plan B" in case the threat occurs. Sometimes these crews have a Plan B, sometimes they have to create one on the fly. However the Plan B comes about, it needs to be evaluated after use and adjusted for the future.
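The cycle just described—recognize a threat, reach for an existing Plan B or improvise one, then evaluate and adjust it after use—can be sketched as a small library of countermeasures. This is my own toy illustration of the process, not anything from the TEM literature; the class, method names, and sample threats are all invented.

```python
# A toy sketch of the "Plan B" cycle: recognize a threat, pull a
# countermeasure from the library (or improvise one on the fly), then fold
# the post-flight evaluation back into the library. All names illustrative.

class PlanBLibrary:
    def __init__(self):
        self.plans = {}  # threat -> countermeasure

    def respond(self, threat):
        """Return the briefed Plan B, or improvise one on the fly."""
        if threat in self.plans:
            return self.plans[threat]
        improvised = f"improvised countermeasure for {threat}"
        self.plans[threat] = improvised  # capture it for next time
        return improvised

    def debrief(self, threat, worked, better_plan=None):
        """After use, evaluate the plan and adjust it for the future."""
        if not worked and better_plan:
            self.plans[threat] = better_plan

library = PlanBLibrary()
library.plans["contaminated runway"] = "maximum-effort abort criteria briefed"

# Briefed threat: the existing Plan B is used.
plan = library.respond("contaminated runway")

# Unbriefed threat: a plan is improvised in flight, then refined at the debrief.
library.respond("altitude bust on departure")
library.debrief("altitude bust on departure", worked=False,
                better_plan="sterile cockpit within 1,000 ft of level-off")
```

The design choice worth noticing is the debrief step: the library only improves if every use of a Plan B, improvised or not, feeds an evaluation back into it.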
Plan B — the steps
- Recognize the threat — You need to pay attention here, but aircraft warning systems, air traffic control, and Internet applications can all help. You can detect a threat days in advance, e.g., an advancing weather system. You can detect a threat while filing the flight plan, e.g., an EDCT. It can be as straightforward as a warning bell and CAS message, or as subtle as the other pilot nodding off. But you have to pay attention to recognize the threat for what it is.
- Leverage technology — We sometimes think of the latest technological upgrade as just a luxury item we can do without. But how luxurious is a $100,000 modification that can potentially save a $10,000,000 aircraft? Synthetic vision can help you realize a circling approach cannot be continued. Infrared cameras can reveal a runway obstruction your eyes cannot see. Predictive windshear detection systems can warn you about windshear before you encounter it.
- Monitor the news — We tend to tune out things that aren't in our own spheres. Military? Airline? Corporate? Or perhaps what happens to a single-pilot turboprop doesn't matter to an ultra-long-range corporate jet. It all matters; go beyond the headlines to find out.
- Focus your briefs on the threats — I recommend you rethink your canned takeoff briefing if it never changes, no matter the weather or runway length. I would also start over on your canned approach briefing if it covers such banalities as the date of the chart or the MSA in a non-mountainous area. But I wouldn't go so far as the Royal Aeronautical Society "Briefing Better" recommendation. Keep anything that can kill you if forgotten or misunderstood. Here are some examples of effective threat-focused briefings:
- Teterboro on a fine day: "This will be a flex takeoff on a long, dry runway, but our balanced field is within 2,000 feet of the runway length so any rejected takeoff near V1 will have to be with maximum reverse and braking. The departure procedure in the FMS agrees with the chart, so we will use LNAV for lateral guidance. The first altitude restriction comes immediately after takeoff at 1,500 feet and is a frequent spot for altitude busts, so we will use TOGA and a normal acceleration, but let's keep an eye on that level off altitude after takeoff."
- Aspen on a hot day: "This will be an obstacle-limited takeoff, using runway analysis performance numbers. We have a high pressure altitude, which will increase the true airspeed behind all our performance numbers, making a rejected takeoff especially challenging. Our biggest threat comes from the mountains which, after an engine failure at V1, can only be cleared by following the procedure exactly and at V2 plus 10. The Aspen Seven procedure agrees exactly with the obstacle performance procedure, so if we lose an engine, we both need to focus on keeping the lateral needles centered and the speed on target."
- Bedford with weather near minimums: "Today's approach into Bedford will be flown with reported visibility just a quarter mile above minimums. The temperature/dew point spread is just one degree, so we might see the visibility go below minimums. We've already confirmed the FMS is set up for the ILS, including the missed approach. We need to confirm a good tune and identify before we begin the approach, and we need to ensure the FMS sequences so we get an automatic missed approach if needed. The DA is 382 feet, which gives us a 250-foot HAT, due to the hill on final approach. I will be using the EVS, but given the terrain I will not go lower than 250 feet unless I have the HIRLs in sight. The runway is reported wet, so I will brake accordingly. If we don't have the runway in sight, I will press TOGA and climb immediately, retract the flaps to 20 degrees, retract the landing gear, and follow the blue needles. The procedure requires we climb to 1,000 feet, turn left to a heading of 350 degrees, and climb further to 2,000 feet on our way to the ZIMOT holding pattern. We have enough fuel for two tries, but let's consider going to Boston Logan instead."
- Encourage open communication in the cockpit — If you are new to the crew, ensure everyone understands it is better to speak up when in doubt, and never shut down this kind of input unless it is inappropriate or distracting. If you aren't sure about a real-time critique, let it slide if you can and talk about it later. If the critique was in error, don't harp on it; simply remind everyone, "when in doubt, please speak up."
- If time permits, let the crew know what the problem is and what the SOP has to say about it. Encourage input and verify that everyone agrees that the SOP is the way to go.
- If there is no applicable SOP and time permits, either ask for input or announce what you believe should be the next steps. (Even if you know the right answer, asking for input is a good way to achieve "buy in" from the crew or to help younger crewmembers to develop decision making ability and confidence.)
- If time is critical, act according to SOP (if applicable) or as you believe is appropriate. Do not shut down crewmembers with better ideas if you can spare the time, but don't sacrifice the right (and safe) answer in an attempt to grow your options.
- Finish every flight with a complete debrief, even if it was a "textbook" flight.
- I've found the best way to begin a debrief is with the U.S. Navy Blue Angels briefing starter, "I'm happy to be here." That is their acknowledgement that despite anything that is to follow, it is a privilege to be flying professionally.
- Then, if you are the captain, begin with the high points of the flight in general but then cover the things you did that could have gone better.
- Unless you have a formal instructor role, leave it at that and allow others to critique themselves. If you have an instructor role to play, you can see if the student covers what needs to be covered and you can steer the conversation as needed.
Example from one of my recent flights: "It was a good flight to Teterboro and I am happy to be here. It was a busy arrival and approach control threw us a few curve balls and I could have been more precise on the descent. They gave us a short descent and the FMS angle looked good. But it wasn't, of course, because ATC decreased the distance we had to go. I should have calculated the distance manually as soon as we got the clearance, but I didn't. As it was, I realized my mistake and the speed brakes saved me in the end, but it wasn't my finest effort." At that point the other pilot asked about techniques on estimating the descent angle and we had a good discussion. We'll both do better next time.
There is a "gotcha" in the Gulfstream G450 avionics that can end catastrophically; I write about it here: G450 Vertical Mode Trap. Looking at how we came up with a solution shows the iterative Plan B process in action.
- Aircraft delivery — The G450's avionics represented an iterative change from previous Gulfstreams: there was a lot that was familiar, but the interface had big changes. Those of us from the GV and earlier were used to an autopilot that would track an altitude to level off in a climb or descent, no matter how many times you changed the target altitude.
- We did not anticipate the problem and our first event was a gradual climb through an assigned altitude without the altitude alert chime going off.
- Our existing plan to avoid altitude busts was to insist on a sterile cockpit when within 1,000 feet of an assigned level off altitude. But we allowed both pilots to divert their attention for "official" duties, such as checking an oceanic clearance.
- We learned that our sterile cockpit rules needed to trap all events and adjusted our procedures to require the pilot flying only focus on flying the aircraft within 1,000 feet of a level off.
- We also realized our understanding of the airplane was deficient and started canvassing other G450 operators. (Sadly, others had experienced similar issues but were just as clueless about the cause.)
- During this event, our Plan B did capture the threat in the act.
- Our Plan B was simply to catch the problem and fix it, which we did.
- We learned what the problem was and adjusted our Plan B to require the pilot flying to confirm an autopilot vertical mode is selected whenever acknowledging a new target altitude.
- We did not find anything in any existing literature about the problem and confirmed the problem exists with Gulfstream.
So What is CRM?
So I've spent several thousand words tearing down the notion of modern CRM as it has evolved; I've torn down theory and left only a process. Does my Plan B flowchart substitute for six generations of CRM evolution?
In a word: no. But there is purpose behind the madness . . .
What is CRM?
Crew Resource Management is something a captain does to promote an open flow of information: establishing positive relationships with all members of the crew, focusing briefings on possible threats, having the humility to admit errors and accept corrections, and building a track record of doing all that no matter the stress level. Crew Resource Management is something the crew assists in by participating as needed to support the captain, speaking up when appropriate, and understanding that time-critical situations require giving the captain more latitude in decision making.
How can we improve CRM during stressful situations?
Planning. Captain and crew should be mindful of the unpredictable nature of aviation and the need to develop as many Plans B as possible while understanding that real life may render the plan imperfect. The plan making process is useful in that it gives crews something to draw upon when the situation turns non-normal while schooling them on how to think outside normal when it comes time to develop Plan C, Plan D, and so on when the situation calls for it on the fly.
Briefing. Pre-departure and approach briefings should focus on the threat at hand and any Plans B that can be useful. Leaving Teterboro's Runway 24 on a snowy day can mean the primary threat is a contaminated runway, for example. (Plan B: the need for a maximum-effort abort or the need to nail V2 on the climb.) But on a nice summer day the threat might be the altitude restriction on the departure procedure. Threats are situational. Arriving at Bedford on Runway 11 in the fog with the tower closed makes it worth briefing the need to constantly monitor the AWOS visibility report. (Plan B: an early missed approach.)
De-briefing. Every flight provides an opportunity to learn; it would be a shame to waste it. The captain should begin every de-brief with a quick discussion of the positives, then give more detail about the things he or she could have done better. Each member of the crew should be given a chance to add to the discussion, also noting their lessons learned. Finally, the library of Plans B should be addressed. Did your Plans B work? Can you add to the knowledge base?
Cortés, Antonio; Cusick, Stephen; Rodrigues, Clarence, Commercial Aviation Safety, McGraw Hill Education, New York, NY, 2017.
Dekker, Sidney and Lundström, Johan, From Threat and Error Management (TEM) to Resilience, Journal of Human Factors and Aerospace Safety, May 2007
Gandt, Robert, Sky Gods: The Fall of Pan Am, Wm. Morrow Company, Inc., New York, 2012.
Helmreich, Robert L., Klinect, James R., Wilhelm, John A., Models of Threat, Error, and CRM in Flight Operations, University of Texas
Lutat, Christopher J. and Swah, S. Ryan, Automation Airmanship, McGraw Hill Education, London, 2013.