I am not a fan of most academic products designed to improve aviation safety, but there are exceptions. Threat and Error Management can be really useful, just not in the way it appears in most textbooks. This is my attempt to fix what is broken and make it genuinely valuable for Crew Resource Management.
Everything here is from the references shown below, with a few comments in an alternate color.
During one of my jobs as a check airman for a 14 CFR 135 management company, I tended to get the problem crews for evaluation. My instructions were to evaluate, train if possible, and recommend dismissal otherwise. So I got to see more of the worst than the best. I knew intuitively that the best crews plan for the worst-case scenario, even when the worst case almost never happens. But I soon got to see crews who always hoped for the best and ignored any thought of what could go wrong. I came to realize these pilots didn't believe there were threats out there trying to kill them. They lacked threat awareness.
Since I had long ago started tuning out the alphabet soup that the academics were preaching in the latest generations of CRM, I missed out on early discussion about "Threat and Error Management." But, as it turns out, TEM was singing my song.
[Cortés, Cusick, pp. 131-132]
Now the academic speak might be clouding the good ideas here, so let's look at TEM in these steps, but translated into pilot speak:
As operators, we can add an item to this list with one more source of data: personal experience. In order to do that, however, we need to be honest with ourselves and our observations.
[Helmreich] Operationally, flightcrew error is defined as crew action or inaction that leads to deviation from crew or organizational intentions or expectations. Our definition classifies five types of error:
You can identify threats well before a flight using the many reports generated after previous flights and training events. But some threats do not show up explicitly in these reports, and it will be up to you to spot the traits that lead to intentional procedural non-compliance, a lack of proficiency, or poorly designed Standard Operating Procedures. You will also have to be on the lookout for real-time errors made operationally. In short, you need to pay attention if you hope to identify threats.
[Helmreich] Three responses to crew error are identified:
You should already have in place technological, procedural, and technique-based answers to all known threats. TCAS and EGPWS, for example, are technological solutions. I often get questions about something an airplane is doing (or failing to do) that is answered in the AFM or AOM; don't ignore the obvious sources of solutions. Finally, there may be a great technique out there that someone else is using that you've never heard of. You should search the many online sources as well as user groups. If you have a great technique that can prevent an accident, you should document it and take steps to spread the word.
Rather, when we train new crewmembers, we need to get confidence that they will be able to meet the problems that may come their way—even if we do not yet know exactly what those will be.
These aim to build up an inventory of techniques for the operation of an aircraft, or—nowadays—a set of competencies to that end.
This is because formal mechanisms of safety regulation and auditing (through e.g. design requirements, procedures, instructions, policies, training programs, line checks) will always somehow, somewhere fall short in foreseeing and meeting the shifting demands posed by a world of limited resources, uncertainty and multiple conflicting goals.
It is at these edges that the skills bred for meeting standard threats need transpositioning to counter threats not foreseen by anybody. The flight of United Airlines 232 is an extreme example. The DC-10 lost total hydraulic power as a result of a tail engine rupture, with debris ripping through all hydraulic lines that ran through the nearby tailplane in mid-flight. The crew figured out how to use differential power on the two remaining engines (slung under the wings, below the aircraft’s center of gravity) and steered the craft toward an attempted landing at Sioux City, Iowa, which a large number of passengers (and the crew) subsequently survived.
The Threat and Error Management (TEM) Model, as constructed by researchers at the University of Texas at Austin, is quite useful, but suffers in its complexity and language. Let's take a look at it and see what we can come up with that is more useful for pilots.
Photo: Error management model, from Helmreich, figure 2.
[Helmreich] Undesired states can be:
[Helmreich] There are three possible resolutions of the undesired aircraft state:
I am a big fan of the original Cockpit Resource Management concept, and maybe even some of the next iterations of Crew Resource Management. But after the academics put in their $0.02, I started to lose interest. The University of Texas work changed my view of the academicians, though, and I think their TEM model has promise. But I do have a few complaints:
I think we can take this model, refocus, and come up with something really useful . . .
Successful crews think about the aircraft, the environment, and everything to do with the act of aviation, and try to envision what the threats to safe operations are while coming up with countermeasures to address these threats. These countermeasures become "Plan B" in case the threat occurs. Sometimes these crews have a Plan B, sometimes they have to create one on the fly. However the Plan B comes about, it needs to be evaluated after use and adjusted for the future.
Example from one of my recent flights: "It was a good flight to Teterboro and I am happy to be here. It was a busy arrival and approach control threw us a few curve balls and I could have been more precise on the descent. They gave us a short descent and the FMS angle looked good. But it wasn't, of course, because ATC decreased the distance we had to go. I should have calculated the distance manually as soon as we got the clearance, but I didn't. As it was, I realized my mistake and the speed brakes saved me in the end, but it wasn't my finest effort." At that point the other pilot asked about techniques on estimating the descent angle and we had a good discussion. We'll both do better next time.
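The descent math behind that story can be sketched with the common "3-to-1" rule of thumb: a roughly 3-degree path loses about 300 feet per nautical mile, so the distance you need is the altitude to lose divided by 300. This is an illustrative sketch of that rule only, not a procedure from any AFM or AOM; the function names and the sample numbers are my own.

```python
# A minimal sketch of the 3-to-1 descent rule of thumb.
# Assumes a ~3-degree path, i.e. roughly 300 ft of altitude per nautical mile.

def descent_distance_nm(altitude_to_lose_ft, gradient_ft_per_nm=300):
    """Distance (nm) needed to lose the given altitude at ~3 degrees."""
    return altitude_to_lose_ft / gradient_ft_per_nm

def required_gradient_ft_per_nm(altitude_to_lose_ft, distance_nm):
    """Gradient (ft/nm) needed to make a crossing restriction in the
    distance actually available."""
    return altitude_to_lose_ft / distance_nm

# Hypothetical example: FL350 down to cross at 10,000 ft with 70 nm to go.
to_lose = 35000 - 10000                              # 25,000 ft to lose
print(descent_distance_nm(to_lose))                  # ~83.3 nm needed
print(required_gradient_ft_per_nm(to_lose, 70))      # ~357 ft/nm available
```

With only 70 nm to go against roughly 83 needed, the required gradient is steeper than 3 degrees, which is exactly the situation where a manual check as soon as the clearance arrives beats trusting the FMS angle after ATC shortens the distance.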
There is a "gotcha" in the Gulfstream G450 avionics that can end catastrophically; I write about it here: G450 Vertical Mode Trap. Looking at how we came up with a solution shows the iterative Plan B process in action.
Cortés, Antonio; Cusick, Stephen; Rodrigues, Clarence, Commercial Aviation Safety, McGraw Hill Education, New York, NY, 2017.
Cortés, Antonio, CRM Leadership & Followership 2.0, ERAU Department of Aeronautical Science, 2008
Dekker, Sidney and Lundström, Johan, From Threat and Error Management (TEM) to Resilience, Journal of Human Factors and Aerospace Safety, May 2007
Helmreich, Robert L., Klinect, James R., Wilhelm, John A., Models of Threat, Error, and CRM in Flight Operations, University of Texas
Copyright 2019. Code 7700 LLC. All Rights Reserved.