The Boeing 737 MAX crisis shook the aviation world to its core. Two horrible crashes in less than five months left the industry scrambling to understand how something so catastrophic could happen 💥
Today, we’re going to dive into what went wrong: the design flaws, the poor oversight, and the pilot training failures that played a role in the tragedy.
We’ll break down the main differences between the 737 MAX’s design and earlier Boeing 737s, the controversial MCAS system, and what led to the crashes.
Of course, we’ll also look at the aftermath and the lessons we can learn from all this 💡
Let’s get started ⤵️
We don’t publish all our Notes from the Cockpit (like this one) publicly, some are shared only by email. Get the next one sent straight to your inbox ⤵️
Boeing 737 MAX Accidents Overview
The first Boeing 737 MAX crash happened on 29 Oct 2018, followed by the second one on 10 Mar 2019.
Let’s briefly go over what happened for each of these accidents. See whether you notice any similarities ⤵️
Lion Air Flight 610
On 29 October 2018, Lion Air Flight 610 (a Boeing 737 MAX 8) took off from Jakarta towards Pangkal Pinang.

Shortly after takeoff, the crew reported issues with flight controls, altitude, and airspeed to Air Traffic Control (ATC).
These problems were caused by an Angle of Attack (AOA) sensor that had been replaced just two days prior. The replacement sensor itself was defective, with a 21-degree error in its readings.
This caused quite a few technical consequences, such as:
🔸 Airspeed and altitude disagreements between the left- and right-hand displays
🔸 Activation of the stick shaker (a system designed to warn crew of an impending stall)
And most importantly:
🔸 The faulty AOA sensor triggered a new system called the Manoeuvring Characteristics Augmentation System (MCAS), which incorrectly activated multiple times
MCAS repeatedly pushed the aircraft’s nose down, as it had the wrong AOA information and thought the aircraft angle of attack was higher than it actually was.
The crew attempted to regain control, but they didn’t know about this new system, as MCAS was not covered in their training or manuals. They were unable to get control of the aircraft, partly because they were distracted by a number of conflicting warnings and indications, as well as ATC communication.
Similar issues had appeared on the plane’s previous flights, but they weren’t properly documented or understood by those flight crews. This was partly because key alerts (like the AOA Disagree alert) were not installed.
As a result, the aircraft entered an uncontrollable dive and crashed into the Java Sea, killing all 189 people onboard:

Ethiopian Airlines Flight 302
On 10 March 2019, Ethiopian Airlines Flight 302, a Boeing 737 MAX 8, departed Addis Ababa Bole International Airport at 05:38 UTC, bound for Nairobi.

Shortly after takeoff, the left Angle of Attack (AOA) sensor began providing incorrect data, causing the aircraft’s MCAS to activate.
MCAS, again, mistakenly pushed the aircraft’s nose down as it interpreted the faulty data as a high angle of attack.
The crew quickly realised there was a flight control problem and attempted to counteract the nose-down inputs by manually trimming the stabiliser and pulling back on the yoke.
However, MCAS reactivated repeatedly, which made recovery even more difficult. On top of this, the crew faced a persistent stick shaker (just like in the previous accident) and conflicting airspeed and altitude readings on the left- and right-hand displays, all of which massively increased their workload.
Despite their efforts to regain control, the repetitive MCAS activations and the high forces required to pull back on the control column overwhelmed the flight crew.

Six minutes after takeoff, the aircraft entered a steep dive and crashed into a field 28 nautical miles southeast of the airport at high speed. All 157 people onboard lost their lives.
We’ve covered automation surprise (instances where the automation does things we don’t quite understand) before, here:
Timeline of Boeing 737 MAX Events
Quite a lot of things happened in a short period of time (well, relatively short for how long these things tend to take). So let’s have a look at an overview of what happened and when, from beginning to end ⤵️

What is MCAS?
MCAS is an automated flight control system designed to enhance pitch stability during specific flight conditions, and to make the Boeing 737 MAX behave more like previous generation 737 aircraft.
The Boeing 737 MAX has some crucial differences from the models that came before it, so Boeing’s challenge was to make the plane feel similar for pilots already rated on the older variants.
What changes are most noteworthy here?
Engines: new LEAP-1B engines, which are larger, more fuel efficient, and quieter.
These new engines required a few design changes in other parts of the aircraft.
Why?
Well, the LEAP-1B engines are quite a bit larger and heavier than the older engines of the B737 NG.

This meant that they had to be mounted slightly higher and further forward on the wings, like this:

This change meant that the aerodynamic balance of the aircraft behaved slightly differently. With these larger engines in a slightly different place, the B737 MAX was prone to creating a nose-up pitching moment at high angles of attack.
This is why MCAS was installed ✅
It worked by using the data from an AOA sensor. To counteract an increasing pitch-up moment, it would push the nose down by commanding the horizontal stabiliser trim, like this:

This was mainly done because Boeing wanted to make the plane feel similar to the older versions of the Boeing 737. This way, no simulator training would be required.
Unfortunately, MCAS is a classic example of shit in = shit out. Flawed input data creates flawed output actions, which is what we’re going to get into for both accidents ⤵️
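To make that point concrete, here’s a deliberately simplified toy model of a trim system that trusts a single sensor. This is not Boeing’s actual control law — the threshold, function names, and units are all invented for illustration — but it shows how one bad reading can drive the whole output:

```python
def mcas_style_command(aoa_reading_deg, threshold_deg=10.0):
    """Toy model: command nose-down trim when the (single) AOA
    reading exceeds a threshold. No cross-check, no sanity limit."""
    if aoa_reading_deg > threshold_deg:
        return "NOSE_DOWN_TRIM"
    return "NO_ACTION"

true_aoa = 2.0        # the aircraft is actually flying normally
sensor_error = 21.0   # the Lion Air replacement sensor read ~21 degrees high
faulty_reading = true_aoa + sensor_error

print(mcas_style_command(faulty_reading))  # NOSE_DOWN_TRIM despite normal flight
```

With only one input and no plausibility check, the system has no way to tell a real stall from a broken vane — flawed data in, flawed trim command out.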
Boeing 737 MAX Investigation Findings
The investigation findings were very similar between the two accidents. Let’s go over the main ones:
Lion Air Flight 610 Findings
The Komite Nasional Keselamatan Transportasi (KNKT), Indonesia’s transport safety investigation body, listed the following nine findings for the Lion Air crash:
1️⃣ Boeing assumed pilots would respond to malfunctions in specific ways. These assumptions turned out to be wrong.
2️⃣ MCAS relied on only one AOA sensor, and this was considered acceptable during the certification process by the FAA, despite the risks
3️⃣ MCAS dependence on only one AOA sensor meant that it was highly susceptible to faulty readings
4️⃣ Pilots were not trained on how MCAS worked, or how to handle the system in case it malfunctioned
5️⃣ Flight crews were not able to identify a faulty AOA sensor due to the lack of alerting
6️⃣ The replacement AOA sensor installed on the aircraft had been mis-calibrated during repair, but this error was not detected before flight
7️⃣ Investigators could not confirm whether or not the AOA sensor installation was tested properly
8️⃣ The issues that were present on the previous flight were not documented properly, leaving the maintenance crew and flight crew uninformed
9️⃣ The pilots faced multiple warnings, repeated MCAS activations, and high workload due to ATC communications. This made the already difficult situation even harder to manage, and led to poor Crew Resource Management (CRM).
Ethiopian Airlines Flight 302 Findings
The Aircraft Accident Investigation Bureau (AAIB) of Ethiopia listed the following ten contributing factors to the Ethiopian Airlines crash:
1️⃣ MCAS design relied on a single AOA sensor, making it vulnerable to erroneous input
2️⃣ During the design process, Boeing failed to consider the potential for uncommanded activation of MCAS
3️⃣ Boeing did not evaluate all the potential alerts and indications that could be present during uncommanded MCAS inputs.
4️⃣ The MCAS contribution to cumulative AOA effects was not assessed
5️⃣ The combined effect of alerts and indications on the pilots’ ability to recognise what was going on was not evaluated by Boeing
6️⃣ The absence of an AOA DISAGREE warning on the Primary Flight Display (PFD) (this was an optional extra when you ordered a 737 MAX)
7️⃣ The flight crew’s differences training did not cover MCAS at all
8️⃣ Boeing’s failure to design pilot simulator training for critical systems like MCAS
9️⃣ Boeing failed to provide procedures regarding MCAS operations
🔟 Boeing failed to address some safety critical questions that were raised by the airline, which would have cleared up some confusion amongst pilots on task prioritisation
Similar Findings
As you can see, there is a lot of overlap between these two reports. The ones that jump out are:
🔸 MCAS’s reliance on only one sensor
🔸 The lack of pilot training for this critical system
🔸 The lack of a warning in case of a faulty AOA sensor
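As a hypothetical sketch of how a simple cross-check could address both the single-sensor reliance and the missing disagreement warning — the function names and the 10-degree threshold here are invented for illustration, not the certified AOA DISAGREE logic:

```python
def aoa_disagree(left_deg, right_deg, threshold_deg=10.0):
    """Toy cross-check: flag a disagreement between two AOA vanes.
    The threshold is illustrative, not a certified value."""
    return abs(left_deg - right_deg) > threshold_deg

def safe_aoa(left_deg, right_deg, threshold_deg=10.0):
    """Return a usable AOA value only when both vanes agree;
    otherwise signal the crew and inhibit automatic trim (None)."""
    if aoa_disagree(left_deg, right_deg, threshold_deg):
        return None  # AOA DISAGREE: don't act on either reading
    return (left_deg + right_deg) / 2

print(safe_aoa(23.0, 2.0))  # None -> alert crew, no automatic nose-down trim
print(safe_aoa(2.5, 2.0))   # 2.25 -> readings agree, value usable
```

The point isn’t the specific numbers: it’s that comparing two independent inputs lets the system detect its own bad data instead of acting on it.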
What Can We Learn From the Boeing 737 MAX Crashes?
From these two independent investigations, we can learn a lot of lessons that can apply to many other aircraft, situations, and aviation in general.
The main ones are:
🔸 Critical system design MUST account for failures
🔸 Comprehensive pilot training is essential for critical systems
🔸 Clear and accurate manuals are vital; pilots might need guidance in the heat of the moment
🔸 Regulatory oversight of aircraft certification must be rigorous so that risks don’t go unidentified
🔸 Aircraft manufacturers must foster a clear and responsive communication channel with operators
🔸 During high workload, CRM is one of the biggest tools we have as pilots. It can make the difference between a crash and just another flight
🔸 Both manufacturers and regulators (in addition to operators and pilots) must take responsibility and accountability for ensuring safety across the industry
Conclusion
The Boeing 737 MAX crisis highlights some serious lessons for aviation safety. The crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302 showed that key issues in the aircraft design, pilot training, and safety systems were missed ❌
A faulty Angle of Attack sensor, the lack of proper training on the MCAS system, and missing safety warnings all played a role in the tragedies.
From these accidents, the biggest takeaways are that aircraft systems must be designed to handle failures, pilots need proper training on how to deal with unexpected problems, and safety information must be clear and easy to follow.
It’s also important that manufacturers, regulators, and airlines work closely together to make sure planes are safe and pilots have the right support when things go wrong.
The aftermath of the 737 MAX crashes led to changes across the industry, but it’s still vital that we keep pushing for better safety practices and make sure that everyone involved takes responsibility for keeping aviation safe 🔒
Resources
Ethiopian Airlines Investigation Report