We have a Boeing 747, a full runway, and engines set for take-off power 🛫
Yet… it didn’t take off. It crashed and turned into a fireball, killing everyone on board.
No technical failures or weather issues.
So what happened?
Just… 1 wrong number.
The crew entered a takeoff weight that was off by over 100 tonnes.
That one input changed everything.
This meant the performance calculation produced:
🔸 Wrong V1
🔸 Wrong Vrotate
Which resulted in ⬇️
🔸 Not enough thrust
🔸 Rotating too early
No amount of procedures can overcome physics.
Whether it’s an FMS, a manual performance chart, or a phone call:
Give a process the wrong input, and it’ll give you the wrong output, perfectly!
The aircraft didn’t fail; it worked exactly as designed. Yet here we are.
Let’s take a look at what we can learn from this.
💥 Accident Overview
It’s 2004, and we’re at Halifax International Airport in Canada.
MK Airlines flight 1602 is a B747-200 (cargo) bound for Zaragoza in Spain, having already completed two sectors:

This was the third sector of a long duty sequence for the crew, who were likely quite fatigued at this point.
Before departure, the crew used the Boeing Laptop Tool (BLT) to calculate their target takeoff speeds and thrust settings.
Instead of entering the aircraft’s takeoff weight of 353,000 kg, they likely entered 240,000 kg (from the previous sector from Bradley). A difference of 113,000 kg.
Because of this error, the crew believed the aircraft was far lighter than it actually was, so the calculated V1, Vrotate, and V2 speeds, as well as the required thrust setting, were all too low.
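To get a feel for how sensitive these speeds are to the entered weight, here is a minimal, illustrative sketch. The square-root scaling is real physics (lift grows with the square of airspeed), but the reference speed and the simple model are invented for illustration, not actual B747-200 performance data:

```python
import math

def rotation_speed_kts(weight_kg, ref_weight_kg=240_000, ref_vr_kts=130):
    """Illustrative only: the lift needed scales with weight, and lift
    grows with speed squared, so required rotation speed scales roughly
    with the square root of weight. Reference figures are invented,
    not real B747-200 performance data."""
    return ref_vr_kts * math.sqrt(weight_kg / ref_weight_kg)

vr_entered = rotation_speed_kts(240_000)  # weight likely entered by the crew
vr_actual = rotation_speed_kts(353_000)   # the actual takeoff weight
print(f"Vr from entered weight: {vr_entered:.0f} kt")
print(f"Vr the aircraft needed: {vr_actual:.0f} kt")
```

Even in this toy model, the 113,000 kg discrepancy leaves the aircraft well short of the speed it actually needs before it can fly.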
While SOPs required an independent cross-check of these figures, it is likely that this did not happen.
The crew began the takeoff roll and initiated the rotation at Vrotate.
But because the aircraft was so much heavier than these figures assumed, it struggled to lift off. The tail hit the runway multiple times as the crew increased pitch to get the aircraft airborne.

The aircraft overran the runway, hit terrain, and crashed. A post-impact fire started and all seven crew members on board were killed.

We don’t publish all our Notes from the Cockpit (like this one) publicly, some are shared only by email. Get the next one sent straight to your inbox ⤵️
🔍 What Caused This Accident?
The causal chain here looked like this:

The question is, how did we get here?
The investigation team highlighted eight findings, but we’ll summarise the most important ones:
The Lack of a Cross-check on Performance Data
One of the most crucial links in this entire accident chain was the lack of a thorough cross-check of the performance data.
The SOPs stated it should have been done, but the reality was different, as per the report:
“The pilots of MKA1602 did not carry out the gross error check in accordance with the company’s standard operating procedures (SOPs), and the incorrect take-off performance data were not detected.”
The wrong data made its way through the calculation, into the takeoff briefing, and finally in the execution, completely unchallenged.
Crew Fatigue, Caused by a Weak Fatigue Management Culture
An increase in the crew’s maximum flight duty time from 20 to 24 hours raised the risk of fatigue.
Due to high turnover and a shortage of crew, more pressure was put on flight crews to ‘keep the ship going’.
There were also no company rules on maximum duty periods for ground staff, such as engineers and loadmasters. The report states:
“Crew fatigue likely increased the probability of error during calculation of the take-off performance data, and degraded the flight crew’s ability to detect this error.”
And:
“Crew fatigue, combined with the dark take-off environment, likely contributed to a loss of situational awareness during the take-off roll. Consequently, the crew did not recognise the inadequate take-off performance until the aircraft was beyond the point where the take-off could be safely conducted or safely abandoned.”
Lack of Formal Training on Performance Software
The investigation team highlighted:
“The company did not have a formal training and testing program on the BLT, and it is likely that the user of the BLT in this occurrence was not fully conversant with the software.”
The tools we use are only as good as the training we receive to use them.
In fact, you might have experienced yourself that adding more tools without training is actually a threat: it adds complexity to a system whose underlying process the human no longer understands.
💡 What Can We Learn From This?
Let’s strip it down to what we can take away from this horrible accident.
Incorrect Performance Data Remains a Persistent Threat
While the performance data here was calculated on a separate laptop rather than by the FMS, most modern aircraft now perform these calculations in the FMS.
But even with more automation involved, we’re not immune to errors or mistakes.
The ‘shit in = shit out’ principle still applies, and errors can quickly escalate if not caught by our colleagues or the procedures we follow.
Catching an Error Early Gives you Options and Time
The earlier you catch errors or mistakes, the more time you have to rectify the situation. This sounds obvious, but it can’t be overstated in a situation as “on the nose” as this one.
Yes, there was pressure to depart on time, and it’s a trap many of us can fall into. But before the takeoff roll, you have options. During it: very few!
Trust, But Verify
It’s a lesson we’ve shared here before:
Yes, we should trust colleagues and professionals around us to aim for the same goals. However, a healthy dose of scepticism is a requirement if you want to catch errors before they escalate.
A basic gross error check can make all the difference. And keep in mind: if you both look at the same tables, from the same source, in the same way, that’s not a cross-check; that’s just a formality.
Fatigue Management Isn’t Some Buzzword
More and more accident reports reference fatigue. Safety culture is incorporating fatigue better over time, but there is still a lot of work to be done.
Depending on the type of industry you’re in, Flight Time Limitations (FTLs) are still treated as targets by some operators.
In addition, the FTL schemes themselves are still actively being developed, as the science keeps uncovering safer ways to manage fatigue.
We’ve covered the ins and outs of fatigue, and how to manage it here:
💭 Conclusion
It might be tempting to think: ‘How could they possibly not detect this error?’
But the fact is, fatigue gets to all of us. We all make mistakes, and performance data errors can seriously compromise flight safety if they go uncaught.
No one shows up to work thinking ‘let me screw up the figures today’.
Most of us can probably recall moments where we were glad someone corrected us, and vice versa.
Stay vigilant, and if things get serious: verify the information you rely on.
You can find the investigation report here.
