I’ve had days where departing to a patient from a busy airport in the middle of summer feels like trying to thread a needle (while someone keeps nudging your elbow) 👀

The airspace is saturated, everyone’s talking, and you’re trying to focus on the next event.

On days like that, you’re even more aware of all the potential threats around you.

That’s why we’re always glad to have TCAS II for situational awareness, even in perfect weather.

Because you can be flying in clear VMC… and still end up on a collision course with something you haven’t been able to spot as a crew.

Yes, we have training & procedures.
Yes, we rely on see-and-avoid.

But… sometimes that’s just not enough.

This fatal accident is a sobering reminder of how quickly a shared mental picture can break down, and how fragile “see and avoid” can be.

Two experienced pilots, great weather, and a routine operation.

And they ended up in the exact same piece of sky.

How does that happen?

Let’s take a closer look at the collision over the Gold Coast ⬇️

💥 Accident Overview

It’s January 2nd, 2023. We have two helicopters, both EC130 B4s (let’s call them helicopter 1 & 2 for simplicity), operating from their Sea World base on the Gold Coast, Australia:

Helicopter 1 (VH-XH9) was commencing a 10-minute scenic flight from helipad 3 of the operating base:

Helicopter 2 (VH-XKQ) was operating from the park helipad, about 220 meters from helipad 3:

Helicopter 1 (VH-XH9) lifts first.

At 13:51, it departs helipad 3 for a short scenic flight with five passengers on board.

It’s a standard departure: the helicopter flies outbound over the water, climbs, and follows the usual coastal route:

So far, there’s nothing out of the ordinary.

A couple of minutes later, helicopter 2 is being loaded on the park helipad. The passengers get in, the doors close, and the checks are completed.

The ground crew look around, and give the thumbs up.

From their perspective: all clear!

Meanwhile, helicopter 1 is already on its way back as it tracks southbound.

The pilot spots helicopter 2 on the ground, with passengers boarding and the doors closing.

No immediate threat, and the expectation is:

If that helicopter moves, it’ll call.

So no additional calls are made. This is where things start to go sideways.

Helicopter 1 does make an inbound call.

But the pilot of helicopter 2 is still loading passengers at the time. He’s heads-down, distracted, and the call doesn’t register.

Later, helicopter 2 is expected to make a taxi or departure call.

But due to a fault in its radio system, it likely wasn’t transmitted.

So we now have a situation where, from each pilot’s perspective:

🔸 One pilot thinks the other has heard them
🔸 The other is waiting for a call that never comes

And neither realises this communication loop is broken.

Helicopter 2 lifts off and begins climbing out to the south.

At the same time, Helicopter 1 is descending back towards the heliport.

Both pilots are pretty busy:

🔸 Pilot 2 is managing the departure and traffic below
🔸 Pilot 1 is adjusting his flight path to pass behind a vessel ahead

At 13:55:59, a passenger on board Helicopter 1 sees the other helicopter getting closer.

They try to warn the pilot, tapping him on the shoulder. Unfortunately, it’s too late.

You can see the last few seconds here (please be warned, it’s unpleasant footage): ⬇️

At 13:56:06, just 130 feet above the ground, the two helicopters collide.

Helicopter 2 is immediately uncontrollable and crashes onto a sandbar.

Helicopter 1 is badly damaged, but the pilot manages to maintain control.

The report states:

“The acrylic windscreens and composite structure of the front of XH9 were shattered as the main rotor blades of XKQ passed through the cabin.”

And:

“The pilot and passengers in XH9 were immediately peppered with penetrating fragments”

They land the helicopter on the sandbar less than 30 seconds later.

For Helicopter 1, the pilot and 2 passengers sustained serious injuries, while the 3 other passengers sustained minor injuries.

For Helicopter 2, the pilot and 3 passengers were fatally injured, while the 3 other passengers were seriously injured.


🔍 What Caused this Collision?

Like many accidents, this one wasn’t caused by a single error or threat. There are a few relevant factors. Let’s unpack ⤵️

Loss of Shared Situational Awareness

🔸 Neither pilot’s mental model reflected reality at the time of the accident

🔸 Radio calls were missed, not received, or (likely) not transmitted due to issues with radio equipment

🔸 Passengers were a distraction on the ground, competing with safety-critical information on the radio

🔸 The ground crew were not able to bridge the gap between the two pilots’ mental models

Limitations of the See and Avoid Technique

The problem with see and avoid is that it’s reactive in nature.

You scan outside ➡️ spot an aircraft ➡️ assess flight paths ➡️ take avoiding action if required.

Reactive! 🔁

The best safety procedures are pro-active in the way they’re designed.

At some point, humans will make mistakes somewhere in the chain. Acknowledging that requires a system that accounts for it.

While see and avoid is a proven technique, it relies heavily on us NOT screwing up a continuous, repetitive task (i.e. actually spotting traffic).
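To get a feel for how little time the see ➡️ assess ➡️ avoid chain actually has to work with, here’s a small illustrative calculation (the speeds and distances below are assumptions for the sketch, not figures from the ATSB report):

```python
# Illustrative closure-time arithmetic for converging traffic.
# Assumed numbers, purely to show the principle.
KT_TO_MS = 0.514444  # knots to metres per second

def seconds_to_impact(distance_m: float, closure_kt: float) -> float:
    """Time remaining until two converging aircraft meet,
    given their separation and combined closure speed."""
    return distance_m / (closure_kt * KT_TO_MS)

# Spot conflicting traffic 1 km away with a 200 kt combined
# closure speed (e.g. two aircraft head-on at 100 kt each):
print(round(seconds_to_impact(1000, 200), 1))  # → 9.7 seconds
```

Under 10 seconds to spot the traffic, assess the flight paths, decide, and manoeuvre. Every human limitation in that chain eats into a budget that was small to begin with.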

The Route Design Created a Planned Conflict Point

If a system or procedure is designed with points of conflict baked in, you’re allowing risk into the operation before it’s even started.

The approach onto helipad 3, and the departure from the park helipad, had a known point of conflict.

This comes back to the earlier point about see and avoid.

Yes, you could decide to rely on that, but should you want to (from a systems perspective)? Is it a good idea to require reactive safety mechanisms to keep the operation running smoothly?

Ineffective Safety System and Change Management

The operation was going through a bit of change:

🔸 new helicopters
🔸 new helipads
🔸 concurrent operations

That’s a few examples of change where, without a robust safety management strategy, things can go wrong quickly.

A lot of us have probably experienced this at one point or another in our careers. There’s a new type, destination, or capability that’s introduced, but you don’t quite feel adequately prepared or trained to deal with it.

The change or situation itself might not be the problem; it’s the assumption that our interaction with those things “will be fine” without any issues or gaps down the line.

Side-note: The ATSB mentions traces of cocaine found on one of the pilots. While this is certainly noteworthy, we have not listed it here, as the ATSB mentions:

“The low concentrations detected suggest that the use was not likely to have been within 24 hours of the incident. The low concentrations also showed it was unlikely that the pilot’s psychomotor skills were impaired during the accident flight.”

💡 What can we Learn From This Accident?

There are quite a few lessons we can learn from this horrible accident ⤵️

See and Avoid is a Last Resort, not a Primary Strategy

The entire VFR industry (by definition) relies on see and avoid. Yes, it’s effective – but it’s also reactive (see ➡️ then avoid).

It is meant as a last resort, built on top of the planning and procedures that do the majority of the lifting. Think of circuits, ATC, and SOPs as the foundation that see and avoid builds on.

If you rely on see and avoid as the main safety barrier against collisions, especially in smaller spaces with known conflict points, you’re playing the odds.

A great research report by the ATSB (Hobbs, 1991) summarises this well:

“Not only does the whole process take valuable time, but human factors at various stages in the process can reduce the chance that a threat aircraft will be seen and successfully evaded. These human factors are not ‘errors’ nor are they signs of ‘poor airmanship’. They are limitations of the human visual and information processing system which are present to various degrees in all pilots.”

Procedures Should be Designed with The Assumption Humans Will Eventually Screw Up

If you want to know if the system you operate in has a pro-active perspective on safety, ask yourself:

“Does the system I am in acknowledge and account for the fact that humans inside the system will eventually make a mistake?”

It took us years in the industry to get to a point where “human error” changed from being a “bad trait” to a human reality that threat & error management is designed for. We covered this here:

We went from “making mistakes = bad pilot”

To:

Making mistakes = inevitable and requires mitigations for when it happens.

If procedures demand that pilots make no mistakes, it’s worth questioning whether that’s a reliable long-term strategy for safeguarding a dynamic operation.

Effective Change Management is Easier Said than Done

The operator introduced quite a few changes to the operation, like:

🔸 simultaneous operations from two different helipads
🔸 new EC130 helicopters
🔸 a new conflict point, created by simultaneous operations that included the park helipad

The investigation stated:

“Sea World Helicopters’ change management process, conducted prior to reopening the park pad, did not encompass the impact of the change on the operator’s existing scenic flight operations. Crucially, the flight paths and the conflict point they created were not formally examined, therefore limitations of the operator’s controls for that location were not identified.”

The problem with change is that it makes errors and mistakes more likely. That’s just the nature of humans needing mental resources to adapt to new situations.

Careful change management is easier said than done: it requires acknowledging that change makes a system more fragile, and pro-actively mitigating that threat.

Assumptions are Persistent Threats in Aviation

I’ve learnt this the hard way: assumptions are threats all by themselves, especially in combination with distraction (like the passengers boarding helicopter 2 on the ground here).

Even if the assumption you’re making happens to be correct in the moment, it can still cause issues down the chain.

“Trust, but verify” comes to mind.

In this particular accident, both pilots made assumptions that “X can’t be true because of Y”. But the problem then becomes: if Y is incorrect, so is X.

When we unravel accident chains, it’s still very common to see a pattern of assumptions coming back later in the form of a problem.

In this case, verifying the take-off status of the second helicopter would have helped the pilot of helicopter 1 build a more accurate picture of what was happening around the heliport.

💭 Conclusion

Two trained pilots, both doing what made sense from where they were sitting… and still ending up in the exact same bit of sky.

This is partly a system problem, and a reminder for all of us that a lot of what we rely on day-to-day (radio calls, see-and-avoid, timing) works well when everything lines up the way we expect it to.

When it doesn’t, things can unravel quickly.

The lessons: question your assumptions, double-check your mental model of what’s going on around you, and when a system has risk built into it, be even more cautious.

You can find the ATSB report here.

Categories: Why Spotlights

Jop Dingemans

Founder @ Pilots Who Ask Why 🎯 Mastering Aviation - One Question at a Time | AW169 Helicopter Pilot | Aerospace Engineer | Flight Instructor
