On the night of January 29, 2025, two professional crews flew in night VMC towards one of the most controlled pieces of airspace in the world 🌎

One was a CRJ700 inbound to Ronald Reagan Washington National Airport.

The other was a US Army Black Hawk on a routine NVG evaluation flight.

They collided over the Potomac River, about 0.5 miles southeast of the airport, killing all 67 people on board both aircraft.

When this happened, a lot of people in the industry just could not understand how something this catastrophic could happen in such tightly controlled airspace.

Let’s unpack what happened, and what we can learn from it.

💥 Accident Overview

The CRJ700 (registration N709PS, operated by PSA Airlines as American Airlines flight 5342) is inbound to Washington National.

The airport sits just across the Potomac from central Washington, on the river’s west bank.

This flight is a standard scheduled flight. Nothing unusual on departure, cruise, or during the initial descent.

At the same time, a US Army UH-60L Black Hawk (under the callsign PAT25) is out on a night vision goggle evaluation flight.

Both aircraft are flying in night VMC, although the CRJ is flying under IFR.

The Black Hawk departs Davison Army Airfield, does a couple of landings in Virginia and Maryland, and then turns south toward Washington.


They’re cleared to transition DCA airspace via Helicopter Routes 1 and 4: the Potomac corridor.

They join Route 1 near Cabin John and track southbound. Their route crosses visual landmarks like Key Bridge, Memorial Bridge, the Tidal Basin, and Hains Point, before continuing onto Route 4.

Meanwhile, the CRJ crew is planning a visual approach to Runway 01, but ATC asks if they can accept Runway 33 instead. The crew accepts and starts setting up for the new approach.


Now here’s where their routes start to converge.

The helicopter is transitioning from Route 1 to Route 4, while the CRJ is circling to line up for Runway 33.

Their paths are starting to angle toward each other.

ATC tells the helicopter about the CRJ, which is about 6.5 nm south of the helicopter.


It was night, and the airplane’s lights would have been visible, but mixed in with the lights of several other aircraft also approaching Washington from the south, as well as city lights.

This is what the Black Hawk pilots saw based on a reconstructed image from the NTSB:

Credit: NTSB

The instructor pilot in the Black Hawk said they had the traffic in sight and requested visual separation, which ATC approved.

About a minute and a half later, the two aircraft were getting much closer, near the Runway 33 approach path.

This is what the CRJ crew would have seen, according to another NTSB reconstructed image:

Credit: NTSB

The controller checked again and told the helicopter to pass behind the airplane.

But at that exact moment, one of the helicopter pilots pressed the radio transmit button for 0.8 seconds. That short transmission blocked part of the controller’s message.

The helicopter crew did not hear the words “pass behind.”
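Why can a 0.8-second key press mask part of a clearance? On a simplex VHF channel, a radio cannot receive while it is transmitting, so anything said during the press is lost to that crew. A toy sketch of the effect, where the word timings are entirely invented for illustration and only the 0.8-second press duration comes from the investigation:

```python
# Toy model of a "stepped-on" transmission on a simplex VHF channel.
# While the helicopter's transmit key is pressed, its radio cannot
# receive, so words overlapping the press window are lost.
# All word durations below are invented; only the 0.8 s press is real.

def words_heard(message, key_press):
    """Return the words whose playback does not overlap the key press."""
    press_start, press_end = key_press
    heard = []
    t = 0.0
    for word, duration in message:
        start, end = t, t + duration
        # A word is lost if any part of it overlaps the blocked window
        if end <= press_start or start >= press_end:
            heard.append(word)
        t = end
    return heard

# Hypothetical controller call, each word with an assumed duration (s)
controller_call = [("PAT25,", 0.6), ("pass", 0.3), ("behind", 0.5),
                   ("the", 0.2), ("CRJ", 0.5)]

# A 0.8 s key press starting 0.6 s into the call masks "pass behind"
print(words_heard(controller_call, key_press=(0.6, 1.4)))
# -> ['PAT25,', 'the', 'CRJ']
```

The crew hears a plausible-sounding but incomplete instruction, with no cue that anything was missing.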

They again said they had the airplane in sight and continued south along Route 4.

The CRJ crew continued the final approach to Runway 33.

A few seconds later, at about 278 feet above the Potomac River, the two aircraft collided, killing everyone on board.


🔍 Investigation Findings

The investigation published over 70 findings; here is a summary of the most relevant ones:

Crowded Route Design and Missed Warnings

At the centre of this accident was something structural: Route 4 sat uncomfortably close to the Runway 33 approach path, at an airport already running at very high capacity for its infrastructure.

On top of that, this wasn’t new.

The risk had shown up in data before, and controllers had raised concerns, but according to the NTSB the route was never fundamentally re-evaluated. The report states:

“The Federal Aviation Administration Air Traffic Organization was made aware of, and had multiple opportunities to identify, the risk of a midair collision between airplanes and helicopters at Ronald Reagan Washington National Airport; however, their data analysis, safety assurance, and risk assessment processes failed to recognize and mitigate that risk.”

The system allowed helicopters and airliners to operate in tight proximity without additional procedural safeguards.

Normalising Visual Separation

The DCA operation leaned heavily on pilot-applied visual separation to keep traffic moving efficiently (like many airports do to be fair).

That works when everything lines up. But at night, in complex airspace, with multiple aircraft and high workload, see-and-avoid during ILS crossings has real limits (I’ve done it myself many times at Gatwick Airport).

Over time, visual separation became the default tool for managing mixed helicopter and fixed-wing traffic.

So what’s the problem then?

It slowly became the primary safety barrier, instead of the last one!

Shared Understanding Wasn’t There

Published helicopter route information didn’t give everyone the same mental picture. Fixed-wing charts didn’t clearly show intersecting helicopter routes.

Also, many Army pilots believed that flying at or below the published route altitude inherently kept them separated from jet traffic.

But it didn’t.

The NTSB also noted:

“The Army did not have a flight safety data monitoring program for helicopters, and as a result, was unaware of routine altitude exceedances and related risks in the Ronald Reagan Washington National Airport terminal area.”

That gap in shared understanding meant that crews were operating with different assumptions about the protection those altitudes actually provided, not to mention barometric errors we will cover further down.

The Available Tools and Systems at the Army

The NTSB mentions a few findings that stem directly from the tools available to the Army, as well as its overall culture.

It notes:

“The Army’s safety reporting systems for pilots were not well utilized and did not provide the organization with information about close encounters between Army helicopters and other aircraft that were later found to have occurred frequently.”

As well as:

“The Army’s process for allocating resources to aviation safety management did not ensure the development of a robust safety management system for helicopter operations in the Washington, DC area.”

It also states:

“The Army did not ensure that helicopter pilots were adequately informed about the effects of allowable error tolerances in barometric altimeters.”

Due to this error tolerance, the helicopter was flying above the published maximum altitude.
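To see how an allowable altimeter error eats into vertical margin, here is a toy calculation. The 200 ft Route 4 ceiling near the airport and the roughly 278 ft collision altitude are on record; the ±70 ft allowable error is an assumed figure for illustration only, not taken from the report:

```python
# Illustrative only: how an allowable barometric altimeter error can
# put an aircraft above a published ceiling while indicating compliance.
# ROUTE_CEILING_FT and the ~278 ft collision altitude are documented;
# ALLOWABLE_ERROR_FT is an assumed figure for illustration.

ROUTE_CEILING_FT = 200      # published maximum altitude on Route 4
ALLOWABLE_ERROR_FT = 70     # assumed allowable altimeter error (ft)
COLLISION_ALT_FT = 278      # approximate collision altitude (ft)

# A crew flying exactly at the indicated ceiling, on an altimeter
# reading low by the full tolerance, could actually be this high:
worst_case_true_alt = ROUTE_CEILING_FT + ALLOWABLE_ERROR_FT
print(worst_case_true_alt)                  # -> 270

# Remaining gap between that worst case and the collision altitude:
print(COLLISION_ALT_FT - worst_case_true_alt)  # -> 8
```

With numbers in that ballpark, an instrument reading "on the route" can coexist with almost no real vertical separation at all.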

Unfortunately, these are systemic issues that require time and a lot of focus to actively resolve.

Visual Separation Wasn’t Effectively Applied

The helicopter crew reported the traffic in sight. But degraded radio reception meant they didn’t fully understand the CRJ was circling to runway 33.

That reinforced an expectation that the airplane wasn’t a factor.

After all, assumptions and expectations can shape how we scan for and perceive information.

The investigation found they requested visual separation without positively identifying the specific aircraft that would become the conflict.

This, combined with the barometric altitude issue discussed earlier, contributed to the outcome here.

Workload in the Tower

That night, helicopter control and local control were combined during elevated traffic volume.

The investigation found that workload degraded controller performance and situational awareness.

There was no structured risk assessment process in place to flag converging routes in real time.

Traffic advisories were incomplete, and no safety alerts were issued.

When bandwidth narrows, priorities can shift! Sometimes this is subtle, but cases like this show it can have catastrophic outcomes.

Technology Didn’t Bridge the Gap

Both crews were flying at night, against a backdrop of city lights, with limited relative motion between aircraft, one of the hardest visual scenarios (as we’ve seen in the images from the NTSB earlier).

The helicopter had no integrated collision avoidance system; had one been installed, it would have provided an alert of the impending collision.

The CRJ’s TCAS worked as designed and, due to the aircraft’s low altitude, only gave a traffic advisory, not a resolution advisory (a climb or descend instruction to avoid the conflict). The NTSB noted, however, that if the next generation of ACAS had been installed, the crew would have received a traffic advisory 8 seconds earlier.

More advanced systems could have helped, but they just weren’t in place.
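For context, here is a hedged sketch of the TCAS behaviour described above: TCAS II inhibits resolution advisories at low height above ground (roughly below 1,000 ft AGL), leaving only traffic advisories close to the runway. The threshold and logic below are heavily simplified for illustration:

```python
# Simplified sketch of why the CRJ only got a traffic advisory (TA).
# TCAS II inhibits resolution advisories (RAs) at low height above
# ground; the exact thresholds and surveillance logic are far more
# complex than this illustration.

RA_INHIBIT_AGL_FT = 1000  # approximate height below which RAs are inhibited

def tcas_advisory(own_agl_ft, threat_detected):
    """Return the strongest advisory available at this height AGL."""
    if not threat_detected:
        return "CLEAR"
    if own_agl_ft < RA_INHIBIT_AGL_FT:
        return "TA"   # traffic advisory only: no climb/descend guidance
    return "RA"       # resolution advisory possible

# The CRJ was around 300 ft on final, so only a TA was available:
print(tcas_advisory(own_agl_ft=300, threat_detected=True))   # -> TA
print(tcas_advisory(own_agl_ft=3000, threat_detected=True))  # -> RA
```

The design choice behind the inhibit is deliberate (avoiding climb/descend commands close to the ground), but it means the last automated barrier is weakest exactly where this conflict occurred.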

💡 What can we Learn From This?

So what are the lessons here? Let’s list them:

1️⃣ Designing systems without enough margin can eventually bite you

If you put a helicopter route right next to a busy runway approach, you’re building risk into the system.

Route 4 ran very close to Runway 33’s approach path. Everyone kind of “made it work” for years… until one night it didn’t.

The big lesson: If a system depends on everything going perfectly every time, it’s not a safe system.

2️⃣ “See and avoid” sounds good, until it’s dark and busy

The whole operation leaned heavily on visual separation.

But:

🔸 It was night
🔸 City lights were everywhere
🔸 The helicopter had low relative motion
🔸 The jet crew were flying a circling approach
🔸 The helicopter crew were using NVGs

That’s a lot of workload and visual clutter.

When we flew night crossings underneath London Gatwick’s ILS, the amount of lights everywhere made the night vision goggles a lot less effective, and actually made spotting traffic a lot more difficult. It always required active participation and confirmation from both crew members.

Not all lights are captured by NVGs either, which adds to the risk unless you flip them up and fly unaided (intermittently or fully).

Humans are not great at spotting small moving lights against complex backgrounds. Both crews believed they had the situation under control, but they didn’t.
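The “limited relative motion” point has a geometric root: two aircraft on a collision course keep a constant relative bearing, so the other aircraft barely moves in your windscreen and is easily lost among static background lights. A sketch with invented positions and speeds, chosen so both tracks meet at the same point at the same time:

```python
import math

# Illustrative geometry (all numbers invented): two aircraft on straight
# converging tracks that both reach the origin at t = 60 s. On a
# collision course the bearing from one to the other stays constant,
# so the target shows almost no apparent motion against the backdrop.

def bearing_deg(own, other):
    """Bearing (degrees, math convention) from own position to other."""
    dx, dy = other[0] - own[0], other[1] - own[1]
    return math.degrees(math.atan2(dy, dx)) % 360

def position(start, velocity, t):
    return (start[0] + velocity[0] * t, start[1] + velocity[1] * t)

jet_start, heli_start = (-6000.0, 0.0), (0.0, -3000.0)
jet_vel  = (100.0, 0.0)   # m/s, eastbound toward the origin
heli_vel = (0.0, 50.0)    # m/s, northbound toward the origin

for t in (0, 20, 40, 55):
    b = bearing_deg(position(heli_start, heli_vel, t),
                    position(jet_start, jet_vel, t))
    print(f"t={t:2d}s  bearing from helicopter to jet: {b:.1f} deg")
# The bearing stays the same at every step: zero apparent motion,
# exactly the kind of target the eye struggles to pick out at night.
```

Only range and apparent size change until very late, which is why a steady light in a sea of city lights is such a poor trigger for “traffic!”.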

3️⃣ Assumptions can affect what we actually see

The helicopter crew thought they were separated by altitude. Many Army pilots believed staying at or below the route altitude meant they were inherently safe.

But:

🔸 They were actually above the published max altitude
🔸 They didn’t get the “pass behind” call due to radio blocking
🔸 They believed they had the traffic in sight, without positively identifying it

Once your brain decides something isn’t a threat, your scan changes if you’re not cross-checking your mental model.

4️⃣ Safety rarely collapses suddenly, it often erodes gradually

The tower controller was working combined positions during high traffic volume. That increases cognitive load.

When workload rises:

🔸 Priorities get shuffled
🔸 Advisories become incomplete
🔸 Conflict alerts are missed or delayed
🔸 Risk assessment becomes reactive instead of proactive

Nothing catastrophic by itself, but all these small degradations add up, until they hit safety where it hurts.

5️⃣ Relying on humans or technology is a choice, and that choice will eventually get tested

🔸 The helicopter wasn’t using integrated collision avoidance.
🔸 The EFB could have shown ADS-B traffic, but wasn’t monitored.
🔸 The jet’s TCAS worked as designed, but was limited at low altitude.
🔸 More advanced systems (ACAS X/Xr) could have helped here; simulations showed they would have significantly reduced the risk of collision.

Some of these are not mandated, and even if they were: technology is only as good as the implementation and the SOPs around it.

6️⃣ Accidents are usually a string of variables lining up, just like this time

This wasn’t one big mistake that caused the collision; it was a handful of things that lined up “perfectly”.

🔸 Design decisions
🔸 Cultural normalisation
🔸 High workload
🔸 Assumptions
🔸 Partial tech adoption
🔸 Data that’s not acted on

In hindsight, the system was running on tight margins in many ways for a long time, and this was the outcome of those conditions.

💭 Conclusion

This accident happened because a complex system was running close to its limits for a long time.

➡️ Routes that were placed too close together.
➡️ Visual separation used as the primary tool.
➡️ High workload in the tower.
➡️ Assumptions on both sides.
➡️ Technology that almost helped, but not quite enough.

None of these alone were catastrophic.

But together, they caused a tragic accident that we can learn a lot from, for years to come.

You can find the full NTSB report here.


Jop Dingemans

Founder @ Pilots Who Ask Why 🎯 Mastering Aviation - One Question at a Time | AW169 Helicopter Pilot | Aerospace Engineer | Flight Instructor
