Lessons Learned from Last Year’s Major Breaches


If last year taught us anything, it’s that major cyber breaches are no longer shocking; they’re familiar, even predictable.

Throughout the year, we covered a number of high‑profile incidents in our blogs.

Different organisations, sectors, and threat actors, but similar themes kept emerging. The operational disruption affecting UK retailers such as Marks & Spencer and Co-op, the aviation supply chain issues that grounded flights and disrupted passengers, and widespread service interruptions linked to major cloud platforms all demonstrated how sector differences matter far less than systemic weaknesses.

While breach analysis often focuses on what attackers did, the more valuable question for CISOs is what should be done differently as a result.

The Same Attacks, the Same Weak Spots

One of the clearest patterns across last year’s breaches was how rarely attackers needed anything particularly novel. In many cases, the initial access points were worryingly ordinary: compromised credentials, weak access controls, over-privileged accounts, or unaddressed third-party weaknesses.

We saw this clearly not only in retail incidents such as those affecting M&S and Co-op, but also in aviation-related disruptions where supplier system weaknesses had cascading operational consequences. In many incidents, disruption was triggered not by a single dramatic failure but by small, accumulated weaknesses across interconnected systems – something we explored in our blog Grounded by a Glitch: What the Airport Chaos Teaches Us About Supply Chain Cybersecurity.

There’s a tendency, when incidents escalate publicly, to describe attackers as “highly sophisticated”. Sometimes that’s true. But sophistication doesn’t negate the fact that many breaches still rely on gaps in basic security hygiene. Controls that were once implemented, documented, and signed off had quietly eroded over time.

What stood out wasn’t a lack of knowledge. Everyone involved understood what good security looked like. The problem was ownership. Fundamental controls – identity, access, monitoring – often existed on paper but lacked active oversight.

If there’s one clear lesson here, it’s that foundational controls don’t stay effective just because they once were. They need constant attention, particularly as organisations change around them.

The M&S and Co-op incidents reinforce exactly this point: large, well-known organisations can be compromised through non-novel means, with business impact driven by systemic weaknesses rather than cutting-edge attack techniques.
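
To make that ongoing attention concrete, here is a minimal sketch of the kind of periodic check a team might run against an identity export, flagging privileged accounts that have gone quiet and may no longer need their access. The CSV format, column names, role labels, and 30-day threshold are all assumptions for illustration, not a reference to any specific identity platform.

```python
import csv
from datetime import datetime, timedelta

# Illustrative values; real thresholds depend on your own access policies.
PRIVILEGED_ROLES = {"admin", "domain_admin", "global_admin"}
MAX_INACTIVITY = timedelta(days=30)

def flag_stale_privileged_accounts(export_path, now=None):
    """Flag privileged accounts with no recent sign-in, read from a hypothetical
    identity export with columns: username, role, last_sign_in (ISO 8601)."""
    now = now or datetime.now()
    flagged = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["role"].lower() not in PRIVILEGED_ROLES:
                continue
            last_seen = datetime.fromisoformat(row["last_sign_in"])
            if now - last_seen > MAX_INACTIVITY:
                flagged.append((row["username"], row["role"], (now - last_seen).days))
    return flagged

if __name__ == "__main__":
    for username, role, days in flag_stale_privileged_accounts("identity_export.csv"):
        print(f"Review {username} ({role}): no sign-in for {days} days")
```

The script matters far less than the habit it represents: foundational controls such as privileged access stay effective only if something, or someone, keeps checking them.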

Detection Still Lags Behind Belief

Another recurring theme was the length of time attackers were able to operate undetected. In several incidents we examined, the breach itself wasn’t catastrophic at the point of entry. The damage came later, during the days or weeks when nobody realised anything was wrong.

This same visibility gap appears in large‑scale outages and service failures too. In our blog When the Cloud Fails: A Wake‑Up Call for Business Leaders, we highlighted how reliance on complex platforms can mask early warning signs. The major outage affecting Amazon Web Services disrupted operations for thousands of companies globally, demonstrating how confidence in resilient infrastructure can obscure detection blind spots until disruption becomes unavoidable.

In retail and aviation incidents alike, early indicators were present but either not escalated quickly enough or not correlated across systems. Many organisations invest heavily in tools designed to prevent cyberattacks. But what last year’s breaches highlighted was that far fewer organisations are confident they would spot an attack while it is happening, or recognise quickly when something has gone wrong. In several cases, controls were in place, but warning signs were missed, alerts were not acted on in time, and attackers operated undetected for long enough to cause real damage.

The uncomfortable reality is that breach impact is often determined less by whether an attack happens, and more by how quickly it’s recognised and contained. That gap remains one of the industry’s most persistent blind spots.
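
One way to make that gap visible is to measure it. The sketch below compares when alerts were raised against when they were first acted on and flags anything that sat untriaged beyond an agreed window; the alert fields and the 24-hour threshold are illustrative assumptions rather than a reference to any particular SIEM or ticketing tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative triage window; agree the real figure with your own SOC.
TRIAGE_SLA = timedelta(hours=24)

@dataclass
class Alert:
    source: str                      # e.g. "EDR", "cloud audit log"
    raised_at: datetime
    triaged_at: Optional[datetime]   # None means nobody has looked at it yet

def overdue_alerts(alerts, now):
    """Return (alert, dwell time) pairs for alerts triaged late or not at all."""
    overdue = []
    for alert in alerts:
        first_action = alert.triaged_at or now
        dwell = first_action - alert.raised_at
        if dwell > TRIAGE_SLA:
            overdue.append((alert, dwell))
    return overdue

if __name__ == "__main__":
    now = datetime(2025, 1, 10, 9, 0)
    sample = [
        Alert("EDR", datetime(2025, 1, 6, 3, 0), None),        # still untriaged
        Alert("cloud audit log", datetime(2025, 1, 8, 10, 0),
              datetime(2025, 1, 8, 11, 30)),                   # handled promptly
    ]
    for alert, dwell in overdue_alerts(sample, now):
        print(f"{alert.source}: unactioned for {dwell}")
```

Reporting dwell time in this way turns “we would have spotted it” into a number that can be tracked and challenged.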

Third Party Risk Is No Longer Abstract

Supply chain and third‑party incidents continued to ripple across multiple sectors last year, amplifying the impact far beyond the initially compromised organisation. What made these incidents particularly striking wasn’t that suppliers were involved – that’s not new – but how thin the assurance often proved to be.

As we saw during major operational disruptions affecting transport and travel, a single failure in a shared service or supplier can cascade rapidly. The airport chaos we discussed earlier is a useful parallel here, not because it was caused by a breach, but because it exposed how dependent many organisations are on systems they do not fully control.

Across several breaches, supplier assessments existed, but they rarely focused effort where it mattered most. Critical suppliers were assessed in the same way as low‑risk ones. Questionnaires were completed, but remediation actions weren’t always verified. Over time, assurances became assumptions.

The result was a familiar pattern: organisations believed risk was being managed, right up until it wasn’t.

The lesson here is not that third‑party risk is unmanageable; it’s that treating all suppliers equally is a false economy. Real resilience comes from prioritisation, challenge, and a willingness to apply pressure where dependencies are highest.
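
As a simple illustration of that prioritisation, the sketch below tiers suppliers by the access and criticality they actually represent, and lets the tier drive the depth of assurance applied. The criteria, scoring, and cadences are invented for the example; a real model should reflect your own dependencies and contracts.

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    handles_personal_data: bool
    has_network_access: bool
    single_point_of_failure: bool   # no viable alternative at short notice

def assurance_tier(supplier):
    """Map a supplier's risk attributes to an assurance tier.
    The criteria and cadences here are illustrative, not a standard."""
    score = sum([supplier.handles_personal_data,
                 supplier.has_network_access,
                 supplier.single_point_of_failure])
    if score >= 2:
        return "Tier 1: annual audit or evidence review, verified remediation, tested exit plan"
    if score == 1:
        return "Tier 2: annual questionnaire with spot-checked evidence"
    return "Tier 3: contractual clauses and self-attestation"

if __name__ == "__main__":
    suppliers = [
        Supplier("Managed IT provider", True, True, True),
        Supplier("Marketing agency", True, False, False),
        Supplier("Stationery supplier", False, False, False),
    ]
    for supplier in suppliers:
        print(f"{supplier.name}: {assurance_tier(supplier)}")
```

The value is not in the scoring itself but in forcing an explicit answer to which suppliers genuinely warrant the deepest scrutiny.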

Compliance Was Present – Curiosity Wasn’t

In several incidents last year, organisations were not disregarding standards or best practice. Policies were documented, audits had taken place, and certifications were in place. The gap was not compliance but a lack of ongoing challenge, particularly when environments changed, services evolved, or ownership became blurred.

This was especially apparent in incidents involving shared platforms and cloud services, where disruption in one environment quickly affected many others at once, as with the major Amazon Web Services (AWS) outage mentioned earlier. Reliance on certified platforms, whether in cloud infrastructure, aviation systems, or retail technology estates, can create a quiet shift in accountability, where critical questions around resilience, visibility, and ownership are left untested. When failures occurred, many organisations discovered that responsibility was unclear and oversight weaker than expected, despite operating within what were assumed to be well‑governed environments.

Frameworks such as ISO 27001 are highly effective at setting clear expectations and providing a strong governance foundation. Their real value is realised when organisations actively apply professional judgement alongside the framework, using it to continually reassess assumptions, review exceptions, and ensure controls evolve in line with changing risks. In practice, this means regularly testing whether workarounds are still justified, whether accepted risks remain valid, and whether controls continue to reflect how the organisation actually operates. When used this way, ISO 27001 becomes a dynamic management system that supports informed decision‑making and sustained resilience, rather than a static set of requirements.

Risk Was Known

One of the clearest lessons from last year’s breaches is that the risks were often already known. Vulnerabilities had been identified, legacy systems were understood, and security teams had raised their concerns.

Incidents linked to operational disruption and third‑party dependencies showed that risks are frequently tolerated for too long, until an event forces a response. What turned these risks into breaches was not lack of awareness but acceptance.

That acceptance is rarely deliberate or careless. Competing priorities, budget constraints, and operational pressures are realities for most organisations. However, when risk decisions lack clear ownership, review timeframes, or escalation beyond the security function, they tend to quietly persist.

When you look closely, many breaches are not caused by a lack of technology, but by how decisions about risk are handled. Security teams often flag issues, but the business ultimately decides whether to live with the risk. When those decisions are clearly recorded, owned, and reviewed regularly, risks are managed. When they are not, the same issues tend to resurface later as incidents, often at the worst possible time.
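
A lightweight way to keep those decisions honest is to record every accepted risk with a named owner and a review date, and to surface anything that has drifted past review. The fields and dates below are an illustrative sketch of such a register, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AcceptedRisk:
    reference: str
    description: str
    owner: str          # a named business owner, not "the security team"
    review_by: date     # the acceptance lapses if not revisited by this date

def overdue_reviews(register, today):
    """Return accepted risks whose review date has passed without being revisited."""
    return [risk for risk in register if risk.review_by < today]

if __name__ == "__main__":
    register = [
        AcceptedRisk("R-014", "Legacy warehouse system on an unsupported OS",
                     "Head of Logistics", date(2024, 6, 30)),
        AcceptedRisk("R-027", "Supplier portal without MFA",
                     "Commercial Director", date(2025, 3, 31)),
    ]
    for risk in overdue_reviews(register, date(2025, 1, 10)):
        print(f"{risk.reference} ({risk.owner}) is overdue for review: {risk.description}")
```

Even a simple register like this changes the conversation: an accepted risk with no owner, or with an expired review date, becomes visible long before it resurfaces as an incident.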

What This Means Going Forward

The most effective CISOs aren’t responding to last year’s breaches by throwing more tools at the problem or rewriting every policy. They’re focusing on a smaller set of disciplines and doing them properly. These include:

  • Paying closer attention to fundamentals that drift over time.
  • Testing what actually happens, not what should happen.
  • Being more explicit about trade‑offs, risks, and decisions.

Above all, the CISOs who will succeed are resisting the temptation to view breaches as anomalies. Last year’s incidents weren’t one-off events. They were signals, and, in many cases, they could be previews.

The real question isn’t whether your organisation could appear in next year’s headlines. It’s which of last year’s stories already feels a little too familiar.

If that question makes you uncomfortable, it’s probably the right place to start.
