The Five Assumptions That Lead to Breaches

We're always on the lookout for cyber incidents, and most of us form our own preconceptions about them, including a tendency to think of them as something deliberate.

For many people, there’s the notion that attacks are highly targeted and carried out with sophisticated techniques. Everything, it’s presumed, is clearly planned and carried out from start to finish by someone on the outside trying to get in.

But the reality differs massively from this: many breaches don't start with advanced tactics - they start with assumptions.

Not a whole load of reckless decisions, or deliberate neglect. Just everyday beliefs about how systems work, how risk applies, and where responsibility sits. And when those assumptions are wrong, they create gaps that attackers are more than happy to exploit.

According to UK government figures, around half of small and medium-sized businesses experienced a cyber breach or attack in the past year.

So if the thinking is “it won’t happen to us”, “we’re too small to be hit”, or anything along those lines, the data already says otherwise.

Here are five of the most common assumptions we still see, and why they’re worth revisiting.

1. “We’re Too Small to Be Targeted”

This is probably the most persistent belief, and it’s one that comes up again and again when we talk to companies across the UK. 

On the surface? Sure, the logic makes sense. Why would an attacker go after a smaller organisation when there are bigger, more valuable targets out there? Surely the bigger their bank accounts, data and presence, the more lucrative, right? Sort of.

Here’s the uncomfortable truth about cyber attacks - most of them aren’t targeted, they’re just automated and sent out to as many people, companies and domains as possible.

Phishing campaigns, credential stuffing and vulnerability scanning don't discriminate based on company size. They look for weaknesses, and wherever those weaknesses exist, they'll be exploited.

In many cases, smaller organisations are actually more attractive for reasons such as having:

  • Fewer resources

  • Less formal security processes

  • Lower visibility

With that in mind, when it comes to being a target, size doesn't matter after all.

2. “Our Vendor Handles Security”

Modern businesses rely on third-party providers more than ever. Think of how many tools you use every day - cloud platforms, SaaS tools, managed services - and those providers do take on a significant share of the security responsibility.

But not all of it.

Most services operate under a shared responsibility model, which means the provider secures the platform, whilst you secure how it's used. And the grim reality is that, depending on how deeply the platform is embedded, a weakness on either side can provide direct access to your systems.

The areas that remain in your control include:

  • Access control

  • User behaviour

  • Data handling

  • Configuration

If those areas aren’t managed properly, even a secure platform doesn’t eliminate risk; it just changes where it sits.
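To make the "your side" of the shared responsibility model concrete, here's a minimal sketch of the kind of access review that sits squarely in your control. The account data and field names are hypothetical, not from any specific platform:

```python
from datetime import date, timedelta

# Hypothetical export of user accounts from a SaaS platform's admin
# console. Field names are illustrative, not from any real product.
accounts = [
    {"user": "alice", "role": "admin", "last_login": date(2025, 6, 1)},
    {"user": "bob", "role": "member", "last_login": date(2024, 11, 3)},
    {"user": "old-contractor", "role": "admin", "last_login": date(2024, 9, 15)},
]

STALE_AFTER = timedelta(days=90)

def audit(accounts, today):
    """Flag stale accounts, and admin accounts that aren't in active use."""
    findings = []
    for acct in accounts:
        idle = today - acct["last_login"]
        if idle > STALE_AFTER:
            findings.append((acct["user"], f"inactive for {idle.days} days"))
        if acct["role"] == "admin" and idle > timedelta(days=30):
            findings.append((acct["user"], "admin account not in active use"))
    return findings

for user, issue in audit(accounts, today=date(2025, 7, 1)):
    print(f"{user}: {issue}")
```

Even a simple review like this, run regularly, catches the forgotten contractor account with admin rights - exactly the kind of gap no vendor will close for you.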

3. “We Would Notice If Something Was Wrong”

If something serious happened to your business, surely there would be signs... Right? This is one of the more reassuring assumptions that occasionally comes up.

The answer, in a word, is “no”.

The reality is that many breaches go undetected for weeks or even months because attackers don’t tend to make noise when they enter your network, and for good reason, too.

Go back to Easter 2025 and the M&S hack that made headlines - it perfectly illustrates how wrong this assumption can be.

In this instance (which ties into Assumption #2, too, as access came via a third party), the hackers got in back in February and did little to raise the alarm until April, beyond some data exfiltration in March. By the time they struck over the Easter weekend, the damage was enough to cause considerable harm.

So, the longer attackers remain unnoticed, the more value they can extract, as they pick up on how the 'lucrative' individuals communicate, where people store sensitive data, how invoices are usually paid, and anything else that helps their attacks land.

Subtle indicators that someone could be inside your systems, and that often get missed, include:

  • Unusual login patterns

  • Minor configuration changes

  • Small data movements

  • Unexpected system behaviour

Without monitoring, logging and alerting in place, those signals don’t always surface, and then, by the time something is clearly “wrong”, the damage has often already been done.
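As an illustration of how basic that alerting can be, here's a sketch that flags the first indicator on the list - unusual login patterns. The log entries and thresholds are invented for the example; in practice this data would come from your identity provider or SIEM:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical auth log entries: (user, timestamp, source country).
log = [
    ("finance1", datetime(2025, 4, 14, 9, 12), "GB"),
    ("finance1", datetime(2025, 4, 14, 17, 45), "GB"),
    ("finance1", datetime(2025, 4, 15, 3, 2), "RU"),  # odd hour, new country
]

def flag_logins(log, work_hours=(8, 18)):
    """Flag logins outside working hours, or from a country not seen before."""
    seen = defaultdict(set)  # countries previously seen per user
    alerts = []
    for user, ts, country in log:
        if not (work_hours[0] <= ts.hour < work_hours[1]):
            alerts.append((user, ts, "outside working hours"))
        if seen[user] and country not in seen[user]:
            alerts.append((user, ts, f"first login from {country}"))
        seen[user].add(country)
    return alerts

for user, ts, reason in flag_logins(log):
    print(f"{ts:%Y-%m-%d %H:%M} {user}: {reason}")
```

Nothing here is sophisticated - the point is that without even this level of logging and alerting, the 3am login from an unfamiliar country simply goes unseen.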

4. “We Passed a Security Assessment, So We’re Covered”

Certifications and assessments are useful; it’s part of why we push people to get Cyber Essentials+ certified.

They provide structure, demonstrate intent, highlight gaps at a specific point in time, and, in the case of CE+, provide external validation that your defences are doing what they should.

But they aren’t permanent, and environments change.

New users join.
Systems are updated.
Configurations drift.
New tools, such as AI, are introduced.

And all of a sudden, what was secure six months ago may not be secure today.

So treating a certification as a finish line, rather than a checkpoint, creates a false sense of security.

Remember, security isn’t static. It needs to be maintained.

5. “It Won’t Happen to Us”

This is the underlying thread that ties everything together, and it’s not always said out loud; often, it’s implied.

It shows up in delayed system updates.
In postponed security reviews.
In decisions to “leave it for now, because we’d rather spend money on something else”.

All of these are perfectly fine, so long as you accept the very real risk you're welcoming on board.

If you just assume that the risk applies elsewhere, to others in your sector and not you, then you're arguing with the data.

If half of SMEs are experiencing breaches, the idea that it only happens to others doesn’t really hold up.

Why These Assumptions Matter

None of these beliefs comes from a bad place.

They’re usually based on:

  • Limited visibility

  • Time pressure

  • Competing priorities

  • Tightening budgets, leaving less room for security investment

  • Reasonable but incomplete understanding

But in cybersecurity, small gaps tend to add up: an overlooked update here, an overly permissive access rule there, and a missed alert somewhere else.

Individually, you'd be well within your rights to say "skipping one update isn't significant", but when those gaps compound, they create an opportunity.
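That compounding effect can be made concrete with some back-of-the-envelope arithmetic. The 2% figure below is purely illustrative, not a measured statistic:

```python
# Illustrative only: suppose each overlooked gap gives an automated
# attack a small 2% chance of succeeding over a year. The chance that
# at least one of n independent gaps gets exploited is 1 - (1 - p)^n,
# which grows quickly even though each gap looks negligible on its own.
p = 0.02
for n in (1, 5, 20, 50):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:2d} gaps -> {at_least_one:.0%} chance at least one is exploited")
```

Under these assumed numbers, one gap is a 2% problem, but fifty small gaps together give better-than-even odds of a successful attack - which is the sense in which "not doing one update" still matters.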

A More Useful Way to Think About It

Instead of asking “Are we likely to be targeted?”, a better question is: “Where are we exposed?”

That shift changes the conversation: it moves the focus away from perceived importance and towards actual, tangible risk.

And when you’re in that frame of mind, it encourages a more practical approach where you ask:

  • What systems do we rely on?

  • Where does our data sit?

  • Who has access to it?

  • How would we know if something changed?

If you know the answer to those questions, then you know how to reduce risk in a meaningful way.

Final Thought

Cybersecurity isn’t just about technology.

It’s about understanding how your environment actually operates, your users, your behaviours, and where assumptions might be filling in the gaps.

Unfortunately, most breaches don’t start with something dramatic; they tend to start with something that felt safe, familiar, or unlikely to go wrong.

And in many cases, that’s exactly what makes them effective.
