Disclaimer: Opinions expressed are solely my own and do not reflect the views or opinions of my employer or any other affiliated entities. Any sponsored content featured on this blog is independent and does not imply endorsement by, nor relationship with, my employer or affiliated organisations.

Note: This is Part 1 of a two-part series on AI's influence on the SecOps job market.


A Brief History of Freaking Out

The Luddites smashed textile machines in the 1810s because they feared losing their livelihoods. Over 200 years later, we're having the same conversation. Not about looms, but about LLMs.

The fear isn't that somebody will replace you in an org chart. It's that the thing you trained for, the thing you're good at, the thing that pays your rent, the thing that is, ultimately, a part of you might become irrelevant.

When we started working in SOCs, a big part of the day was checking indicators across multiple databases. Copy an IP, paste it into five different platforms, check if something shows up, document the results. Repeat. If somebody asked us today whether that job should still exist, we'd all say no. It makes no sense to do that manually when integrations and automations handle it in seconds.

But here's where it gets personal. For us in 2026, losing relevance could mean losing income. The average millennial should expect to change careers five or six times. Some of us are already on that path. But the question is whether we'll be switching by choice or because the tech forced our hand.

This question got a lot more real a few weeks ago when Block (the company behind Square and Cash App) cut 40% of its workforce, over 4,000 people. CEO Jack Dorsey said straight up that "intelligence tools have changed what it means to build and run a company" and predicted most companies would follow within a year. Investors loved it. The stock jumped 22%. For the people who lost their jobs, less exciting.

Now, whether Block's move was truly AI-driven or just pandemic overhiring correction with an AI narrative (there's strong evidence for both), the signal it sent was loud. And it made a lot of people in our industry nervous.

So the question that needs asking: will the jobs that currently define us become irrelevant? Will our industry reinvent itself? Will we survive this wave the same way we survived SOAR's promise that "the SOC is dead"?

We think the answer is more optimistic than the anxiety suggests, but it comes with some conditions.

The Short Answer

Yes, we will survive this wave.

Why? We'll walk through three arguments and stress-test each one. Some of this thinking was shaped by a talk at Apres Cyber Slopes Summit that helped cut through the noise and get grounded again.

Our Industry Is Different

We know, we know. Every industry says this. Every person who ever worked a job says "nobody can automate MY job because of the unknowns I face every day. My job requires intuition. My job requires split-second decisions. A system can't do that."

All of those statements are simultaneously true and false. But security actually has something unique going for it: the offense-defense arms race.

Security was, is, and always will be guided by a simple philosophy: every new defense technology will ignite the spark to create a new weapon.

We're already at the point where both sides use AI. Defenders have AI SOC solutions that automate investigations, speed up detection, ingest new log sources, and discover anomalies. Attackers didn't waste any time either. Every step of the kill chain has been supercharged since LLMs took over. Script kiddies who couldn't do enough harm before are now targeting higher-value assets with better tooling.

Let's take this to the extreme. Say everybody builds a perfect AI SOC that detects everything from day one. What happens the next day? Somebody builds something that manipulates that system into believing their activity isn't worth alerting on. Even in the most automated scenario, somebody will constantly need to detect, train, and alert on new attack patterns. And someone on the other side will keep finding ways to avoid detection.

That cycle doesn't end. It hasn't ended in thousands of years of warfare, and it won't end because we have better chatbots.

Organisational Resistance to Change

The market will force change eventually. But can you imagine the entire population of CISOs saying "yes, please cut 50% of my headcount because AI handles it now"?

Think about what a CISO's leverage is within a company. It's partly about the team they lead. If your headcount drops from 50 to 5, your influence in the executive suite drops with it. No C-level wants that.

And this isn't just about office politics. It's about legal obligation. Which brings us to what we think is the strongest argument in this entire article.

The Compliance Reality Check

This is the argument we think most people are missing.

On one side, fear that AI replaces SOC analysts. On the other side, a reality check. Most security programs are compliance-driven. Many orgs invest in a SOC to pass an audit, not because they love detection engineering.

So what do the compliance frameworks actually require?

We looked at SOC 2, PCI DSS, HIPAA, ISO 27001, NIS2, DORA. Here's what we found:

None of them say humans must do the triage. None of them require a human staring at alerts 24/7. They want continuous monitoring, detection capability, incident response, and evidence. They don't care if the entity doing the work is carbon-based or silicon-based.

Where humans ARE explicitly required: breach notification decisions, risk ownership, audit attestation, governance accountability. NIS2 and DORA put personal liability on executives for security failures.

DORA is the most prescriptive framework out there, and even it does not mandate human staffing models. It mandates outcomes.

So the question is not "will AI take SOC jobs." The question is: who signs off that the AI is doing a good job?

That person needs to exist. They need to understand what the automation does. They need to own the risk when it breaks.

When Filip posted this analysis on LinkedIn, Anton Chuvakin called it "quote of the day." And he's right, it IS a big deal. The frameworks already support AI-driven security operations. The real gap isn't technology or regulation. It's the accountability layer between the two.

So What Roles Actually Emerge From This?

The LinkedIn discussion got interesting when Rob Fry jumped in. His point: a lot of people talk about AI as though the vendor ships it, blesses it, and somehow keeps it tuned forever. That's not realistic.

Customer environments, data, workflows, risk tolerance, and operational weirdness are too specific. Vendors can provide the engine, but customers own the care and feeding. Which means you need people who can design, run, validate, and govern AI-driven security systems.

Rob described what he sees as likely new (or evolved) roles:

  • Architects to design how AI fits into the control plane

  • Operators to monitor, tune, and maintain it in production

  • Governance folks to own risk, evidence, and accountability when it fails

  • Hybrid roles that sit at the seams between security, engineering, operations, and the business

This lines up with what SACR's recent research on AI SOC and MDR shows. The market is splitting between orgs that can run AI platforms in-house (large enterprises augmenting internal teams) and those that outsource to AI-native MDR providers. Both paths need people. Different people than before, but people.

Will the SOC Analyst Become a QA Role?

Will the SOC analyst role mainly be about QA, just putting the stamp that says "looks good"?

We think partially yes. Someone has to validate the AI's conclusions. But calling it QA undersells what's actually needed. It's more like the person who signs off on the autopilot before the plane takes off. You need to understand the system deeply enough to know when it's working and when it's about to fly into a mountain. The skill set changes. The responsibility doesn't shrink.

And remember how long it took governments to even define what a "data breach" means under GDPR? Now imagine them rewriting accountability laws for fully autonomous security operations. We're not even close to that conversation in most jurisdictions. The current trend is actually toward MORE regulations that require MORE people in different regions, not fewer.

What If We're Wrong?

Fair question. Let's steelman the scary scenarios.

What if the arms race gets solved? Say we get several dozen AI SOC solutions that reach a maturity level where they integrate automatically, train themselves, and run continuous simulations that make any new attack detectable before it becomes real.

This is still sci-fi. It assumes human innovation and creativity on the attacker side have stopped evolving, and they haven't. The integration work alone is beyond anything we can build at that scale today. And the energy costs would be significant.

What if organizations stop resisting? This one we think is actually the most likely to happen eventually. The market will chip away at resistance once the technology matures. But in such a scenario, it's not just security teams that shrink. Every vertical in an enterprise would be affected, leading to smaller companies across the board.

And that creates a Ford-type dilemma: if nobody is hiring, who buys the products we're making? The "we are building AI for AI" story doesn't solve this economic question.

What if regulations change? Removing rules and regulations requiring human accountability in cybersecurity would be one part of a much larger systemic shift. If you want a historical parallel, look at Khrushchev trying to break down Soviet bureaucracy. Or Gorbachev, who arguably succeeded. Dark humor version: the operation was a success, the patient is dead. The USSR disappeared. Point being, forcefully simplifying complex regulatory systems tends to have consequences way beyond what you planned for.

If governments became efficient and rational enough to rewrite accountability laws across the board, the entire world would look fundamentally different. The chance of this happening in our industry alone would be a strange statistical anomaly.

The New Shape of Things

If you've been following the secops-unpacked blog, you know we keep coming back to this: the Tier 1 SOC analyst role as we knew it is disappearing. And it should. It had the worst retention rates, the highest burnout, and it was never a real career destination. It was always a stepping stone.

But the roles replacing it are more interesting. Detection engineering. Security automation. AI validation and governance. These aren't downgrades. They're upgrades.

The three of us all started as SOC analysts. None of us do that job today. We all evolved because the industry evolved. The difference now is that the pace of that evolution is faster. But the pattern is the same: old roles get automated, new roles get created to manage and improve that automation.

Closing Thoughts

The world is changing fast. None of us know what the future holds.

But we do know this: scenarios where our industry faces massive unemployment exist. We just think they're highly unlikely within the next five to ten years.

If we reach a point where cybersecurity jobs are irrelevant in a decade, it means we've had such a massive shift in how society works that job security will be the least of our problems.

For anyone starting their career right now and wondering where to head: the compliance frameworks aren't going away. The accountability layer between AI and business outcomes isn't going away. The arms race between offense and defense isn't going away. Those are your career anchors.

The real question isn't whether AI will take your SOC job. It's whether you'll be the person who signs off that the AI is doing its job right. Position yourself for that, and you'll be fine.

What's your take? Where do you see the roles that AI can't replace in cybersecurity? What arguments do you have to say we won't be unemployed in five years? Drop a comment, we want to hear it.

Join as a top supporter of our blog to get special access to the latest content and help keep our community going.

As an added benefit, each Ultimate Supporter will receive a link to the editable versions of the visuals used in our blog posts. This exclusive access allows you to customize and utilize these resources for your own projects and presentations.
