Disclaimer: Opinions expressed are solely my own and do not reflect the views or opinions of my employer or any other affiliated entities. Any sponsored content featured on this blog is independent and does not imply endorsement by, nor relationship with, my employer or affiliated organisations.

The Fear of Not Doing Enough: Security's Workflow Problem

If you've been following this blog, you know I've spent a lot of time on how AI is transforming investigation, triage, and detection engineering. And a few months back I wrote about the single pane of glass: how it's not a product you buy but a system you build, piece by piece, like Legos.

That post was about the architecture. Which tools you need, how they connect, where the data flows.

This post is about the layer underneath that nobody talks about. Not the tools. The work itself. Where it comes from, how it flows, and why we have zero visibility into most of it.

The Fear of Not Doing Enough

Security has this pattern I keep seeing everywhere.

New attack technique drops. A CVE trends on Twitter. Some threat intel report lands in your inbox with a fancy APT name. What happens next? Predictable.

Someone writes a generic detection rule fast so the team has "something." Gets pushed to production. Generates noise. Nobody tunes it because there's already another thing screaming for attention.

The false sense of coverage becomes more important than actual coverage.

I call this the Fear of Not Doing Enough. And honestly? It drives most of the operational pain in security teams today.

You write a detection rule. Do you have the SOP for when it fires? Do you know the full analysis path an analyst should follow? Can you estimate how that alert impacts your team's workload downstream? Do you know what "done" looks like for that alert type?

If you can't answer those, you didn't deploy a detection. You deployed a work generator with no operating manual. Multiply that across dozens of detections written under pressure and you get patchwork coverage that looks great on a dashboard but falls apart when someone has to actually operate it.
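
To make that concrete, here's a rough sketch of a detection travelling with its operating manual. The structure and field names are mine, not any product's schema; the point is that the four questions above ship with the rule.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DetectionPackage:
    """A detection plus the operational context it will generate downstream."""
    rule_id: str
    rule_name: str
    sop_url: str | None = None                                 # what to do when it fires
    analysis_steps: list[str] = field(default_factory=list)    # the path an analyst should follow
    expected_alerts_per_week: float | None = None               # rough downstream workload
    done_criteria: str | None = None                            # what "done" looks like for this alert type

    def missing_pieces(self) -> list[str]:
        """Everything that turns this into a work generator with no operating manual."""
        gaps = []
        if not self.sop_url:
            gaps.append("no SOP for when it fires")
        if not self.analysis_steps:
            gaps.append("no documented analysis path")
        if self.expected_alerts_per_week is None:
            gaps.append("no estimate of downstream workload")
        if not self.done_criteria:
            gaps.append("no definition of done")
        return gaps

rushed = DetectionPackage(rule_id="win-susp-001", rule_name="Suspicious LOLBin execution")
print(rushed.missing_pieces())
# ['no SOP for when it fires', 'no documented analysis path', ...]
```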

But here's the thing. Even if you fix all of that, you're still only looking at one input stream.

It's Not Just SIEM Alerts

An ACM Computing Surveys paper (Tariq et al., 2025) reviewed over 30 solutions to alert fatigue in SOCs. Thorough paper, I'll give them that. It identifies four root causes: staff shortage, high false positive rates, disconnected dashboards, and inefficient SOPs.

But every single solution assumes the work starts with a SIEM alert.

Now look, I'm not saying SIEM alerts are a small part of the work. For most teams they're probably more than half. But here's what matters: the work that doesn't come from the SIEM is often the most manual, least structured, and hardest to track.

IT escalations. Someone from the help desk pings you on Slack: "Hey, this looks weird." Access review requests from HR. Audit findings that need remediation tracking. Pen test findings that need to be assigned and fixed. Third-party risk questionnaires. Compliance asks from legal.

All real security work. And here's the thing about it: little to none of it has a playbook or automation behind it. Most of it lives in Slack threads, email chains, and spreadsheets. It's the security work that runs entirely on copy-paste, tribal knowledge, and good intentions.

Your SIEM alerts, for all their problems, at least flow through a pipeline. They get enriched. They have some structure. Maybe even a SOAR playbook attached. The non-SIEM work? It's the Wild West.

Erik Bloch has been making this point for years. A lot of the work a SOC does day-to-day has nothing to do with chasing advanced adversaries. It's tickets, reports, evidence collection, reconciling data across tools. The mundane operational grind that actually burns people out.

And here's the part that really gets me. Outside of very large enterprises that have 10 security sub-departments with dedicated teams for everything, the same 3-5 people triaging SIEM alerts are also pulling evidence for the auditor, handling the IT escalation, and answering the compliance questionnaire. There's no luxury of specialization. The alert queue is just one input stream among many. And the non-SIEM stuff eats time disproportionately because it's all manual.

Security Work Has No Gravity

Ross Haleliuk recently wrote a great piece about ServiceNow betting on "workflow gravity" to compete with the security platform giants. The thesis is simple. Whoever owns where work happens owns the decisions.

Data gravity pulls information into a single system of record. Your SIEM, your data lake, whatever. That part most teams have figured out. Workflow gravity is different. It pulls action into a single system of action. One place where work lands, gets triaged, gets tracked, and gets done.

Right now? Security work has no gravity. It's everywhere and nowhere.

And yeah, this connects directly to the single pane of glass conversation. In that post I talked about building your own platform, Lego-style, with assets, data layers, correlation, and response actions. But even if you build that beautiful architecture, it's still oriented around machine-generated alerts. The SIEM brain, the enrichment layer, the correlation engine. All of that assumes the input is a structured alert.

What about the IT manager who emails you about a suspicious contractor? What about the audit finding that needs 6 teams to remediate? What about the pen test report sitting in a shared drive that nobody has turned into action items yet?

That work has no architecture. It has no pipeline. It just shows up and someone deals with it however they can.
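
For what it's worth, workflow gravity doesn't require anything exotic at the data level. Here's a minimal sketch of normalizing every input stream into the same work item, whether it came from the SIEM or a Slack ping; the sources and fields are illustrative, not a spec.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Source(Enum):
    SIEM_ALERT = "siem_alert"
    IT_ESCALATION = "it_escalation"
    AUDIT_FINDING = "audit_finding"
    PENTEST_FINDING = "pentest_finding"
    COMPLIANCE_REQUEST = "compliance_request"

@dataclass
class WorkItem:
    """One envelope for every kind of security work, regardless of where it came from."""
    source: Source
    title: str
    created_at: datetime
    sla_hours: int
    assignee: str | None = None
    closed_at: datetime | None = None

# A SIEM alert, a help-desk ping, and an audit finding land in the same queue,
# with the same fields -- which is what makes them visible and measurable at all.
queue = [
    WorkItem(Source.SIEM_ALERT, "Impossible travel for j.doe", datetime(2025, 6, 2, 9, 15), sla_hours=4),
    WorkItem(Source.IT_ESCALATION, "Help desk: contractor laptop 'looks weird'", datetime(2025, 6, 2, 9, 40), sla_hours=8),
    WorkItem(Source.AUDIT_FINDING, "MFA gap on legacy VPN, evidence due Friday", datetime(2025, 6, 2, 10, 5), sla_hours=72),
]
```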

You want to know why security teams always feel understaffed? Part of it is real headcount shortage, sure. But part of it is that nobody can actually see where the time goes. When the most manual, time-consuming work lives outside of every system you've built, you can't measure it. When you can't measure it, you can't optimize it. When you can't optimize it, you just throw more people at it and hope for the best.

Process Mining Exists. Just Not for Us. Yet.

Here's something that gets me. In finance, procurement, and operations, tools like Celonis and Scribe Optimize have existed for years. They observe how work actually happens across tools and systems. They find bottlenecks. They tell you where time is wasted. They optimize based on data, not vibes and assumptions.

In security? Still very early days.

Some vendors are starting to take RPA-style approaches to the problem, and there's a handful of academic papers exploring it. But it's nowhere near mainstream.

We still don't have good data on how security work actually flows end to end. Think about that.

We have terabytes of security telemetry. We can tell you exactly when a process spawned on an endpoint at 3:47am. But we can't tell you how long it takes an analyst to go from "alert fired" to "investigation complete." We can't tell you how much time the team spends on compliance requests versus actual threat work. We can't tell you which of your 200 detection rules generates the most operational overhead relative to the security value it provides.

That's wild.

Why This Is Hard

I get why the industry keeps gravitating toward the easier wins. Make investigation faster. Automate the playbook. Build a better ML model for triage. Those are well-defined problems with measurable outcomes.

Understanding where all security work happens and how it flows? That's messy. It crosses tool boundaries. It involves human behavior that doesn't fit neatly into event logs. It requires looking at the whole system, not just one piece.

This is the hardest problem to solve. And that's exactly why not many are tackling it yet.

But here's why it matters. If you don't understand the full picture of how work enters and flows through your security team, everything else you build is an optimization of a subsystem. You can make SIEM triage 10x faster, but if a third of the work comes from non-SIEM sources that are entirely manual, you just made one part of the problem better while the messiest part stays untouched.

What Would Actually Help

I don't think this needs to be one giant platform that replaces everything. But teams need a few things that barely exist today.

Workflow data. How long does each type of work actually take? Where are the handoffs? Where do things stall? What percentage of the team's time goes to which category of work? Right now most teams are guessing. And the guesses are usually wrong because the most painful work is the least visible.
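
Even crude numbers beat guessing. A minimal sketch of the measurement I mean, assuming you can export open and close timestamps per work item from wherever things currently get tracked; the categories and figures are made up.

```python
from collections import defaultdict
from datetime import datetime

# (category, opened, closed) -- in practice exported from ticketing, SOAR, even a spreadsheet
records = [
    ("siem_alert",         datetime(2025, 6, 2, 9, 0),  datetime(2025, 6, 2, 10, 30)),
    ("siem_alert",         datetime(2025, 6, 2, 11, 0), datetime(2025, 6, 2, 11, 45)),
    ("compliance_request", datetime(2025, 6, 2, 9, 0),  datetime(2025, 6, 4, 17, 0)),
    ("it_escalation",      datetime(2025, 6, 3, 14, 0), datetime(2025, 6, 3, 18, 0)),
]

hours_by_category = defaultdict(list)
for category, opened, closed in records:
    hours_by_category[category].append((closed - opened).total_seconds() / 3600)

total_hours = sum(h for hours in hours_by_category.values() for h in hours)
for category, hours in sorted(hours_by_category.items()):
    # Elapsed time is a crude proxy for effort, but it exposes where work piles up and stalls.
    print(f"{category:20s} items={len(hours)} avg={sum(hours)/len(hours):.1f}h "
          f"share={sum(hours)/total_hours:.0%}")
```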

Operational impact awareness. Before you deploy a new detection, onboard a new data source, or agree to a new compliance requirement, you should be able to model what that does to your team's capacity. Not after the fact when everyone's drowning. Before.
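
Even a back-of-envelope model is better than finding out after everyone's drowning. A sketch with made-up numbers, assuming a flat handle time per item:

```python
def capacity_impact(items_per_week: float, minutes_per_item: float,
                    analysts: int, focus_hours_per_week: float = 25.0) -> float:
    """Fraction of the team's weekly hands-on time a new work source would consume.

    Back-of-envelope only: assumes a flat handle time and ignores escalations,
    context switching, and everything else that makes real weeks worse.
    """
    demand_hours = items_per_week * minutes_per_item / 60
    supply_hours = analysts * focus_hours_per_week
    return demand_hours / supply_hours

# A "cheap" detection firing 150 times a week at 10 minutes per alert,
# landing on a 4-person team, eats a quarter of their week.
print(f"{capacity_impact(150, 10, analysts=4):.0%}")  # 25%
```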

Connection between detection and process. If you have a detection but you don't have the analysis path mapped from it, you can't estimate how it impacts anything downstream. Every detection should ship with its SOP. Not as a nice-to-have. As a requirement.
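
One way to make it a requirement rather than a wish is a gate in the detection pipeline that refuses rules without an operating manual. A hypothetical sketch; the metadata keys are my own, not any vendor's schema:

```python
import sys

REQUIRED_KEYS = ("sop_url", "analysis_steps", "expected_alerts_per_week", "done_criteria")

def missing_operational_fields(metadata: dict) -> list[str]:
    """Return the operational fields a detection still needs before it can ship."""
    return [key for key in REQUIRED_KEYS if not metadata.get(key)]

# In CI: load each rule's metadata (sidecar file, front matter, whatever you use)
# and fail the build if the operating manual isn't attached.
new_rule = {"rule_id": "win-susp-002", "sop_url": "https://wiki.example.internal/sop/win-susp-002"}
missing = missing_operational_fields(new_rule)
if missing:
    print("refusing to deploy:", ", ".join(missing))
    sys.exit(1)
```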

The Fear Won't Go Away

The Fear of Not Doing Enough will always be there. New threats aren't going to stop coming. The pressure to have "something" for every new attack vector is real.

But the answer isn't to keep throwing generic detections at every new thing and hoping the team can absorb the blast. It's not to keep building faster investigation tools for one slice of the work while the rest drowns in Slack threads and spreadsheets.

We've been fixing the middle. Investigation is getting faster. AI triage is real. Response automation is improving. The single pane of glass architecture is getting clearer. All good progress.

Now it's time to zoom out. Understand how security work actually flows. All of it. Not just the structured, machine-generated part. Especially the messy, manual, human-generated part that eats the most time and has the least tooling.

Fix the input. Model the cost. Understand the workflow.

Stop optimizing the output of a system you've never fully mapped.

