Disclaimer: Opinions expressed are solely my own and do not reflect the views or opinions of my employer or any other affiliated entities. Any sponsored content featured on this blog is independent and does not imply endorsement by, nor relationship with, my employer or affiliated organisations.
I’ve been going over a bunch of AI SOC implementations lately, and something hit me: control. It’s not just about having an “autonomous” system that investigates alerts. It’s about being able to see why it took a certain path, adjust the logic, and align it with your own environment. Without that, you end up with a black box that you can’t fully trust.
This isn’t just theory. In almost every SOC I’ve worked with or advised, I’ve seen how fragile it becomes when you either (a) rely fully on vendor defaults, or (b) build out hundreds of breakable playbooks by hand. Both lead to pain in different ways.
This edition is sponsored by D3 Morpheus
Mapping the AI SOC Landscape
When I look at AI SOC platforms, I break them down in two ways.
First, by which stage of the incident lifecycle they focus on:
Left - the data side: pipelines, log processing, detection engineering.
Middle - enrichment, triage, and investigations.
Right - response, remediation, and the feedback loop.
Some vendors specialize in one stage. Others stretch across multiple, but rarely all three.
Second, by implementation style:
Plug-and-Play - quick to deploy, usually centered on the middle (triage, investigations). You connect data sources, and it starts producing outcomes with minimal setup.
Build-and-Customize - more like next-gen security automation platforms. These let you create workflows from scratch, wire them to alerts, or even run “headless” background automations. They usually cover the middle and right, sometimes the left as well.
Over the last 2–3 years, most new AI SOC startups have landed in the middle with plug-and-play products. Security automation platforms lean toward build-and-customize.
Both approaches have tradeoffs, and understanding them is key.
The “Build-and-Customize” Trap
Build-and-customize platforms feel like classic real-time strategy games. You start with nothing and design everything yourself: the integrations, the escalation logic, the deterministic flows. You’re in charge.
That control is great, but it also means you own the complexity.
Here’s the trap: deterministic automation doesn’t scale well if you try to build one workflow for every single detection use case. In one SOC I worked with, the team had over 200 automations, each tied to a specific detection. The result? They needed more engineers than analysts just to keep workflows alive. Every time an API changed, a field was renamed, or a vendor shifted its schema, half the playbooks broke.
So while this model is powerful, it quickly becomes breakable. It works well if you’re building “macro” workflows (like log ingestion or enrichment pipelines). But if you try to encode every micro-decision into a deterministic playbook, you end up with a maintenance nightmare.
The “Plug-and-Play” Black Box
Plug-and-play AI SOCs are the opposite. Think of them as strategy RPGs: the world is pre-built, the rules are set, and you just play the role you’re given.
The upside is obvious: fast time-to-value. You connect your SIEM or log sources, and suddenly the platform is triaging, clustering, or even investigating alerts for you. For many orgs that are short-staffed, that’s appealing.
The downside is just as obvious: no visibility into how the logic works, and no way to adjust it.
What if the platform suppresses something you consider critical?
What if its default enrichment path doesn’t fit your environment?
What if you just want to add one custom validation step before remediation?
In most cases, you can’t. You’re locked into the vendor’s logic. And when your team doesn’t understand or can’t shape the process, trust in the tool drops fast.
Where Automation Already Works Well
Before talking about hybrids, let’s ground this in where automation already delivers value. From my own experience, the biggest wins are on the left side of the lifecycle.
Take log ingestion. Many SaaS tools don’t provide streaming log integrations. You either need to pull data via API or pay for a third-party connector. I’ve built automations that:
Fetch audit logs on a schedule,
Parse and normalize them,
Push them into S3 or another bucket,
Then feed them to a SIEM in the format it expects.
The benefit is twofold: the SIEM gets data in a clean, expected schema, and you offload parsing/storage from the SIEM, which saves costs.
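The steps above can be sketched as a small scheduled job. This is a minimal illustration, not any specific vendor's API: the field names, the `fetch`/`push` stubs, and the target schema are all assumptions; a real version would call the SaaS audit API and write NDJSON to S3.

```python
"""Sketch of a scheduled log-ingestion job (assumed schema and stubs)."""
from datetime import datetime, timezone


def normalize_event(raw: dict) -> dict:
    """Map a vendor-specific audit record onto the schema the SIEM expects."""
    return {
        "timestamp": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "actor": raw.get("user", "unknown"),
        "action": raw["event_type"].lower(),
        "source_ip": raw.get("ip"),
        "raw": raw,  # keep the original record for forensics
    }


def run_ingestion(fetch, push) -> int:
    """fetch() pulls raw events from the SaaS API; push() ships them onward
    (e.g., to an S3 bucket the SIEM reads from). Returns the event count."""
    events = [normalize_event(e) for e in fetch()]
    push(events)
    return len(events)


# Demo with stubbed fetch/push instead of a live API and S3:
sample = [{"ts": 1700000000, "user": "alice", "event_type": "LOGIN", "ip": "10.0.0.5"}]
shipped: list = []
count = run_ingestion(lambda: sample, shipped.extend)
print(count, shipped[0]["action"])
```

Because normalization happens before the SIEM ever sees the data, the parsing cost stays outside the SIEM's pricing model, which is where the savings come from.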
In detection engineering, I’ve built workflows that enrich threat intel feeds, extract TTPs, and generate hypotheses for new detections. For example, if intel shows a threat actor shifting to a new initial access vector, the automation surfaces that and suggests where detection coverage may be missing.
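The gap-surfacing step of such a workflow could be sketched as follows. The ATT&CK technique IDs are real, but the keyword map and the coverage set are invented for illustration; a production version would use the intel platform's structured TTP tags rather than keyword matching.

```python
"""Sketch: surface detection-coverage gaps from an intel report
(keyword map and coverage set are illustrative assumptions)."""

# Hypothetical mapping from intel phrasing to MITRE ATT&CK techniques
KEYWORDS = {
    "phishing": "T1566",            # Phishing
    "search engine ads": "T1189",   # Drive-by Compromise (malvertising)
    "valid accounts": "T1078",      # Valid Accounts
}

# Techniques our current detection rules already cover (assumed)
COVERED = {"T1566", "T1078"}


def coverage_gaps(intel_text: str) -> set:
    """Return techniques mentioned in the intel with no detection coverage."""
    text = intel_text.lower()
    mentioned = {tid for kw, tid in KEYWORDS.items() if kw in text}
    return mentioned - COVERED


report = "Actor shifted initial access from phishing to search engine ads."
gaps = coverage_gaps(report)
print(gaps)  # phishing is covered; the new vector is not
```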
There are also operational automations:
Creating backlog tickets automatically when a false positive is reported.
Triggering attack simulations when new detections are deployed.
Sending requests to red teams to validate coverage through adversary emulation.
These aren’t glamorous, but they save huge amounts of manual effort.
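The first of those, for instance, can be a tiny handler that turns a false-positive verdict into a tuning ticket. The field names and rule reference here are illustrative; a real version would POST the payload to Jira or ServiceNow.

```python
"""Sketch: build a backlog-ticket payload from a false-positive report
(field names and priority rule are illustrative assumptions)."""


def fp_to_ticket(alert: dict, analyst_note: str) -> dict:
    """Produce a detection-tuning ticket for a false positive."""
    return {
        "title": f"Tune detection {alert['rule_id']}: false positive",
        "description": (
            f"Alert {alert['id']} on rule {alert['rule_id']} was closed as a "
            f"false positive.\nAnalyst note: {analyst_note}"
        ),
        "labels": ["detection-tuning", "false-positive"],
        # Noisy high-severity rules get looked at sooner
        "priority": "low" if alert.get("severity") == "low" else "medium",
    }


ticket = fp_to_ticket(
    {"id": "A-1042", "rule_id": "DR-77", "severity": "high"},
    "Triggered by approved change CHG-5512.",
)
print(ticket["title"])
```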
The Messy Middle
The middle of the SOC lifecycle isn’t just “messy”; it’s where the real detective work happens. You’ve pulled in enrichment and gathered context, and now you need to connect the dots into a coherent incident story.
This is harder than it looks. Evidence isn’t uniform. Sometimes you’re pulling infrastructure data and need to cross-check with change requests or asset management logs. Sometimes it’s endpoint behavior, where there is no clear baseline, users run all sorts of random processes on their machines, and you need to know what’s “normal” in this business context. Other times, you’re correlating identity data, asking whether a login pattern aligns with known behavior for that role, that region, or that application.
In practice, analysts spend much of their time bouncing between SIEM queries, log sources, and business systems, running searches and pulling fragments of context. The challenge is less about gathering raw data and more about stitching it into a narrative that makes sense. That’s why the middle ground has historically been so fragile for automation; you can’t just script it once and call it done. The process changes with every environment, every investigation, every new clue.
This is where AI could be transformative, if it helps analysts assemble evidence and propose connections, but still shows why it drew those links. Without that transparency, it’s just guessing in the dark.
A Hybrid Model: Autonomy with Guardrails
So instead of being stuck with a black-box tool or a maintenance nightmare, imagine something in the middle. A hybrid model is all about getting the best of both worlds: the speed and scale of AI automation, with the control and flexibility your team actually needs.
Here’s how I see it working: Instead of coding every playbook by hand, you just drop an alert into the AI SOC platform. From there, it generates a full investigation workflow on the fly.
But here's the key part: it's not a black box. You can see all the steps it plans to take, and you can fine-tune them as needed. You could even upload one of your existing runbooks from Confluence, and the platform would use it as a template to shape its automation logic.
I like to think of it with a gaming analogy.
The "City-Builder" Phase: This is the build-and-customize part where you lay down the rules. You set up the integrations, define your critical assets, and build deterministic guardrails, like "for any action on a domain controller, you must get human approval".
The "RPG" Phase: This is where the autonomous AI agents operate within the world you just built. They can investigate alerts, enrich data, and even suggest remediation, but they always have to follow the rules and stay on the roads you created.
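The domain-controller rule from the city-builder phase could be expressed as a deterministic check that sits between the AI agent and execution. The asset list and action names here are assumptions made for illustration:

```python
"""Sketch of a deterministic guardrail layer for AI-proposed actions
(asset tiers and action names are illustrative assumptions)."""

CRITICAL_ASSETS = {"dc01.corp.local", "dc02.corp.local"}  # domain controllers
DESTRUCTIVE_ACTIONS = {"isolate_host", "disable_account", "delete_file"}


def requires_human_approval(action: str, target: str) -> bool:
    """The 'city-builder' rule: any action on a domain controller, or any
    destructive action anywhere, must stop and escalate to an analyst."""
    return target in CRITICAL_ASSETS or action in DESTRUCTIVE_ACTIONS


def execute(action: str, target: str, approved: bool = False) -> str:
    """Gate every AI-proposed action through the guardrail before running."""
    if requires_human_approval(action, target) and not approved:
        return f"ESCALATE: {action} on {target} needs analyst sign-off"
    return f"RUN: {action} on {target}"


print(execute("collect_triage_bundle", "ws-1138"))  # routine: runs autonomously
print(execute("isolate_host", "dc01.corp.local"))   # high-impact: escalated
```

The point is that the guardrail is deterministic and owned by you, while everything the agent does inside those boundaries can stay dynamic.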
This approach combines the strengths of both copilots and fully autonomous agents. Key capabilities usually include:
Balanced Autonomy: The system handles the routine, high-volume stuff on its own but knows when to stop and escalate tricky or high-impact decisions to a human analyst.
Flexible Exploration: Your analysts aren't locked into a rigid workflow. They can pivot from the AI's automated findings and start asking their own questions in an interactive chat, letting them dig deeper whenever they need to.
Customizable Logic: The AI's workflows can be tailored to fit your SOC’s specific needs, giving you a good balance between automated consistency and the flexibility to handle unique threats.
Ultimately, this balance, autonomy with guardrails, is what will make AI SOCs something we can actually rely on. It gives you a system that can handle the massive scale of alerts while making sure a human is still in the driver's seat for the decisions that really matter.
Final Thoughts
Look, there's no magic answer here. What matters is figuring out what fits your own environment. Both the plug-and-play and the build-it-yourself platforms have their place.
If you're short on people and just need something running fast, a plug-and-play tool is tempting. But you're stuck in their world, using their logic, and it's basically a black box. On the other hand, if you have a big engineering team, maybe building everything from scratch sounds good. But I've seen how that turns into a maintenance nightmare that costs a fortune to keep running.
This is where the hybrid model has some serious advantages. It's way faster to deploy than trying to build everything custom from the ground up, since the core AI investigation logic is already there. The cost of maintenance is also a lot lower because you’re not trying to keep hundreds of automation playbooks alive. And you still get the flexibility to tweak the workflows and make sure the system operates in a way you can actually trust. You get the speed of AI without having to give up all the control.
Vendor Spotlight: D3 Security
I recently had a demo of D3 Security’s Morpheus AI, and it stood out because it addresses the exact problem I’ve been discussing in this post: the need for autonomy with control.
When you drop an alert into Morpheus, it doesn’t just respond; it builds a full investigation runbook on the fly. What makes this different is transparency and flexibility: you can see every step, modify the workflow, and even audit the logic. That’s a big shift from black-box AI tools that give you no visibility into how decisions are made.
Morpheus can autonomously handle a large portion of Tier 1–3 tasks, triaging most alerts in under two minutes while integrating across more than 800 tools. It also provides the option to switch between fully autonomous execution and human-in-the-loop oversight. Every AI-generated workflow is visible as code, which means you can treat it like any other engineered artifact: you can version, test, and improve it. For analysts, the workspace is well thought out, with AI summaries, priority scoring, recommended actions, relationship analysis, and a dynamic incident/forensic timeline, plus many other widgets that can be used to customize the workspace.
For me, this hits the hybrid sweet spot: AI that’s autonomous enough to scale, but customizable enough to trust. If you’re looking at AI SOC platforms and want both speed and transparency, D3’s Morpheus is definitely worth a closer look.
🏷️ Blog Sponsorship
Want to sponsor a future edition of the Cybersecurity Automation Blog? Reach out to start the conversation. 🤝
🗓️ Request a Services Call
If you want to get on a call and have a discussion about security automation, you can book some time here.
Join as a top supporter of our blog to get special access to the latest content and help keep our community going.
As an added benefit, each Ultimate Supporter will receive a link to the editable versions of the visuals used in our blog posts. This exclusive access allows you to customize and utilize these resources for your own projects and presentations.