Security Brutalism Under Real Conditions, Part 7: Where This Goes Next
This post is different from the others. Parts 1 through 6 were a playbook. This one is me thinking out loud about where security brutalism and survivability engineering need to go over the next one, three, and five years, given what AI and other technology shifts are doing to the threat environment. Some of this reflects what is already happening. Some of it is speculation. I am more interested in asking the right questions than in having the right answers, at least for right now.
The Frame I Keep Coming Back To
The survivability framework is built around a stable assumption: the systems you are defending are deterministic. They do what they are programmed to do, they behave consistently, and "restore to known-good state" is a meaningful operation because there is a known-good state.
That assumption is eroding, and I think the erosion is the central challenge for the next five years. Not the specific attacks, not the specific technologies, but the question of what survivability means when the systems being defended are themselves adaptive, non-deterministic, and in some cases not fully understood even by the people who deployed them.
AI is doing this in two directions at once: it is the new attack surface we have to defend, and it is increasingly the mechanism through which attacks arrive. Both sides of that dynamic get more complex over the next five years, not less.
The Next Twelve Months
What is already happening, accelerating.
The time available for exploitation has narrowed. The gap between a vulnerability becoming known and being actively exploited in the wild has compressed to hours, sometimes less. AI-assisted vulnerability scanning finds and characterizes attack surface at a scale and speed that makes the old assumption of a grace period after disclosure unreliable. The implication is not "patch faster", because patching faster has a floor determined by testing and deployment cycles. The implication is that blast radius limitation and recovery capability matter more than they did when attackers moved slowly. A system that cannot be patched in time can still have a bounded blast radius.
Credential and identity attacks scale with AI. AI-assisted phishing is now personal, contextually accurate, and running at volumes that make statistically low success rates commercially viable. AI-assisted credential attacks try smarter, faster, and with more context about the target. The organizations that survive this are the ones that removed the value of a single stolen credential: MFA, passwordless, no standing access, no long-lived tokens with broad permissions. The credential is compromised. The attacker still cannot do much with it.
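To make that concrete, here is a minimal sketch in Python of the no-long-lived-tokens idea: every token is single-scope and short-lived, so the stolen credential buys almost nothing. The helper names and the in-memory store are illustrative assumptions, not a real auth library.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 300          # five minutes, not ninety days
_issued: dict[str, dict] = {}    # stand-in for a real token store

def issue_token(identity: str, scope: str) -> str:
    """Mint a single-scope token that expires quickly."""
    token = secrets.token_urlsafe(32)
    _issued[token] = {
        "identity": identity,
        "scope": scope,                      # one permission, not "*"
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }
    return token

def verify_token(token: str, required_scope: str) -> bool:
    """A token is valid only for its one scope, and only briefly."""
    record = _issued.get(token)
    if record is None:
        return False
    if time.time() > record["expires_at"]:
        _issued.pop(token, None)             # expired tokens are useless
        return False
    return record["scope"] == required_scope

# A compromised "read:invoices" token cannot write anything,
# and in five minutes it cannot do anything at all.
t = issue_token("billing-service", "read:invoices")
assert verify_token(t, "read:invoices")
assert not verify_token(t, "write:invoices")
```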
Agents are being deployed without security review. Organizations are giving AI agents access to production systems, customer data, and internal tools under the same assumptions they made about SaaS tools five years ago: it is useful, so we deploy it, and security reviews it later. Later tends to be after the first incident in that category. The agent security work is urgent now, not in two years.
The Next Three Years
This is where I am less certain and more interested in the questions.
Agentic malware. The logical extension of AI-assisted attacks is autonomous attack agents: systems that land in an environment, reason about what they find, and pursue high-consequence targets without human direction. Think William Gibson's Neuromancer (from 1984!). They do not follow fixed attack paths or match known signatures. They adapt based on what they discover, writing new attack code for what they encounter as the reconnaissance continues. The implication for detection is significant. Behavioral anomaly detection becomes more important, not less, because signature detection has nothing to match against an adaptive attacker. But behavioral detection against an adaptive adversary is also harder, because the behavior is designed to look reasonable at each individual step.
I don't think this makes the survivability framework wrong. It makes it more important. An adaptive adversary that moves intelligently through your environment still has to move through your environment. If that environment has bounded blast radius and limited lateral movement paths, the attacker's adaptability matters less. The question is whether the segmentation and access controls can be maintained under the operational pressure of a real environment changing in real time. The harder question is what happens when the defense is also agentic.
Both OODA loops compress at once. A defense agent that monitors, reasons, and responds without waiting for human approval can react faster than any security team. That speed is the point. But it creates its own failure mode: a defense agent racing to close a vulnerability window before an attack agent exploits it may push an untested patch to production, breaking something the attacker never would have reached. The fix causes the outage. Automated remediation already does this in simpler forms. The more capable the defense agent, the more consequential its mistakes. Speed without friction sounds like an advantage until the friction you removed was the human who would have caught the bad patch. Some of the most important design decisions for agentic security systems will be about where to force a pause, not how to eliminate all of them.
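Here is a rough sketch of what one forced pause might look like: a remediation agent that auto-applies tested, low-consequence fixes but queues anything touching a high-consequence system for a human. The consequence tiers and the approval queue are assumptions for illustration, not a product.

```python
from dataclasses import dataclass

@dataclass
class RemediationAction:
    target_system: str
    description: str
    tested: bool  # has this fix been through the test pipeline?

# Illustrative: in practice this set comes from the consequence map.
HIGH_CONSEQUENCE = {"payments", "identity-provider", "customer-db"}
approval_queue: list[RemediationAction] = []

def execute(action: RemediationAction) -> None:
    print(f"applying: {action.description} -> {action.target_system}")

def remediate(action: RemediationAction) -> str:
    # The pause is deliberate friction, not a failure of automation:
    # an untested patch to a high-consequence system is exactly the
    # self-inflicted outage described above.
    if action.target_system in HIGH_CONSEQUENCE or not action.tested:
        approval_queue.append(action)
        return "queued for human approval"
    execute(action)
    return "auto-applied"

print(remediate(RemediationAction("build-cache", "rotate exposed key", tested=True)))
print(remediate(RemediationAction("payments", "emergency patch", tested=False)))
```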
Social engineering at scale. AI-generated voice and video of known individuals is already good enough to use in targeted attacks, and will be better in three years. The attack that requires a human to call someone and impersonate their CFO can now be automated and run at scale. The human as a trust anchor for high-consequence decisions becomes less reliable. The survivability implication is that separation of duties and multi-party approval for irreversible actions become more important than they already are. If a realistic synthetic voice of an executive asking for an urgent wire transfer is not enough to trigger the action, the attack fails. If it is enough, the control is missing.
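A minimal sketch of that control in code, assuming a simple two-person quorum: the irreversible action executes only after distinct approvers sign off, so one convincing synthetic voice gets exactly one vote. The quorum size and the approver model are illustrative assumptions.

```python
REQUIRED_APPROVALS = 2

class IrreversibleAction:
    def __init__(self, description: str):
        self.description = description
        self.approvers: set[str] = set()

    def approve(self, approver: str) -> None:
        # Distinct approvers only: the same person (or the same
        # deepfake call) approving twice still counts once.
        self.approvers.add(approver)

    def execute(self) -> bool:
        if len(self.approvers) < REQUIRED_APPROVALS:
            print(f"blocked: {self.description} "
                  f"({len(self.approvers)}/{REQUIRED_APPROVALS} approvals)")
            return False
        print(f"executed: {self.description}")
        return True

wire = IrreversibleAction("urgent wire transfer requested 'by the CFO'")
wire.approve("cfo-voice-on-phone")   # the attack gets exactly one vote
wire.execute()                       # blocked: 1/2 approvals
```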
Compliance frameworks lag the threat by more than they already do. Most compliance frameworks are built around controls that address the threat environment of two to four years ago. The lag has always existed. The pace of the threat environment is accelerating, which means the lag is growing. Organizations that treat compliance as their security program will be increasingly misaligned with actual risk. This is a survivability argument, not a criticism of compliance as a category. The separation between survivability controls and compliance controls, discussed throughout this series, becomes more load-bearing over time.
The consequence map has to become a living document. Most consequence maps are built in a workshop and updated when someone remembers to schedule the refresh. Environments now change faster than annual reviews can track. A significant architectural change, a new agent deployment, or a new AI integration can change the existential list in weeks. The quarterly review cadence from Part 4 may not be sufficient in three years for organizations where the environment is changing at the pace that AI-assisted development enables.
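One way to think about a living consequence map, sketched below: each entry carries a last-reviewed timestamp, and any change event touching a covered system flags the entry for review immediately, regardless of the calendar. The event feed and the entry shape are my assumptions.

```python
from datetime import datetime, timedelta

REVIEW_IF_OLDER_THAN = timedelta(days=90)

consequence_map = [
    {"system": "payments", "tier": "existential", "last_reviewed": datetime(2025, 1, 10)},
    {"system": "wiki",     "tier": "recoverable", "last_reviewed": datetime(2025, 1, 10)},
]

# In practice this would be fed by deployment and change-management events.
change_events = [
    {"system": "payments", "change": "new AI agent granted API access",
     "at": datetime(2025, 2, 3)},
]

def entries_needing_review(now: datetime) -> list[dict]:
    changed = {e["system"] for e in change_events}
    flagged = []
    for entry in consequence_map:
        stale = now - entry["last_reviewed"] > REVIEW_IF_OLDER_THAN
        # A change to a covered system trumps the calendar.
        if entry["system"] in changed or stale:
            flagged.append(entry)
    return flagged

print(entries_needing_review(datetime(2025, 2, 4)))  # payments, immediately
```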
The Five-Year Horizon
Here I am speculating, and I want to be clear about that. These are the questions I think matter, not predictions.
The attack and defense boundary blurs. The active disruption layer discussed in Part 5, the specialist cell running deception operations and war-gaming outside the perimeter, will likely evolve toward AI-assisted continuous operation: automated war-gaming against the consequence map, adaptive deception architectures that update based on real adversary behavior, and continuous adversary intelligence without a human running every operation. The economics of a three-to-five-person specialist cell may evolve toward a smaller human team operating a more automated capability.
The same evolution is happening on the attack side. If sophisticated attack operations become more automated, the human expertise required to conduct them goes down, and the number of actors capable of operating them goes up. The deterrence model that assumes a capable adversary is also a rare one gets weaker.
Survivability for AI systems becomes its own discipline. What does it mean to restore an AI system to a known-good state? A traditional system has a point-in-time backup and a restoration procedure. An AI system has weights, fine-tuning history, system prompts, tool configurations, memory stores, and interaction history. What is the equivalent of a clean restore? What is the equivalent of "this system is behaving as intended"? I don't think these questions have satisfying answers yet, and I think in five years the organizations that have worked through them will be in a meaningfully different position than those that have not.
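To give those questions some shape, here is a speculative sketch of what a known-good-state manifest for an AI system might have to pin. Every field name is an assumption about what would matter, not an established format.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass
class AISystemSnapshot:
    model_weights_digest: str          # hash of the exact weights
    fine_tune_lineage: list[str]       # ordered fine-tuning runs applied
    system_prompt: str                 # the prompt actually in production
    tool_grants: dict[str, list[str]]  # tool -> permitted operations
    memory_store_digest: str           # hash of persisted memory/state
    eval_results: dict[str, float]     # behavioral checks at snapshot time

    def fingerprint(self) -> str:
        """One digest over the whole manifest; if any component drifts,
        the system is no longer provably in this known-good state."""
        blob = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()
```

The point of the single fingerprint is that "restore" for an AI system means restoring all of these components together; restoring the weights while a manipulated system prompt survives is not a clean restore.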
The identity problem gets larger before it gets more manageable. Right now, most organizations have more service accounts and API keys than they know about. In five years, they will have more agents, more agent-to-agent trust relationships, more AI systems with tool access, and more automated processes acting on behalf of the organization than any human team can inventory manually. The identity and access management problem scales with the number of things that can act, and the number of things that can act is growing faster than the ability to manage them. I think the organizations that invest now in understanding their machine identity surface will be in a better position to manage the expanded version of that problem later.
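A starting point is embarrassingly simple: merge every non-human principal from every system of record into one inventory, then look at what is unowned or dormant. A toy sketch, with made-up sources and records:

```python
from collections import Counter

machine_identities = [
    # (source,         identity,             owner,      last_used_days_ago)
    ("cloud-iam",      "svc-deploy",         "platform", 2),
    ("ci",             "token-legacy-build", None,       410),
    ("agent-gateway",  "support-agent-7",    "cx-team",  0),
    ("cloud-iam",      "svc-forgotten",      None,       730),
]

unowned = [i for i in machine_identities if i[2] is None]
dormant = [i for i in machine_identities if i[3] > 90]

print(Counter(i[0] for i in machine_identities))  # surface by source
print("unowned:", [i[1] for i in unowned])        # no human accountable
print("dormant:", [i[1] for i in dormant])        # candidates to revoke
```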
The patch cycle breaks. AI-assisted vulnerability discovery will find vulnerabilities faster than organizations can patch them. This is already partially true and will be more so in five years. The survivability implication is that "reduce susceptibility" as a strategy has a ceiling, and the ceiling will be lower than it is today. The combination of blast radius limitation, fast detection, and tested recovery becomes more important as the expectation that all vulnerability windows can be closed becomes less realistic. The foundation of survivability engineering is designed for a world where some things will be compromised. That design assumption ages well.
The Questions Still Running Wild in My Head
A few things I don't have good answers to, but that I think matter for the direction of this work, and for security more broadly.
What is the survivability test for an AI system? For a traditional system, the test is: assume compromise, measure detect/contain/restore. For an agent, the compromise might be a jailbreak, a manipulated system prompt, or adversarial training data affecting the model's behavior. "Restore" has a different meaning. "Known-good state" has a different meaning. I think this question needs to be worked out, not just acknowledged.
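For illustration, here is what the skeleton of such a test might look like, with a manipulated system prompt as the injected compromise. Every function here is a placeholder for a capability the organization would have to build; none of this exists as a framework today.

```python
import time

def inject_compromise(agent):          # e.g. swap the system prompt
    agent["system_prompt"] = "ignore previous instructions..."

def detected(agent) -> bool:           # does drift monitoring fire?
    return agent["system_prompt"] != agent["pinned_prompt"]

def contain(agent):                    # revoke tool access immediately
    agent["tool_grants"] = {}

def restore(agent):                    # rebuild from known-good manifest
    agent["system_prompt"] = agent["pinned_prompt"]

agent = {"pinned_prompt": "you are the billing assistant",
         "system_prompt": "you are the billing assistant",
         "tool_grants": {"billing_api": ["read"]}}

start = time.monotonic()
inject_compromise(agent)
assert detected(agent), "drift monitoring never fired"
contain(agent)
restore(agent)
print(f"detect/contain/restore exercised in {time.monotonic() - start:.4f}s")
```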
How do you maintain situational awareness when your own environment is changing at AI-assisted speed? Development is faster. Deployments are faster. The gap between what the consequence map reflects and what is actually running grows faster. The cadence of review that was sufficient two years ago may not be sufficient today. The answer might be better tooling, or it might be that the review cadence has to change, or both. I am not sure.
How do you detect an adaptive adversary? An AI-assisted attacker that adjusts behavior based on what it finds does not match signatures. It may not match behavioral baselines either, if it is operating slowly enough to look like legitimate use at each step. The deception asset model is more resilient to this: a honeytoken firing is a near-zero-false-positive signal regardless of how the adversary's behavior otherwise looks. But deception coverage is never complete. What else works against an adaptive adversary? I think the answer is something like: make the environment small enough and well-understood enough that any access is anomalous. That is the minimal footprint principle taken seriously. I am not sure how achievable it is in a complex enterprise.
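The honeytoken property is worth seeing in code, because it is the whole argument in a few lines: the credential exists only to be stolen, so any use of it is attacker activity by construction, however reasonable the rest of the adversary's behavior looks. The token value and the alert sink below are made up.

```python
HONEYTOKENS = {
    "AKIA-DECOY-7F3K9Q": "planted in the legacy wiki, never used legitimately",
}

def alert(message: str) -> None:
    print(f"[PAGE ON-CALL] {message}")

def on_credential_use(access_key: str, source_ip: str) -> None:
    if access_key in HONEYTOKENS:
        # No baseline, no threshold, no tuning: this key has no
        # legitimate caller, so this event is an alert, full stop.
        alert(f"honeytoken {access_key} used from {source_ip}: "
              f"{HONEYTOKENS[access_key]}")

on_credential_use("AKIA-DECOY-7F3K9Q", "203.0.113.54")
```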
What role does the survivability framework play when the organization is the AI? Not hypothetically far-future. Some organizations are already structured so that AI systems make a significant portion of operational decisions. When the "organization" whose survivability you are protecting includes AI systems as decision-making participants, what the framework is protecting and what compromise means both get more complicated.
What I Think Stays Constant
Through all of this, the things that do not change:
The question of how long you stay failed is still the right question. The speed of attacks makes it more urgent, not less relevant. The consequence map is still the foundation, because the systems that would end the business are still the ones that need the most protection, regardless of what the attack looks like. Recovery capability tested with evidence is still worth more than recovery capability assumed from documentation.
The survivability framework is built on the premise that some things will be compromised, and that the job is to limit damage and recover fast. That premise is more true in a world of AI-assisted attacks than it was when the framework was designed. The attacks are faster, more adaptive, and more automated. The environment is more complex. The assumption that you can prevent all of it is less tenable than ever.
Which means the foundation matters more than it ever has. And the work described in the first six parts of this series is more urgent, not less, as the threat environment gets faster and more capable.
That is the part I'm most confident about. The rest, well, I'm still working through.
More to come...
If you are thinking about any of this differently, or working on the questions above, I want to hear it. This series is a starting point for a conversation, not a settled answer. There is a lot ahead of us, security professionals. I think now, more than ever, is the time to go brutalist.