THE SECURITY BRUTALIST

Security Brutalism Under Real Conditions, Part 1: Introduction

Modern security programs operate in environments that are no longer contained or predictable. Identities sprawl across systems, data moves through layers that are only partially understood, and integrations extend trust in ways that are rarely revisited. Many programs handle this complexity well enough in steady conditions. Under real pressure, however, that same complexity exposes gaps that were never fully addressed, and systems begin to fail in ways that were not anticipated.

In this series, we explore that condition and show how a small number of changes to how Security Brutalism is applied can produce a more resilient security program.

Security Brutalism was created to focus on what holds under stress and to remove what does not. At the same time, many experienced security professionals have raised a consistent challenge. The philosophy is clear, but applying it in a structured, repeatable way inside modern environments has not always been straightforward. That feedback matters, because a philosophy that cannot be executed consistently will eventually be sidelined, regardless of how accurate it might be.

What follows is not a change to Security Brutalism, but an evolution in how it is applied. Survivability Engineering provides the missing layer that turns the philosophy into an operational model. It brings structure, measurement, and repeatability to something that was intentionally minimal, without diluting its intent. The result is a way to work through modern complexity while staying anchored to outcomes that matter under real conditions.

The premise remains direct. Security is what survives contact with reality. Everything else accumulates as noise until it becomes risk.

This shifts how a modern security program is built. Security Brutalism rejects tool-heavy, compliance-driven approaches and replaces them with a requirement that every control, tool, policy, and process justify its existence against three conditions.

  1. Does it reduce susceptibility along real attack paths as they exist today, not as they are documented?
  2. Does it limit damage by constraining what an attacker can reach and execute?
  3. Does it reduce recovery time by improving detection, containment, and restoration under pressure?

If it does not contribute to one of these, it is not neutral. It expands the attack surface and accelerates failure.

Survivability Engineering makes this measurable: every system is evaluated across three dimensions that reflect how it behaves when it is actually attacked.

Susceptibility is defined by reality, not design. It is shaped by identities with excess access, undocumented data flows, inherited trust from integrations, and automation that operates with implicit authority. These conditions form the true attack surface, regardless of what the architecture suggests.

Damage defines the consequence of failure. It is not about which control was bypassed, but what the attacker can do next. Whether the impact remains contained or spreads across systems determines if the incident is manageable or systemic.

Recovery time determines how long the organization remains in a failed state. Detection speed, containment, and restore must be proven under stress. Assumptions do not survive contact with an actual incident.

These dimensions are interconnected. Weakness in one amplifies the others. A system with low susceptibility but slow recovery still fails for extended periods. A system with limited damage but high susceptibility invites constant compromise. Survivability only exists when all three are addressed together.
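The weakest-link relationship between the three dimensions can be sketched in code. This is a minimal illustration, not part of Survivability Engineering itself: the class, field names, and 0.0 to 1.0 scores are all assumptions made for the example, and a real assessment would derive these values from testing, not assignment.

```python
from dataclasses import dataclass

@dataclass
class SystemAssessment:
    """Hypothetical scores from 0.0 (worst) to 1.0 (best)."""
    name: str
    susceptibility_resistance: float  # how hard real attack paths are to exploit
    damage_containment: float         # how tightly blast radius is constrained
    recovery_readiness: float         # how proven detection, containment, restore are

    def survivability(self) -> float:
        # Weakest-link model: the lowest score dominates, mirroring the
        # claim that weakness in one dimension amplifies the others.
        return min(self.susceptibility_resistance,
                   self.damage_containment,
                   self.recovery_readiness)

# A system with low susceptibility but slow recovery still scores poorly.
slow_recovery = SystemAssessment("billing", 0.9, 0.8, 0.2)
print(slow_recovery.survivability())  # 0.2
```

The point of the min() choice is that averaging would hide exactly the failure modes the text describes: a strong score in two dimensions cannot compensate for collapse in the third.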

This model starts from a constraint most programs ignore. Entropy is constant. Security begins degrading the moment a system goes live. Access expands, integrations accumulate, teams change, and controls drift. There is no stable end state, only continuous decay that must be countered.

From that reality, one rule becomes unavoidable. If something does not reduce susceptibility, limit damage, or improve recovery time, it increases risk. There is no neutral complexity.

This forces a different set of questions:

  1. What actually fails when this system stops functioning under pressure? Not what the audit requires, but what the business loses in real terms.
  2. Where are the real attack paths today? Not in documentation, but through the identities, data flows, and trust relationships that exist in practice.
  3. How long does failure last? Not according to a plan, but based on what has been tested, exercised, and proven.

Most programs are not built to answer these questions consistently. They are built to demonstrate coverage, not survivability. They measure alignment instead of resilience, and activity instead of outcome.

Security Brutalism, applied through Survivability Engineering, corrects that by making survivability the only reference point.

Remember the four Laws of Security: know, harden, see, recover?

Know becomes a living understanding of identities, trust relationships, and data movement. Without it, susceptibility cannot be measured.

Harden becomes removal before addition. Anything that does not directly reduce risk is eliminated.

See becomes the ability to detect real compromise as it happens, not to report on what has already occurred.

Recover becomes a practiced capability to contain and restore under pressure, not a documented plan.

This model builds a program that continues to function even when it is no longer secure, rather than one that only appears secure. The question stays the same: when the system is hit, do you recover, or do you remain in a failed state?

That is the only outcome that matters, and it is the one most programs are not designed to face.