THE SECURITY BRUTALIST

Brutalist Security AI Threat Modeling Process

Blaine W. sent this question over the weekend: How would you apply the principles you talk about in your blog to AI and AI-enabled systems threat modeling?

So, after some coffee and stuff, here’s a Brutalist Security threat modeling process for AI and AI-enabled systems, rooted in the Security Brutalism principles: honest, simple, visible, and resilient.

Follow the steps, ask the questions, and write down the answers.

Step 1: Name What’s Real

Brutalist Security Rule: If you can't draw the architecture on a napkin, you don't understand well enough.

Step 2: Define What You’re Defending

Brutalist Rule: Defend what matters, not what’s easy.

Step 3: Expose All Attack Surfaces

Brutalist Rule: No hidden corners. No magic boxes.

Step 4: Assume the AI Will Be Attacked

Brutalist Rule: Plan for failure. Build to survive, not just to function.

Step 5: Kill the Complexity

Brutalist Rule: Complexity is a liability, not a feature.

Step 6: Build Visible Defenses

Brutalist Rule: If you can't see it, you can’t secure it.

Step 7: Acknowledge What You Can’t Protect

Brutalist Rule: Honesty over false assurance.

Output: A Brutalist AI Threat Model

  1. One clear diagram: showing components and data flows.
  2. One-page table: mapping real threats to real defenses.
  3. One hard truth: what the AI system cannot do safely.

Note: The output requires review and adjustment, as the technology evolves too rapidly for this current approach to remain effective. That’s the on-the-ground reality. If you have any suggestions, I’d appreciate hearing them.


AI Threat Modeling Template

1. System Snapshot

Describe in plain language what the AI system does and how it fits into the architecture.

2. What Are We Defending?

List the critical assets in this system that must not be compromised. Below are some examples.

Asset Why It Matters What Happens If It Fails
Model Logic
Training Data
API
Input Validation Logic

3. Threat Surface Map

List potential attack vectors. Don't sanitize or over categorize.

Attack Surface Possible Threats Notes
Input Pipeline Prompt Injection
Training Pipeline Data poisoning, supply chain manipulation
Dependencies Compromised packages
Human Factors Overtrust, misused

4. Assume It Gets Attacked

What's your answer when this system is attacked?

Threat Scenario Detection? Containment? Recovery?
Adversarial Inputs
Prompt Injection
Data Poisoning

5. Kill The Complexity

Where can we cut clutter, remove fargile connections, or simplify trust boundaries?

6. Make It Visible

What's being logged? What's being monitored? What's being watched in real time?

7. One Hard Truth

What can this AI system do safely?

Summary Artifact

Attach to the security document: