THE SECURITY BRUTALIST

Security Brutalism Under Real Conditions, Part 3: Knowing What You Have

Part 2 established the frame: survivability is the metric, and the question that matters is how long you stay failed when something goes wrong. Before anything else can work, that question requires an honest answer to a simpler one: what do you actually have?

Most inventories are outdated or inaccurate, and most security teams know it. The CMDB has systems no one touches anymore and is missing ones built last quarter. The architecture diagram shows what someone intended two years ago. The identity list in the directory reflects hiring history more than current access. Every mature environment accumulates this kind of drift, and it accumulates silently. No one sets out to build a fictional inventory. It just becomes one under normal operating conditions.

This matters because you cannot evaluate susceptibility for a system you do not know exists. You cannot scope blast radius for a system that appears in no inventory. You cannot recover quickly from a compromise when you learn which system was affected during the incident itself.

The goal of the inventory phase is not completeness on a spreadsheet. It is accurate, queryable knowledge of what exists, what it touches, and what it holds, good enough that you can answer "what can reach this system and what can it reach" within ten minutes for any consequential system. That bar is achievable. A full CMDB with perfect coverage is not, and chasing it delays the work that actually matters.
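That ten-minute bar is easier to hit when connectivity lives as queryable data rather than in diagrams. A minimal sketch of the query, assuming you have exported system-to-system connections as directed edges; the system names and edges here are hypothetical:

```python
from collections import defaultdict, deque

def reachable(edges, start, reverse=False):
    """BFS over directed connectivity edges.

    reverse=False answers "what can this system reach";
    reverse=True answers "what can reach this system".
    """
    graph = defaultdict(set)
    for src, dst in edges:
        if reverse:
            graph[dst].add(src)
        else:
            graph[src].add(dst)
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical (source, destination) network or trust paths.
edges = [
    ("vpn", "billing"), ("billing", "customer-db"),
    ("ci", "artifact-store"), ("billing", "payment-gateway"),
]
print(sorted(reachable(edges, "billing")))                # → ['customer-db', 'payment-gateway']
print(sorted(reachable(edges, "billing", reverse=True)))  # → ['vpn']
```

The data structure matters more than the code: once edges exist in one place, both directions of the question are a single traversal.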

The fastest path to a working inventory is querying sources that already track what exists rather than building from scratch.

Your identity provider is the best first stop. Query it for every application with an SSO or SAML integration and you get every system employees authenticate to centrally in a single pull. This is faster and more accurate than asking teams to report what they own.
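As a sketch of what the pull looks like once you have the provider's response in hand: the field names below (label, status, signOnMode) follow Okta's app object and are an assumption if your provider differs, and the API call itself (e.g. a paginated GET against the apps endpoint) is deliberately left out:

```python
def seed_inventory(apps):
    """Turn an identity provider's app listing into inventory seed rows.

    Assumes Okta-style app objects; adjust field names for other providers.
    Only active integrations seed the inventory.
    """
    return [
        {"name": app["label"], "auth": app.get("signOnMode", "unknown")}
        for app in apps
        if app.get("status") == "ACTIVE"
    ]

# Sample payload shaped like an Okta apps response (hypothetical apps).
apps = [
    {"label": "Payroll", "status": "ACTIVE", "signOnMode": "SAML_2_0"},
    {"label": "Old Wiki", "status": "INACTIVE", "signOnMode": "SAML_2_0"},
]
print(seed_inventory(apps))  # → [{'name': 'Payroll', 'auth': 'SAML_2_0'}]
```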

Follow that with your cloud provider's resource inventory across every account and region: running compute, managed databases, storage buckets, serverless functions, load balancers. If you have multiple accounts across cloud providers, pull them all. Orphaned accounts with running resources are common after team changes and acquisitions.

DNS gives you your external surface. Pull your full zone, enumerate subdomains, and check certificate transparency logs for any certificate issued against your domains, including ones you did not authorize. A subdomain you cannot explain is worth investigating before it becomes the path into something consequential.
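The comparison step is simple once both lists exist. A sketch, assuming you have already pulled certificate subject names from a CT log search (crt.sh or similar) and your zone records; all hostnames here are hypothetical:

```python
def unexplained_names(ct_names, zone_records):
    """Certificate subject names that no DNS record you manage explains."""
    known = {n.lower().rstrip(".") for n in zone_records}
    flat = {n.lower().lstrip("*.") for n in ct_names}  # fold wildcards to base
    return sorted(flat - known)

# Hypothetical CT log results and zone records.
ct_names = ["www.example.com", "*.example.com", "legacy-admin.example.com"]
zone = ["www.example.com.", "example.com."]
print(unexplained_names(ct_names, zone))  # → ['legacy-admin.example.com']
```

Anything the comparison surfaces is exactly the "subdomain you cannot explain" case: investigate it before it becomes the path into something consequential.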

Finance and expense records are underused for this. Accounts payable has a more accurate list of SaaS tools than IT does in most organizations, because payment approval creates a paper trail that adoption does not. Pull SaaS subscriptions from expense records and compare them against what IT knows about. The gap is your shadow IT surface, and it is almost always larger than expected.
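The gap itself is a set difference. A sketch, with hypothetical vendor names; real accounts payable data will need fuzzier name matching than this:

```python
def shadow_it(ap_vendors, it_known):
    """SaaS vendors accounts payable pays for that IT does not track."""
    norm = lambda s: s.strip().lower()
    return sorted({norm(v) for v in ap_vendors} - {norm(v) for v in it_known})

# Hypothetical vendor lists from expense records and the IT register.
ap_vendors = ["Notion", "Figma ", "snowflake"]
it_known = ["notion", "Snowflake"]
print(shadow_it(ap_vendors, it_known))  # → ['figma']
```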

After the automated sources, ask five people: the engineering lead, the IT lead, the product lead, the operations lead, and the engineer who has been at the organization the longest. Give each of them thirty minutes and one prompt: list the systems your team owns or depends on that, if compromised or broken, would cause the most damage. Those conversations surface institutional knowledge about legacy systems, internal tools, and dependencies that appear in none of the automated sources because nobody ever wrote them down.

Identity and Consequence

System inventory is the easier half. The identity inventory is where most organizations have their largest undocumented attack surface, and it is consistently underestimated.

Service accounts, API keys, OAuth tokens, CI/CD credentials, and machine identities outnumber human users in most environments by a factor of three to ten. They accumulate through integrations built and forgotten, automation that outlived its purpose, and team turnover where nobody knew to revoke what the departing person provisioned. They do not show up in headcount. They rarely appear in audits. They just sit there, holding permissions, waiting.

To find them, query your identity provider for non-interactive and service accounts, pull IAM roles and service principals from every cloud account, check your CI/CD system for stored credentials, and run secret scanning against your code repositories. Tokens committed to source code or left in environment variables are a consistent and often unreviewed access path.

For each non-human identity, record what it has access to, when it was last used, and whether it has a documented current owner. Anything with no recent activity and no documented owner gets revoked now. Do not wait for the consequence map to revoke these. An unused credential with no owner is unambiguous attack surface regardless of what it touches. There is no scenario where keeping it is safer than removing it.
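The triage rule is mechanical once owner and last-used data are collected. A sketch; the 90-day staleness window and the identity names are assumptions, not prescriptions:

```python
from datetime import datetime, timedelta, timezone

def triage(identities, stale_after_days=90, now=None):
    """Partition non-human identities into revoke-now and needs-review.

    The rule from the text: no recent activity plus no documented owner
    is unambiguous attack surface. The 90-day window is an assumption.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=stale_after_days)
    revoke, review = [], []
    for ident in identities:
        stale = ident["last_used"] is None or ident["last_used"] < cutoff
        if stale and not ident.get("owner"):
            revoke.append(ident["name"])
        else:
            review.append(ident["name"])
    return revoke, review

# Hypothetical identities with owner and last-used data already collected.
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
identities = [
    {"name": "ci-deploy", "owner": "platform-team",
     "last_used": datetime(2025, 5, 30, tzinfo=timezone.utc)},
    {"name": "legacy-sync", "owner": None, "last_used": None},
    {"name": "report-bot", "owner": None,
     "last_used": datetime(2024, 1, 1, tzinfo=timezone.utc)},
]
revoke, review = triage(identities, now=now)
print(revoke)  # → ['legacy-sync', 'report-bot']
print(review)  # → ['ci-deploy']
```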

This brings us to the consequence map.

With a working system list and a reasonably complete identity inventory, you have what you need to do the work that drives every prioritization decision in the program: mapping what it actually costs when each system fails.

This is not a risk register. It does not need probability scores, maturity ratings, or CVSS numbers. It needs honest answers to one question per system: if this fails or is compromised, what does the business lose, and is that loss recoverable?

Three classifications:

Existential means the realistic worst case produces an outcome the organization does not recover from: regulatory action that shuts down operations, permanent loss of data the business is built on, financial liability that exceeds survival capacity, or reputational damage that permanently destroys customer trust. Calibrate existential to the organization's actual capacity, not to an abstract standard.

High-recoverable means costly but survivable: revenue loss measured in days or weeks, absorbable fines, customer attrition that can be addressed over time.

Low-impact means contained, readily restored, minimal business effect.

When uncertain between existential and high-recoverable, classify as existential. The cost of over-protecting something slightly less critical is lower than the cost of under-protecting something that turns out to matter more than expected.
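The tie-break rule reduces to taking the most severe candidate on the table. A minimal sketch:

```python
# Lower number = more severe; when classifications disagree, severity wins.
SEVERITY = {"existential": 0, "high-recoverable": 1, "low-impact": 2}

def classify(candidates):
    """Resolve disagreement between plausible classifications by taking
    the most severe: torn between existential and high-recoverable
    means existential."""
    return min(candidates, key=SEVERITY.__getitem__)

print(classify({"high-recoverable", "existential"}))  # → existential
print(classify({"low-impact", "high-recoverable"}))   # → high-recoverable
```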

The consequence map is built through structured conversations with system owners, not a form-filling exercise done independently by the security team. Security facilitates. The technical owner and business owner do the talking. For each system, bring in whoever owns it technically, whoever owns it from a business perspective, and any finance or legal stakeholder for systems touching revenue, regulated data, or contractual obligations.

Thirty to forty-five minutes per system. Keep a shared document open and record answers in real time where all participants can see.

The questions, in order:

  1. What does this system do? One sentence, business function, not architecture.
  2. What does it connect to, and what depends on it?
  3. What data does it hold or process?
  4. What happens if it is unavailable for one hour, one day, one week?
  5. What happens if its data is corrupted or destroyed, not just the system being down?
  6. What happens if its data is exfiltrated silently with no disruption to the service? Who could be impersonated with it? What obligations are triggered?
  7. What does recovery look like today? Has anyone timed a full restore?
  8. Is there any realistic scenario where compromise of this system produces an outcome the organization does not recover from?

Ask question eight last. People give more honest answers to the existential question after they have already described specific consequences in response to the earlier questions. Asked cold, the existential question almost always produces optimistic answers.

There are a few things you need to watch for while conducting these sessions:

Optimism bias is the most consistent problem in every mapping session. System owners underestimate the consequence of their own systems being compromised. Counter it by grounding the consequence question in what has already come out of the session: "You told me that admin credentials here would give an attacker access to your billing system and your customer database. What does that look like on day three?" That produces a different answer than asking about impact in the abstract.

Probability creep is when participants push to add likelihood scores to classifications. Redirect directly: the map uses consequence only. Likelihood estimates are too easy to adjust downward and too hard to defend. The question is what happens if compromise occurs, not how likely it is. How likely it is changes based on who is targeting you, what they have already acquired, and dozens of other factors outside your control. What the damage looks like is more stable and more actionable.

Running sessions with only the security team produces a fictional map. The business and technical owners must be in the room. Security does not know what financial exposure a specific system creates, or what a customer trust failure looks like from the product side, or what the operational dependency chain is for a system three teams away.

The End Result

A working system list with owners and data categories. A non-human identity list with access and activity status. A picture of what is externally reachable and what it connects to internally. A consequence map with systems ranked by what failure actually costs, each classified as existential, high-recoverable, or low-impact, each with a documented realistic worst-case scenario and a recovery status: tested with a date, untested with a procedure, or unknown.
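One possible record shape for a consequence-map entry; the field names are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MapEntry:
    system: str
    classification: str  # "existential" | "high-recoverable" | "low-impact"
    worst_case: str      # documented realistic worst-case scenario
    recovery: str        # "tested" | "untested" | "unknown"
    recovery_tested_on: Optional[str] = None  # ISO date when recovery == "tested"
    owner: str = ""      # owner carried over from the system inventory

# Hypothetical entry.
entry = MapEntry(
    system="billing",
    classification="existential",
    worst_case="silent exfiltration of payment data",
    recovery="tested",
    recovery_tested_on="2025-04-01",
    owner="payments-team",
)
print(entry.classification)  # → existential
```

Whatever shape you choose, the point is that every entry carries its classification, its worst case, and its recovery status in one queryable place.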

The systems at the top of the existential list are where step 1 depth-work starts. Complete trust relationship mapping, full data flow documentation, and tested recovery procedures go there first. You do not do that work for every system. You do it depth-first, starting where the consequences are worst.

The inventory and the consequence map are not sequential steps. A rough inventory gets you into the first mapping sessions. Those sessions surface gaps in the inventory. The consequence map tells you which gaps to close first. They build together, iteratively. What you end up with is steadily more accurate knowledge of what you have and what it costs, and that is the foundation for every decision that follows.