Security leaders want machines that can read adversaries the way analysts do. There is clear business value in AI-powered automation engines that can parse threat reports, extract the behaviors that matter, and pinpoint where organizations need to improve their defenses.
The rise of general-purpose LLMs makes this feel within reach. If an AI can summarize complex text, maybe it can read cyber threat intelligence (CTI) and tell SOC analysts what to defend, too.
Unfortunately, that isn’t the case. Threat comprehension is too demanding for general-purpose LLMs. Even with larger context windows and more parameters, the fundamental architecture these systems use can’t reliably generate the results security leaders are looking for.
Threat comprehension is the disciplined act of turning unstructured reporting into operational intelligence that defenders can act on. It goes far beyond extracting tactics or summarizing malware behavior. It requires identifying the exact techniques an adversary used, the environmental conditions that made them possible, and the defensive signals that would reveal them in practice.
Effective threat comprehension also depends on strict alignment to shared frameworks. MITRE ATT&CK, D3FEND, and related models give organizations a common language for describing behavior, identifying defensive gaps, and measuring control effectiveness.
This structure matters. Each procedure must capture commands, file paths, parameters, privileges, software dependencies, and execution context in a repeatable format that analysts can validate and machines can compare at scale.
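As a rough sketch of what such a structured procedure record could look like (the field names here are illustrative assumptions, not Tidal Cyber's actual schema):

```python
from dataclasses import dataclass

# Illustrative sketch only: these field names are assumptions for the example,
# not Tidal Cyber's actual procedure schema.
@dataclass(frozen=True)
class Procedure:
    """One adversary procedure, aligned to a MITRE ATT&CK technique."""
    technique_id: str                            # e.g. "T1059.001" (PowerShell)
    command: str                                 # exact command line described in reporting
    file_paths: tuple[str, ...] = ()
    parameters: tuple[str, ...] = ()
    required_privileges: str = "user"            # "user", "administrator", "SYSTEM", ...
    software_dependencies: tuple[str, ...] = ()
    execution_context: str = ""                  # parent process, host role, or service context

example = Procedure(
    technique_id="T1059.001",
    command="powershell.exe -enc <base64 payload>",
    file_paths=(r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",),
    execution_context="spawned from a malicious Office macro",
)
print(example.technique_id)  # identical records compare equal, so machines can dedupe and diff at scale
```

Because every field is explicit and typed, two extractions of the same behavior can be compared, deduplicated, or diffed automatically rather than reconciled by hand.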
Without this level of precision and schema enforcement, downstream activities break. Coverage mapping, confidence scoring, prioritization, and CTEM workflows all rely on consistent, structured threat objects. General-purpose LLMs aren’t designed to maintain structure in that way.
General-purpose LLMs excel at producing fluent text, but they falter the moment precision, consistency, and structured interpretation become mandatory. Their underlying training and architecture work against the requirements of threat comprehension in four principal ways:
Analysts rely on models like MITRE ATT&CK and D3FEND because they provide a shared language for describing how adversaries operate and how defenders can counter them. These vocabularies standardize techniques, defensive controls, data sources, and relationships. Without them, analysts can’t compare behaviors, measure coverage, or track gaps across tooling and environments.
This structure collapses if the underlying data is inconsistent. A single misaligned technique ID or an improvised label breaks the ability to correlate detections, optimize coverage, or evaluate defensive posture.
Engineering teams depend on precise, repeatable mappings to avoid false assumptions about what their stack can or cannot detect. Without strict schema enforcement, organizations end up with mismatched objects, duplicate behaviors, and coverage calculations they cannot trust.
Controlled vocabularies also make threat objects machine actionable. They allow procedures to feed cleanly into coverage maps, confidence scoring models, content validation, and broader CTEM workflows. If a system fails to preserve these exact identifiers, parameters, and relationships, downstream systems require painstaking manual revision.
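To illustrate what that kind of enforcement might look like in practice, the sketch below checks candidate technique IDs against a deliberately tiny, hard-coded vocabulary; a real pipeline would validate against the full published ATT&CK dataset:

```python
import re

# Hypothetical controlled vocabulary: a real pipeline would load the full published
# ATT&CK dataset rather than a hard-coded handful of IDs.
KNOWN_TECHNIQUE_IDS = {"T1059", "T1059.001", "T1566", "T1566.001", "T1027"}
ATTACK_ID_PATTERN = re.compile(r"^T\d{4}(\.\d{3})?$")

def validate_technique_id(raw: str) -> str:
    """Normalize a candidate ID and reject improvised labels before they reach downstream systems."""
    candidate = raw.strip().upper()
    if not ATTACK_ID_PATTERN.match(candidate):
        raise ValueError(f"not a valid ATT&CK technique ID format: {raw!r}")
    if candidate not in KNOWN_TECHNIQUE_IDS:
        raise ValueError(f"ID not present in the controlled vocabulary: {raw!r}")
    return candidate

print(validate_technique_id("t1059.001"))    # -> "T1059.001"

try:
    validate_technique_id("PowerShell-ish")  # improvised label, rejected outright
except ValueError as err:
    print(f"rejected: {err}")
```

Rejecting or normalizing identifiers at ingestion is what keeps coverage maps and scoring models from silently drifting on mismatched labels.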
Natural Attack Reading and Comprehension (NARC) is an industry-first AI engine purpose-built by Tidal Cyber to solve a single problem: converting unstructured threat reporting into precise, reusable, ATT&CK-aligned procedures correlated with groups, campaigns, and software.
Its internal architecture reflects that purpose: each stage of the engine is built to comprehend threat reporting and parse it into procedural intelligence.
NARC’s structured outputs flow directly into Tidal Cyber’s defensive analytics engine, enabling workflows that generic LLMs cannot support. Coverage Maps use procedure-level attributes to show what a stack can detect or block, based on real adversary behavior details rather than theoretical technique support. This precision exposes meaningful gaps, eliminates guesswork, and gives detection engineers and architects a reliable view of defensive readiness.
The same dataset powers the Confidence Score and defensive stack optimization: Confidence Scores summarize mapped coverage against procedure-level behavior.
Teams also see where tools overlap, where configurations underperform, and where investments produce real value. This closes the loop between adversary behavior and defensive performance, turning threat intelligence into measurable operational improvement.
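Tidal Cyber's scoring model is its own; purely as an illustration of the kind of roll-up that procedure-level data makes possible, a simple coverage ratio over observed procedures might look like this:

```python
# Illustrative only: this is not Tidal Cyber's scoring formula, just the kind of
# roll-up that procedure-level coverage data makes possible.
def coverage_ratio(observed_techniques: list[str], detectable: set[str]) -> float:
    """Fraction of observed adversary procedures (by technique ID) the deployed stack can detect."""
    if not observed_techniques:
        return 0.0
    covered = sum(1 for tid in observed_techniques if tid in detectable)
    return covered / len(observed_techniques)

# Technique IDs pulled from procedures in recent reporting vs. those the stack maps to.
observed = ["T1059.001", "T1566.001", "T1027", "T1059.001"]
stack = {"T1059.001", "T1566.001"}

print(f"coverage: {coverage_ratio(observed, stack):.0%}")  # -> coverage: 75%
```

The value of the roll-up comes entirely from the structured inputs beneath it: if the procedure records or technique IDs are inconsistent, the score measures nothing.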
Tidal Cyber is the first true Threat-Led Defense platform built to flip the traditional defensive model by putting real adversary behavior at the center of your defense strategy.
By mapping techniques, sub-techniques, and procedures to ATT&CK, we reveal exactly where you’re exposed and how attackers actually operate. It’s a level of precision you’ve never had before, empowering your security team to proactively reduce risk and optimize high-impact security investments.
Threat-Led Defense is Tidal Cyber’s unique implementation of Threat-Informed Defense, enhanced with procedure-level granularity to make CTI more relevant and actionable.