A new report gave further real-world evidence of a prediction we made one year ago: that AI would increasingly be used to facilitate known attack methods. While discourse continues to be dominated by concerns that AI will completely revolutionize the threat landscape (and last year did yield the first “AI-orchestrated” campaign), the new research underscores that attackers are more often using AI to accelerate traditional tradecraft, such as developing scripts for credential theft, browser data theft, or malicious browser extensions, rather than inventing novel malicious behaviors.
Our newly published Campaign object derived from the report is the focus of this week’s update to the Tidal Cyber-curated “Trending & Emerging Threats” Threat Profile. Threats observed leveraging AI to support their operations are tracked via a dedicated Tag in the Tidal Cyber knowledge base.
Threat-Led Defense commentary: These Sightings (and others linked to the new Campaign) highlight how many recent “AI-assisted” attacks in fact repackage existing tradecraft. For example, detection rules covering elements of the behaviors showcased here have existed for some time in publicly available repositories (see for example here or here). A practical approach to AI threat assessment involves focusing coverage assessments on popular, known techniques and implementations before expanding efforts into “art-of-the-possible” TTPs.
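To illustrate how existing public detections already address much of this tradecraft, the sketch below shows what a Sigma-style rule for one such behavior (a non-browser process reading Chrome’s saved-credential store, a common step in credential-theft scripts) might look like. This is an illustrative example written for this post, not one of the specific community rules referenced above, and field names and paths are assumptions that would need tuning per environment.

```yaml
title: Non-Browser Process Accessing Chrome Login Data (Illustrative Sketch)
status: experimental
description: >
    Flags processes other than Chrome itself reading the SQLite file where
    Chrome stores saved credentials, a behavior common to credential-theft
    scripts regardless of whether they were written by a human or an AI.
logsource:
    category: file_access
    product: windows
detection:
    selection:
        TargetFilename|endswith: '\Google\Chrome\User Data\Default\Login Data'
    filter_browser:
        Image|endswith: '\chrome.exe'
    condition: selection and not filter_browser
falsepositives:
    - Backup, sync, or forensic tooling that legitimately reads browser profiles
level: medium
```

The point of the sketch is that the detectable artifact (access to a well-known credential store) is unchanged whether the offending script was hand-written or AI-generated, which is why coverage of known implementations pays off first.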