On 13 November 2025, Anthropic announced that it had detected and halted a large-scale, automated cyberattack carried out by state-sponsored hackers. The announcement sets the stage for a new era in cyberattacks, and in the methods used to detect and prevent them.
In the past, running even a basic cyberattack required expertise and time. Today, AI helps less-skilled attackers generate phishing emails, scan for weaknesses, and research attack techniques far faster than before. Tasks that once took hours can now take minutes. This shift gives attackers an advantage they’ve never had: the ability to move at machine speed. This is not pie-in-the-sky alarmism; it is very real, and it is happening right now, as Anthropic’s recent blog post makes clear.
Anthropic uncovered a state-sponsored cyber-espionage campaign in which AI agents performed the majority of tasks autonomously, targeting major tech firms, financial institutions, and government agencies. Anthropic argues that such threats mark an inflection point in cybersecurity and call for stronger detection, greater intelligence sharing, and safeguards across AI platforms. These are fine suggestions, but they come with plenty of caveats.
The problem is that Anthropic’s proposed defense against the attack was to use its own AI. For a vendor’s AI to be both the source of the problem and its solution seems more than a little self-serving.
AI companies are working on safety and security features, but they need to do more. Filters that block harmful requests aren’t enough when attackers can simply rephrase prompts or chain together smaller requests. Providers must build stronger systems that detect suspicious patterns, identify harmful intent across multiple steps, and stop misuse before it becomes action. Collaboration with security teams and faster, more transparent safety updates are essential. Simply suggesting that customers use the provider’s AI model to stop attacks is not enough; a firm plan is required to build confidence that the provider takes the issue seriously.
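To make the multi-step idea concrete, here is a minimal sketch of a session-level monitor that accumulates risk across individually benign-looking requests. The signal tags, scores, and threshold are all hypothetical illustrations, not any provider’s actual safeguard; a real system would use learned classifiers rather than fixed scores:

```python
from dataclasses import dataclass, field

# Hypothetical risk signals. Each tag carries a small score that is
# harmless on its own but adds up across a conversation.
SIGNAL_SCORES = {
    "recon": 0.2,           # e.g., asking how a service fingerprints hosts
    "evasion": 0.3,         # e.g., asking how to avoid logging or detection
    "exploit_detail": 0.4,  # e.g., asking for working exploitation steps
}

BLOCK_THRESHOLD = 0.8  # assumed cutoff; tuning this is the hard part

@dataclass
class Session:
    """Tracks cumulative risk across a multi-turn conversation."""
    scores: list = field(default_factory=list)

    def observe(self, tags: set) -> float:
        """Record one request's signals; return the session's total risk."""
        self.scores.append(sum(SIGNAL_SCORES.get(t, 0.0) for t in tags))
        return sum(self.scores)

session = Session()
for turn_tags in [{"recon"}, {"recon", "evasion"}, {"exploit_detail"}]:
    total = session.observe(turn_tags)
    if total >= BLOCK_THRESHOLD:
        print(f"session risk {total:.1f} >= {BLOCK_THRESHOLD}: escalate or block")
        break
    print(f"session risk {total:.1f}: allow, keep watching")
```

The point of the sketch is the shape of the defense: no single turn trips the filter, but the session as a whole does, which is exactly the gap that per-request filtering leaves open.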
Defenders can’t afford to wait when help may not be coming. Organizations need a data-driven security program that can detect threats as fast as AI can generate them. That means collecting clean, structured telemetry; correlating events quickly; and using analytics to spot unusual behavior in real time.
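As a toy example of what spotting unusual behavior in real time can look like, the sketch below keeps a rolling baseline of per-host event counts and flags spikes with a simple z-score. The event shape, window size, and threshold are assumptions for illustration, not a production detector:

```python
from collections import defaultdict, deque
from statistics import mean, pstdev

WINDOW = 30        # assumed: number of recent intervals kept as baseline
Z_THRESHOLD = 3.0  # assumed: how unusual a spike must be to alert

# Rolling per-host history of events-per-interval counts.
history = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(host: str, count: int) -> bool:
    """Return True if this interval's event count is anomalous for the host."""
    baseline = history[host]
    anomalous = False
    if len(baseline) >= 10:  # need some history before judging
        mu, sigma = mean(baseline), pstdev(baseline)
        if sigma > 0 and (count - mu) / sigma > Z_THRESHOLD:
            anomalous = True
    baseline.append(count)
    return anomalous

# Simulated telemetry: a steady host that suddenly spikes.
stream = [("web-1", c) for c in [12, 11, 13, 12, 14, 11, 12, 13, 12, 11, 12, 95]]
for host, count in stream:
    if observe(host, count):
        print(f"ALERT: {host} logged {count} events, far above its baseline")
```

Real detection pipelines are far richer than a z-score over counts, but the prerequisite is the same: clean, structured telemetry arriving fast enough that the baseline and the comparison happen in near real time.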
Many companies are making Cribl’s data engine for IT and Security the core of their security data program; nothing else provides the same mix of rapid time to value, better data, and cost control. Pairing high-quality data with Cribl’s emerging AI tools, including Cribl Notebooks and agentic options, gives companies previously unavailable ways to gain insight and drive action across their security program. Pair that data with strong hygiene practices, including patching, identity controls, segmentation, and backups, to create a security posture that can contain and recover from attacks as quickly as they appear.
AI will speed up attackers. Our defenses must be ready to move even faster.