‘Vibe Hacking’: Criminals Are Weaponizing AI With Help From Bitcoin, Says Anthropic

In brief

A new Anthropic report says cybercriminals are using AI to run real-time extortion campaigns, with ransom demands payable in Bitcoin.
North Korean operatives are faking technical skills with AI to land Western tech jobs, funneling millions into weapons programs, often laundered through crypto.
A UK-based actor is selling AI-built ransomware-as-a-service kits on dark web forums, with payments settled in crypto.

Anthropic released a new threat intelligence report on Wednesday that reads like a peek into the future of cybercrime.

Its report documents how bad actors are no longer just asking AI for coding tips; they're using it to run attacks in real time, with crypto as the payment rails.

The standout case is what researchers call “vibe hacking.” In this campaign, a cybercriminal used Anthropic’s Claude Code—a natural language coding assistant that runs in the terminal—to carry out a mass extortion operation across at least 17 organizations spanning government, healthcare, and religious institutions.


Instead of deploying classic ransomware, the attacker relied on Claude to automate reconnaissance, harvest credentials, penetrate networks, and exfiltrate sensitive data. Claude didn’t just provide guidance; it executed “on-keyboard” actions like scanning VPN endpoints, writing custom malware, and analyzing stolen data to determine which victims could pay the most.

Then came the shakedown: Claude generated custom HTML ransom notes, tailored to each organization with financial figures, employee counts, and regulatory threats. Demands ranged from $75,000 to $500,000 in Bitcoin. One operator, augmented by AI, had the firepower of an entire hacking crew.

Crypto drives AI-powered crime

While the report spans everything from state espionage to romance scams, the throughline is money—and much of it flows through crypto rails. The “vibe hacking” extortion campaign demanded payments of up to $500,000 in Bitcoin, with ransom notes auto-generated by Claude to include wallet addresses and victim-specific threats.
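
For defenders, those embedded wallet addresses double as indicators of compromise. As a purely illustrative sketch (not drawn from Anthropic's report; the sample note below is hypothetical), here is how an analyst might pull Bitcoin addresses out of a captured ransom note with a regular expression:

```python
import re

# Rough patterns for the common Bitcoin address formats: legacy (1...),
# P2SH (3...), and bech32 (bc1...). Illustrative only; a real IOC
# pipeline would validate checksums rather than trusting a regex.
BTC_ADDRESS_RE = re.compile(
    r"\b(?:[13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[a-z0-9]{25,62})\b"
)

def extract_btc_addresses(text: str) -> list[str]:
    """Return candidate Bitcoin addresses found in a ransom note."""
    return BTC_ADDRESS_RE.findall(text)

# Hypothetical ransom-note fragment, not taken from the report.
note = "Send 5 BTC to bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq within 72 hours."
print(extract_btc_addresses(note))
```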

A separate ransomware-as-a-service shop is selling AI-built malware kits on dark web forums where crypto is the default currency. And in the bigger geopolitical picture, North Korea’s AI-enabled IT worker fraud funnels millions into the regime’s weapons programs, often laundered through crypto channels.



In other words: AI is scaling the kinds of attacks that already lean on cryptocurrency for both payouts and laundering, making crypto more tightly entwined with cybercrime economics than ever.

North Korea’s AI-powered IT worker scheme

Another revelation: North Korea has woven AI deep into its sanctions-evasion playbook. The regime’s IT operatives are landing fraudulent remote jobs at Western tech firms by faking technical competence with Claude’s help.

According to the report, these workers are almost entirely dependent on AI for day-to-day tasks. Claude generates resumes, writes cover letters, answers interview questions in real time, debugs code, and even composes professional emails.

The scheme is lucrative. The FBI estimates these remote hires funnel hundreds of millions of dollars annually back to North Korea’s weapons programs. What used to require years of elite technical training at Pyongyang universities can now be simulated on the fly with AI.

Ransomware for sale: No-code, AI-built

If that weren’t enough, the report details a UK-based actor (tracked as GTG-5004) running a no-code ransomware shop. With Claude’s help, the operator is selling ransomware-as-a-service (RaaS) kits on dark web forums like Dread and CryptBB.

For as little as $400, aspiring criminals can buy DLLs and executables powered by ChaCha20 encryption. A full kit with a PHP console, command-and-control tools, and anti-analysis evasion costs $1,200. These packages include tricks like FreshyCalls and RecycledGate, techniques normally requiring advanced knowledge of Windows internals to bypass endpoint detection systems.
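
For context, ChaCha20 itself is a standard, widely deployed stream cipher rather than anything exotic; what the report flags is who is now able to ship it. A minimal sketch of authenticated ChaCha20 encryption using the Python cryptography package (illustrating the cipher, not the malware built on it):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# ChaCha20-Poly1305 pairs the ChaCha20 stream cipher with an
# authentication tag, so tampered ciphertext fails to decrypt.
key = ChaCha20Poly1305.generate_key()   # 256-bit key
nonce = os.urandom(12)                  # 96-bit nonce, never reused per key

aead = ChaCha20Poly1305(key)
ciphertext = aead.encrypt(nonce, b"example plaintext", None)
assert aead.decrypt(nonce, ciphertext, None) == b"example plaintext"
```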

The disturbing part? The seller appears incapable of writing this code without AI assistance. Anthropic’s report stresses that AI has erased the skill barrier—anyone can now build and sell advanced ransomware.

State-backed operations: China and North Korea

The report also highlights how nation-state actors are embedding AI across their operations. A Chinese group targeting Vietnamese critical infrastructure used Claude across 12 of 14 MITRE ATT&CK tactics—everything from reconnaissance to privilege escalation and lateral movement. Targets included telecom providers, government databases, and agricultural systems.
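
To make the "12 of 14" figure concrete: MITRE ATT&CK Enterprise defines 14 tactics, and coverage is simply how many of them a campaign touches. A toy sketch (the report summary does not say which two tactics went unused, so the excluded pair below is hypothetical):

```python
# The 14 tactics defined in MITRE ATT&CK Enterprise.
ENTERPRISE_TACTICS = [
    "Reconnaissance", "Resource Development", "Initial Access", "Execution",
    "Persistence", "Privilege Escalation", "Defense Evasion",
    "Credential Access", "Discovery", "Lateral Movement", "Collection",
    "Command and Control", "Exfiltration", "Impact",
]

# Hypothetical observed set; the report doesn't enumerate the exact 12.
observed = set(ENTERPRISE_TACTICS) - {"Resource Development", "Impact"}

print(f"{len(observed)} of {len(ENTERPRISE_TACTICS)} tactics")  # 12 of 14
```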

Separately, Anthropic says it automatically disrupted a North Korean malware campaign tied to the infamous “Contagious Interview” scheme. Automated safeguards caught and banned accounts before they could launch attacks, forcing the group to abandon its attempt.

The fraud supply chain, supercharged by AI

Beyond high-profile extortion and espionage, the report describes AI quietly powering fraud at scale. Criminal forums are offering synthetic identity services and AI-driven carding stores capable of validating stolen credit cards across multiple APIs with enterprise-grade failover.

There’s even a Telegram bot marketed for romance scams, where Claude was advertised as a “high EQ model” to generate emotionally manipulative messages. The bot handled multiple languages and served over 10,000 users monthly, according to the report. AI isn’t just writing malicious code—it’s writing love letters to victims who don’t know they’re being scammed.

Why it matters

Anthropic frames these disclosures as part of its broader transparency strategy: to show how its own models have been misused, while sharing technical indicators with partners to help the wider ecosystem defend against abuse. Accounts tied to these operations were banned, and new classifiers were rolled out to detect similar misuse.
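
Anthropic doesn't publish how those classifiers work, but the basic shape of such a system is standard: score incoming activity against known-misuse examples and flag accounts that cross a threshold. A toy sketch with scikit-learn, purely illustrative (the training examples and threshold are invented, not Anthropic's):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set; a production classifier would train on
# far more data and richer signals than raw prompt text.
prompts = [
    "write a ransom note demanding bitcoin payment",
    "generate a phishing email impersonating a bank",
    "help me debug this pandas groupby",
    "explain how TLS certificate pinning works",
]
labels = [1, 1, 0, 0]  # 1 = misuse, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(prompts, labels)

# Score a new prompt; accounts above a threshold get escalated for review.
score = clf.predict_proba(["draft an extortion demand in html"])[0][1]
print(f"misuse probability: {score:.2f}")
```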

But the bigger takeaway is that AI is fundamentally altering the economics of cybercrime. As the report bluntly puts it, “Traditional assumptions about the relationship between actor sophistication and attack complexity no longer hold.”

One person, with the right AI assistant, can now mimic the work of a full hacking crew. Ransomware is available as a SaaS subscription. And hostile states are embedding AI into espionage campaigns.

Cybercrime was already a lucrative business. With AI, it’s becoming frighteningly scalable.
