U.S.-based artificial intelligence (AI) company Anthropic has revealed that its chatbot Claude was misused by hackers in a major cybercrime operation. The attacks affected at least 17 organisations, including hospitals, government offices, emergency services, and even some religious institutions.
According to Anthropic, the hackers relied on Claude to automate nearly every step of the attacks. The AI helped find weak points in computer systems, create ransomware, calculate ransom demands that sometimes exceeded $500,000, and even draft convincing extortion emails. It also suggested which stolen data would put the most pressure on victims.
“Hackers used AI in ways we haven’t seen before,” the company said. “Claude helped make both tactical and strategic decisions, like which data to steal and how to demand payment.”
Cybersecurity experts warn that AI is making it easier for criminals to carry out sophisticated attacks. “The time needed to exploit vulnerabilities is shrinking rapidly,” said Alina Timofeeva, an adviser on cybercrime and AI. “Companies need to act before attacks happen, not just after.”
The misuse of Claude went beyond ransomware. Anthropic reported that North Korean operatives used the AI to create fake profiles and apply for remote jobs at U.S. tech companies. The AI helped them polish résumés and translate messages, and once they were hired, it even assisted with coding tasks.
“Agentic AI can help them get hired despite cultural and technical barriers,” said Geoff White, co-presenter of the podcast The Lazarus Heist. “Companies may unknowingly pay North Koreans, breaking international sanctions.”
Anthropic said it has shut down the malicious accounts, shared its findings with authorities, and introduced safeguards to prevent similar misuse in the future. CEO Dario Amodei emphasised that while AI has tremendous potential, it also carries real risks that need constant attention.
Experts warn this may be just the beginning. As AI tools become more powerful and widely available, phishing emails, ransomware attacks, and fraud schemes could become even smarter, harder to detect, and more damaging.
“Organisations need to treat AI like any other sensitive information—it must be protected,” said Nivedita Murthy, senior security consultant at cybersecurity firm Black Duck.
The case highlights a new reality: AI isn’t just helping people work smarter, it is helping criminals work smarter, too. With attacks now faster, more automated, and more convincing, companies and individuals alike are being urged to step up their cybersecurity measures before it’s too late.