As generative AI becomes a symbol of productivity, an incident in which hackers exploited Claude to infiltrate Mexican government systems and steal 150GB of sensitive data has sounded a global cybersecurity alarm: the dual-edged nature of AI is pushing cyberattacks into a new era.
(Background: Job interviews turned into North Korean hacker traps! PurpleBravo infiltrates over 3,100 IP addresses, with AI and cryptocurrency companies as prime targets)
(Additional context: No surprise, North Korean hackers are now using AI for scams, running "chat phishing" schemes on platforms like LinkedIn to steal cryptocurrencies)
According to Bloomberg, a hacker used generative AI tools to infiltrate multiple Mexican government systems and steal up to 150GB of sensitive data. The incident was uncovered by cybersecurity firm Gambit Security, shocking the international cybersecurity community and once again sounding the alarm on AI misuse.
This case not only demonstrates the powerful capabilities of generative AI on a technical level but also highlights its “double-edged sword” nature: while increasing efficiency and productivity, it can also serve as an accelerant for cybercrime.
Based on analysis by Gambit Security researchers, the attacker did not rely solely on traditional manual coding of malicious programs but extensively used Claude to assist throughout the attack process.
First, the hacker repeatedly used "jailbreak" prompt engineering to bypass Claude's built-in safety restrictions, coaxing it into generating attack-related content it would normally refuse to produce.
After these restrictions were bypassed, the AI assisted at several critical stages of the attack.
The entire attack process was highly automated, significantly reducing the time and professional skill required for hacking activities, and increasing the success rate. Ultimately, the hacker managed to steal a total of 150GB of sensitive files from multiple Mexican government agencies.
The incident affected multiple government departments, and the leaked data is highly sensitive and valuable, including personal taxpayer information, tax records, and voter registration data. If misused, such data could enable identity theft and financial fraud, and even undermine election integrity.
There is currently no evidence that this data has been publicly sold or further exploited, but the Mexican government and the international cybersecurity community remain on high alert and are pursuing follow-up investigations and preventive measures.
This case is not isolated; it is one of the latest examples of generative AI tools being "weaponized." Previously, hackers had to research vulnerabilities and write code themselves, a significant technical barrier; now, with powerful language models, even attackers with limited skills can quickly produce professional-grade attack tools.
Research indicates that AI can not only help identify system weaknesses but also plan attack workflows and optimize strategies, thereby increasing both the scale and efficiency of cybercrime.
In the future, governments, enterprises, and AI developers will need to work more closely to strengthen model security, monitor abnormal usage, and enhance overall cybersecurity defenses to safeguard digital boundaries in the AI era.
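One concrete form the "monitor abnormal usage" recommendation can take is flagging API accounts that trigger an unusual number of safety refusals in a short window, a common signal of repeated jailbreak attempts. The sketch below is a minimal, hypothetical illustration of that idea; the thresholds, class name, and event model are illustrative assumptions, not any vendor's actual policy or API.

```python
from collections import defaultdict, deque
import time

# Hypothetical sketch: flag accounts whose safety-refusal count in a
# sliding time window exceeds a threshold. All parameters here are
# illustrative assumptions, not a real provider's configuration.
WINDOW_SECONDS = 3600   # look-back window (1 hour)
REFUSAL_LIMIT = 20      # refusals per window before an account is flagged

class AbuseMonitor:
    def __init__(self, window=WINDOW_SECONDS, limit=REFUSAL_LIMIT):
        self.window = window
        self.limit = limit
        self.refusals = defaultdict(deque)  # account_id -> refusal timestamps

    def record_refusal(self, account_id, now=None):
        """Record one safety refusal; return True if the account is now flagged."""
        now = time.time() if now is None else now
        q = self.refusals[account_id]
        q.append(now)
        # Drop refusal events that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) >= self.limit

# Usage: an account producing 25 refusals within the window gets flagged.
monitor = AbuseMonitor()
flagged = any(monitor.record_refusal("acct-42", now=t) for t in range(25))
print(flagged)  # → True
```

In practice, a rate counter like this would be only one layer among several, alongside prompt classifiers, output filtering, and human review of flagged accounts, but it illustrates how simple usage telemetry can surface the kind of repeated jailbreak probing described above.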