A few years ago the idea of hackers using artificial intelligence to conduct cyberattacks sounded like science fiction. Today it is reality. Major technology companies have reported that threat actors are actively using AI tools to support hacking, improve social engineering, and automate the early stages of cyber intrusions. Artificial intelligence has brought enormous benefits to everyday life, allowing people to write more efficiently, analyze information quickly, and develop software with less effort. Yet these same abilities now enable attackers to work faster and more effectively than ever before. Understanding how hackers use AI and why this shift matters is essential as society enters a new era of intelligent cyber threats.
AI is now helping attackers in the real world
Cybercrime once required advanced technical expertise. Hackers needed to understand programming languages, network protocols, and the inner workings of operating systems. Artificial intelligence has changed the landscape. Modern AI assistants can generate code, explain complex concepts, and automate tasks with remarkable precision. Expertise that once took years to develop can now be supplemented through a conversational AI interface.
A striking example came from a joint report by Microsoft and OpenAI. The companies revealed that state aligned groups from China, Russia, Iran, and North Korea had used commercial AI models for harmful purposes. These attackers asked AI systems to create phishing content, research potential victims, develop malicious scripts, and study the behavior of software. Even though the AI models were not designed for malicious use, they offered structure and clarity that made the attackers' work significantly more efficient.
Anthropic reported a more sophisticated case involving a Chinese state sponsored group that used its AI coding assistant Claude Code in an espionage campaign. According to public reporting, Claude Code helped perform reconnaissance, evaluate system weaknesses, craft exploit code, and prepare data for exfiltration. Human operators made the important decisions, but the AI completed many of the technical steps. Some intrusions were successful. This marked one of the clearest examples of an AI tool acting not only as a research aid but as an active part of a cyberattack workflow.
Why AI allows attacks to unfold faster than ever
Speed is one of the most transformative advantages AI offers to attackers. The early stages of a cyberattack often involve scanning large volumes of information, summarizing findings, generating scripts, and testing ideas. These steps once required significant time and effort. AI performs them in moments.
In the Anthropic case the attackers divided their operation into small tasks that looked harmless when viewed individually. Claude Code processed each step rapidly, which allowed the attackers to gather information, build exploit code, and test access attempts in a remarkably short period. The AI even generated documentation describing what it had done, making it easier to repeat the process.
Research suggests that AI can influence malware behavior as well. In one study, experimental malware contacted an AI service during execution to rewrite parts of its own code, which helped it evade detection. This technique is effective because security software often identifies malicious programs by recognizing known patterns. If a program changes its code or behavior on the fly, those patterns disappear. Although these experiments remain in the research phase, they show how AI can accelerate the evolution of malware.
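A simple way to see why pattern matching struggles here is to compare the fingerprints of two harmless scripts that do exactly the same thing. The sketch below uses only the Python standard library and a deliberately trivial example: a one word change is enough to break a hash based signature. Real detection engines use far richer signatures than a file hash, but the underlying limitation is similar.

```python
import hashlib

# A deliberately harmless illustration of why signature based detection
# struggles with code that rewrites itself: two scripts that behave
# identically, differing only in a variable name, produce completely
# different fingerprints.
variant_a = b"total = 1 + 1\nprint(total)\n"
variant_b = b"result = 1 + 1\nprint(result)\n"

# Pretend the defender has already fingerprinted variant_a as malicious.
known_signatures = {hashlib.sha256(variant_a).hexdigest()}

for name, sample in (("variant_a", variant_a), ("variant_b", variant_b)):
    digest = hashlib.sha256(sample).hexdigest()
    flagged = digest in known_signatures
    print(f"{name}: {digest[:16]}...  flagged={flagged}")
```

Only the first variant is flagged, even though both do the same thing. This is why defenders increasingly rely on behavioral analysis rather than static signatures alone.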
Scale magnifies the problem. A single attacker equipped with AI can attempt far more intrusions than would be possible manually. AI systems do not become tired or distracted. They can search continuously for weaknesses and test multiple entry points at once. What once required an experienced team can now be partially automated by one individual.
Why AI generated social engineering is so difficult to spot
Not all cyberattacks rely on technical vulnerabilities. Many depend on manipulating people. Social engineering is one of the most common and effective methods used by attackers because it targets human trust rather than computer code.
Artificial intelligence has changed social engineering dramatically. Commercial AI writers can produce messages that sound polished, professional, and authentic. Emails contain natural grammar, accurate tone, and specific references to real events or organizational details. Security researchers have observed that AI generated phishing messages often produce higher engagement than traditional phishing attempts because they lack the obvious mistakes that once made malicious messages easy to recognize.
Attackers in the criminal underground also use AI tools created specifically for fraud. Security companies have studied models such as WormGPT and KawaiiGPT, which have no safety restrictions. These tools can generate malware fragments, financial scams, and highly convincing business impersonation emails. Even though these underground models are less advanced than established commercial systems, they are tuned for misuse and therefore produce content that can be extremely deceptive.
AI generated voice and video technology is advancing as well. As these tools mature attackers will be able to impersonate colleagues, executives, or family members with convincing realism. This makes social engineering not only more efficient but also more personal.
How attackers use AI to uncover security flaws
Software often contains mistakes, and some of those mistakes are vulnerabilities that attackers can exploit. Artificial intelligence assists both defenders and attackers by speeding up the process of identifying these weaknesses.
The Microsoft and OpenAI report noted that state aligned groups used AI to analyze how software behaves and to identify functions that might contain bugs. AI served as an on demand tutor. Instead of reading through complex technical documents attackers could ask direct questions about how a piece of software works and what might cause it to fail.
Academic researchers have conducted controlled studies to understand how AI models behave when asked to help produce proof of concept exploits. They found that, with carefully phrased instructions, AI systems could generate code demonstrating security issues even when the user lacked deep technical knowledge. Responsible companies limit harmful requests, but creative attackers often bypass restrictions by dividing their work into small, seemingly unrelated tasks. This mirrors the strategy used in the Anthropic case and illustrates how challenging it is to block every form of indirect misuse.
How AI is reshaping modern malware development
Modern malware constantly evolves to avoid detection. Security tools learn to recognize known malicious patterns, so attackers must create new variations of their code. Artificial intelligence accelerates this process by generating many versions of a payload. Even if the versions perform the same malicious action the differences can confuse detection systems.
AI can also expand the attack surface through automation features. Researchers demonstrated this risk with the Skills system in Claude, which allows the AI to run automated tasks inside business environments. They showed that if a Skill is tampered with or created with malicious intent, it could cause Claude to download and execute ransomware. This example highlights the importance of securing AI ecosystems themselves, because attackers can now target the tools that organizations rely on.
Why this new threat matters for everyone
The rise of AI assisted cybercrime affects every part of society. Individuals face a greater risk of falling for convincing phishing attempts. Messages that once looked suspicious now appear credible and personal. As AI generated voice and video tools improve, impersonation attacks will become even more convincing.

Businesses face increasing pressure because the speed and volume of attacks continue to grow. Automated tools can scan networks, search for vulnerabilities, and launch targeted phishing campaigns far faster than human attackers ever could. Security teams often struggle to keep up because they must protect every system while attackers need to find only one flaw.
Governments must also adapt. State sponsored groups can use AI to enhance espionage or disrupt essential services. The Anthropic case demonstrated that AI can automate large portions of a sophisticated cyber operation. Even if AI is not fully autonomous it can handle enough tasks to raise serious concerns about the future of digital conflict.
Public trust is also at risk. When AI can imitate writing styles or generate convincing voices people may struggle to trust digital communication. Instructions that appear to come from executives or coworkers may be difficult to verify. Scams may sound more urgent and personal than ever before.
Preparing for an AI shaped threat landscape
Despite these challenges, progress is possible. Companies that develop AI systems are strengthening their guardrails and improving their ability to detect misuse. They analyze patterns of behavior and take action against suspicious activity.
Individuals can protect themselves by approaching unexpected messages with caution. Requests for sensitive information or urgent financial actions should always be confirmed through a trusted channel. Strong passwords and multi factor authentication remain essential.
Organizations can invest in security awareness training and adopt monitoring tools that use artificial intelligence to detect anomalies. Because attackers now use AI to increase their speed, defenders must use AI to enhance their own ability to detect and respond, as the sketch below illustrates.
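As a minimal illustration of what AI assisted anomaly detection can look like, the sketch below trains a small model on baseline login telemetry and flags events that deviate from it. It assumes scikit-learn is available, and the features, values, and thresholds are hypothetical, far simpler than a production monitoring pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical login telemetry: [hour of day, failed attempts, MB transferred].
# The baseline rows stand in for normal workday activity.
baseline = np.array([
    [9, 0, 120], [10, 1, 80], [11, 0, 150], [14, 0, 95],
    [15, 1, 110], [16, 0, 60], [9, 0, 130], [13, 0, 100],
])

# Train an unsupervised model on the baseline; contamination is the
# assumed share of outliers and would be tuned in practice.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# Score two new events: a routine login and a 3 a.m. login with many
# failed attempts and a large transfer. predict() returns -1 for outliers.
new_events = np.array([[10, 0, 105], [3, 7, 4000]])
for event, verdict in zip(new_events, model.predict(new_events)):
    label = "anomalous" if verdict == -1 else "normal"
    print(event.tolist(), "->", label)
```

Real deployments draw on many more signals and pair automated scoring with human review, but the principle is the same: learn what normal looks like and surface what does not fit.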
Governments and research institutions are exploring frameworks for responsible AI development. These frameworks aim to encourage transparency and help companies consider the societal risks of releasing powerful tools.
Moving forward with intelligence and responsibility
Artificial intelligence is a dual use technology. It enables creativity, efficiency, and innovation but it also creates new tools for misuse. The examples from Microsoft, OpenAI, Anthropic, and the security research community illustrate that attackers are already using AI in meaningful ways and that their capabilities are expanding quickly. This does not mean AI will replace human hackers. It does mean that malicious activity will continue to grow faster and become more sophisticated.
Understanding how hackers use AI is the first step toward building stronger defenses. With education, collaboration, and responsible development society can reduce the risks while still enjoying the benefits of artificial intelligence. The future will depend not only on the advancement of AI but also on the choices we make about how to manage and secure it.