Artificial intelligence is changing everything, including the way cybercriminals operate. A new report from Google’s Threat Intelligence Group (GTIG) reveals how hackers are now using AI tools to create malware that can rewrite and evolve itself in real time.
This is not the usual story of attackers using AI to write phishing emails or fake content. Google's latest findings describe something more advanced: malware that "thinks" for itself, querying an AI model as it runs and changing its own code to stay hidden.
One of the most worrying discoveries in the report is a new malware strain called PROMPTFLUX. It uses Google’s own Gemini AI to modify its code as it runs. The malware has a built-in module named “Thinking Robot” that constantly asks the AI to rewrite parts of its program so antivirus systems cannot recognize it.
In simple terms, the malware can keep mutating on its own, a bit like an evolving organism. It does not need its creators to push manual updates, which makes it much harder to detect or stop.
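To make the pattern concrete, here is a minimal, harmless sketch of the general idea: a script that sends one of its own functions to a generative model and asks for a rewritten variant. It assumes Google's google-generativeai Python SDK, a gemini-1.5-flash model name, and an API key in a GEMINI_API_KEY environment variable; all of those are illustrative choices, and the sketch only prints the regenerated code rather than running it. It shows the shape of LLM-driven self-rewriting, not PROMPTFLUX's actual code.

```python
# Harmless illustration of runtime code regeneration via an LLM,
# the general pattern GTIG attributes to PROMPTFLUX.
# Assumes the google-generativeai SDK; the model name and env var are illustrative.
import os
import inspect
import google.generativeai as genai


def greet(name: str) -> str:
    """A trivial function the script will ask the model to rewrite."""
    return f"Hello, {name}!"


def regenerate(func) -> str:
    """Send a function's own source to the model and return a rewritten variant."""
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    source = inspect.getsource(func)
    prompt = (
        "Rewrite this Python function so it behaves identically but uses "
        f"different names and structure:\n\n{source}"
    )
    return model.generate_content(prompt).text


if __name__ == "__main__":
    # A self-modifying program would save and execute the result;
    # this sketch only prints it.
    print(regenerate(greet))
```

The key point for defenders is the loop itself: the program's behaviour is no longer fixed at compile time, because part of its logic is fetched from an AI service every time it runs.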
Another example is PROMPTSTEAL, which was linked to a Russian hacker group known as APT28. This one uses AI to generate commands for stealing files and system data. Instead of storing those commands inside the malware, it creates them on the fly using AI prompts.
FRUITSHELL gives an attacker remote command access to a compromised machine and contains hard-coded prompts meant to slip past LLM-based detection systems. QUIETVAULT steals GitHub and NPM tokens, then uses on-host AI CLI tools and AI prompts to hunt for additional secrets and stage them for exfiltration. PROMPTLOCK uses an LLM to generate and run Lua scripts at runtime to perform actions such as encryption.
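For defenders, one practical takeaway from families like FRUITSHELL is that hard-coded prompts are themselves an artifact that can be hunted for. Below is a deliberately naive sketch of that idea: a script that flags files containing prompt-like strings alongside generative-AI API hostnames. The marker strings are illustrative assumptions, not indicators published by Google, and real detection would need far more than substring matching.

```python
# Deliberately naive hunt for embedded LLM prompts in files.
# Marker strings are illustrative assumptions, not published indicators.
import sys
from pathlib import Path

PROMPT_MARKERS = [
    b"ignore previous instructions",
    b"rewrite this code",
    b"you are a",
]
API_MARKERS = [
    b"generativelanguage.googleapis.com",
    b"api.openai.com",
    b"api-inference.huggingface.co",
]


def looks_suspicious(path: Path) -> bool:
    """Flag files containing both prompt-like text and an AI API hostname."""
    data = path.read_bytes().lower()
    has_prompt = any(marker in data for marker in PROMPT_MARKERS)
    has_api = any(marker in data for marker in API_MARKERS)
    return has_prompt and has_api


if __name__ == "__main__":
    for arg in sys.argv[1:]:
        p = Path(arg)
        if p.is_file() and looks_suspicious(p):
            print(f"[!] possible embedded AI prompt: {p}")
```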
Hackers have also learned to trick AI systems into giving them restricted information. Some pretend to be cybersecurity students or developers testing code. They use clever prompts to bypass filters and get the AI to produce harmful scripts or tools.
According to Google, threat groups from China, Iran, North Korea, and Russia have all been caught using AI models for phishing, exploit research, and even spying.
There is now an underground market full of AI-based hacking tools. Many of these are sold just like normal software, with pricing plans, updates, and even customer support. Some can create phishing pages, find system vulnerabilities, or generate fake voices and videos.
Google’s report shows that AI is becoming a core part of cybercrime. Attackers are not just using it to save time; they are building tools that adapt and scale faster than any human operator could.
Google says it has already removed accounts and blocked tools linked to these malicious activities. It has also updated Gemini’s safeguards so it can better detect and reject dangerous or misleading prompts.
This report is a snapshot of how cybersecurity is entering a new phase. AI is no longer just a tool for developers and businesses; it is now part of the hacker’s toolkit, too.











