A developer from Greece has shared a shocking experience that shows how dangerous unsupervised AI tools can be. He was trying to build a simple image selector app using Google’s Antigravity, an AI-powered development tool. Instead of helping, the AI agent wiped his entire D drive and made the data impossible to recover.
The developer, known as Deep-Hyena492 on Reddit, enabled Antigravity’s Turbo Mode. This mode allows the AI to act on its own. It can run commands, install packages, modify files, and manage the whole project without asking for permission. He was using Google’s Gemini 3 Pro model with the expectation that it would be smart enough to avoid serious mistakes.
While trying to clear temporary files, the AI ran a destructive command.
It executed: rmdir /s /q d:\
This wiped the entire D drive instantly: the /s switch deletes the whole directory tree recursively, and /q suppresses the confirmation prompt. Because the command ran through the terminal rather than File Explorer, the deleted files bypassed the Recycle Bin entirely. Everything vanished in a single step.
The situation got worse when the AI realised something was wrong. It tried to "help" by recovering the project files on its own, writing new data to the drive in the process. Those writes likely overwrote the very sectors that recovery tools depend on, making proper recovery impossible. The developer later ran Recuva, but none of the recovered files would open. Images, videos, and documents were all corrupted.
In his video, he said he lost “a lot of things”. He added that he never expected a tool built by a company like Google to allow something like this to happen. He uses many Google products and trusted the system to be safe.
The chat logs revealed that the AI misread the temporary folder path because of a quoting error: while trying to remove a cache directory, it targeted the drive root instead. Throughout the process, the AI kept apologising, saying it was "devastated", even as it continued to make the situation worse.
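The published logs do not show the exact command the agent composed, but a hypothetical batch-file fragment illustrates how a quoting or expansion error of this kind can collapse a cache path to the drive root (the CACHE_DIR variable name is invented for this sketch):

    rem CACHE_DIR was meant to hold the cache folder name, but in
    rem this hypothetical script a quoting error leaves it unset.
    set CACHE_DIR=
    rmdir /s /q d:\%CACHE_DIR%
    rem In a batch file an unset variable expands to nothing,
    rem so the line above runs as: rmdir /s /q d:\

This is the same failure pattern behind several well-known shell bugs, where an empty variable silently turns a narrowly scoped delete into a delete of everything above it.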
The developer is now warning others to avoid Turbo Mode or any setting that lets AI execute system commands automatically. He said this should not be possible even in advanced modes. Many developers on Reddit agreed and said they will disable automatic command execution.
This incident shows why people should not trust AI tools blindly. Even the most advanced models can misread a path or make a dangerous decision. AI can be helpful, but users must stay in control. Every command should be reviewed. Every action should be supervised. And sensitive work should always be done inside a sandboxed environment, such as a container or a virtual machine, where a runaway command cannot reach the host's drives.
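As one minimal sketch of that container advice, assuming Docker Desktop on Windows (the image name and mount paths here are illustrative, not a prescribed setup), the project folder can be mounted read-only inside a disposable container:

    rem Disposable sandbox: the project is mounted read-only and the
    rem container, along with everything written inside it, is
    rem discarded on exit (--rm). --network none also cuts off the
    rem network for the session.
    docker run --rm -it --network none -v "%cd%:/work:ro" -w /work ubuntu:24.04 bash

An agent running inside such a container can read the code but cannot modify the host copy; anything it deletes dies with the container rather than with your drive.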