OpenClaw has announced a partnership with Google-owned VirusTotal to improve security on ClawHub, its skills marketplace. From now on, every skill uploaded to the platform will be scanned using VirusTotal’s threat intelligence, including its Code Insight feature.
Each skill is given a unique SHA-256 hash and checked against VirusTotal’s database. If the skill is not already known, it is scanned in detail. Skills marked as safe are approved automatically. Suspicious skills are shown with warnings. Any skill identified as malicious is blocked from download. OpenClaw has also said that all active skills will be re-scanned every day.
This step was clearly necessary. Over the past few weeks, security researchers found hundreds of malicious skills on ClawHub. Many of these skills are pretended to be useful tools. In reality, they were stealing data, opening backdoors, or installing malware. Some attacks worked without any user interaction. Others abused prompt injection to turn normal files, web pages, or messages into attack instructions.
The real issue is how powerful OpenClaw skills are. These skills often have deep access to the system. They can read files, send messages, manage credentials, and interact with multiple services. When a malicious skill is installed, the risk is not limited to one app. It can affect every system the agent can access.
This is why relying on trust or manual review was not enough. A skills marketplace for AI agents needs continuous and automated security checks. VirusTotal scanning is not optional in this kind of ecosystem. It is the minimum required to reduce obvious threats.
That said, OpenClaw has also admitted that this will not stop every attack. Prompt injection payloads can still be hidden in clever ways. In my view, this shows that AI agents introduce security problems that traditional tools were never designed to handle. These systems do not just run code. They interpret language and make decisions.
Another concern is how OpenClaw is being used inside companies. Many employees install it on work devices without approval from IT teams. This creates a Shadow AI problem. Once installed, these agents can operate outside normal security controls and monitoring.
Several serious flaws have already been reported. These include plaintext storage of credentials, exposed APIs, leaked tokens, and skills that reveal secrets through logs. In some cases, millions of authentication tokens and private messages were exposed due to simple configuration mistakes.
In my opinion, the biggest lesson here is about speed. OpenClaw became popular very quickly, but its security model did not mature at the same pace. When that happens, misconfiguration and abuse become inevitable. Powerful tools with weak defaults always attract attackers.
OpenClaw has promised more improvements, including a public threat model, a security roadmap, and a full audit of its codebase. These steps matter just as much as malware scanning. If AI agents are going to manage sensitive data and systems, strong guardrails must be built in from the start.







