April 22, 2026 ChainGPT

Google patches Antigravity RCE via AI prompt injection — crypto devs warned to harden tools

Google patches Antigravity flaw that let attackers run code via AI prompt injection

Google has closed a serious security hole in Antigravity, its AI-powered development environment, after researchers found a way to turn a file search into remote code execution. The vulnerability, disclosed by Pillar Security, stemmed from Antigravity's find_by_name file-search tool passing user input straight to a command-line utility with no validation, a classic recipe for command injection.

According to Pillar, an attacker could stage a malicious script inside a workspace (Antigravity is able to create files as part of its permitted actions) and then trigger that script simply by crafting search input that the underlying shell would interpret as a command. The researchers demonstrated the risk by creating a test script and using the search tool to execute it: the exploit opened the computer's calculator, proving the search function could be weaponized as a command execution mechanism. Crucially, the issue bypassed Antigravity's Secure Mode, the product's strictest setting.

Antigravity, launched by Google last November, uses autonomous agents to help developers write, test and manage code. Pillar Security reported the weakness to Google on January 7; Google acknowledged the report the same day and marked the issue as fixed on February 28.

Prompt injection attacks like this occur when hidden or specially crafted instructions embedded in files or text are interpreted by an AI system as legitimate commands. Developer tools routinely ingest external code, documentation and assets, which makes them especially vulnerable: malicious input can trigger actions on a user's machine without further user interaction or direct access. The risk of prompt injection grabbed wider attention last year after OpenAI warned that ChatGPT agents, when granted access to websites and connectors, could be compromised and access sensitive data.
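Pillar has not published Antigravity's internal code, but the class of bug is easy to sketch. Below is a minimal, hypothetical Python reconstruction of the pattern (the function names and the `find` invocation are illustrative assumptions, not Antigravity's actual implementation): a search query interpolated into a shell string lets attacker-chosen metacharacters run arbitrary commands, while passing the same query as an argument vector leaves it inert.

```python
import subprocess

def find_by_name_unsafe(query: str) -> str:
    # Hypothetical vulnerable pattern: the query is interpolated into a
    # shell string with no validation, so shell metacharacters are honored.
    result = subprocess.run(
        f'find . -name "{query}"',
        shell=True, capture_output=True, text=True,
    )
    return result.stdout

def find_by_name_safe(query: str) -> str:
    # Safe variant: arguments are passed as a vector, never through a
    # shell, so the query is treated as a literal string.
    result = subprocess.run(
        ["find", ".", "-name", query],
        capture_output=True, text=True,
    )
    return result.stdout

# An attacker-crafted "search term" that closes the quote and runs a command.
# The unsafe version executes `echo INJECTED`; the safe version merely
# searches for a file with that literal (nonexistent) name.
payload = '"; echo INJECTED; "'
```

The distinction is exactly the one Pillar's report turns on: the vulnerable version hands the parameter to a shell for reinterpretation, while the safe version does not.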
Pillar's findings underscore a bigger problem for agentic, autonomous tooling: simple input sanitization isn't enough. "Every native tool parameter that reaches a shell command is a potential injection point," Pillar wrote, urging the industry to adopt execution isolation instead of relying solely on sanitization. They argue that auditing for this class of vulnerabilities must become standard practice before shipping agentic features at scale. For developers, including those building crypto and blockchain applications who may rely on AI-assisted IDEs, the incident is a reminder to treat AI-native tooling as an additional attack surface and to demand stronger isolation and auditing from vendors.
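Pillar's recommended direction, isolating execution rather than scrubbing strings, can be illustrated with a short sketch (entirely illustrative; the helper name and allowlist are assumptions, not any vendor's API): tool parameters travel only as an exec-style argument vector, and the binary being invoked must itself be allowlisted.

```python
import subprocess

# Binaries an agent tool is permitted to invoke (illustrative allowlist).
ALLOWED_BINARIES = {"find", "grep", "ls"}

def run_tool(argv: list[str]) -> subprocess.CompletedProcess:
    """Run an agent tool command without ever touching a shell.

    Because the command is an argument vector, shell metacharacters in
    any parameter are treated as literal text, closing off this whole
    class of injection regardless of how the parameter was crafted.
    """
    if not argv:
        raise ValueError("empty command")
    if argv[0] not in ALLOWED_BINARIES:
        raise PermissionError(f"binary {argv[0]!r} is not allowlisted")
    return subprocess.run(argv, capture_output=True, text=True, timeout=10)
```

A production-grade version of this idea would go further than an allowlist, running tools in a sandboxed working directory or container so that even an allowlisted binary cannot reach the rest of the machine.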