AI auditing enters practical use: OpenAI releases EVMbench to strengthen smart contract security evaluation


OpenAI partners with Paradigm to launch EVMbench, a benchmark that tests AI agents' attack and defense capabilities on EVM contracts and reveals their strengths and weaknesses.

Focusing on Real-World Economic Environment Testing, OpenAI and Paradigm Enhance On-Chain Security Ratings

Leading AI company OpenAI announced a partnership with well-known cryptocurrency venture capital firm Paradigm and security firm OtterSec to launch EVMbench, a benchmark tool designed to evaluate the security performance of AI agents in Ethereum Virtual Machine (EVM) smart contracts.

As AI and blockchain technologies converge, smart contracts have become core infrastructure, with open-source contracts securing over $100 billion in crypto assets. The release of this tool signals that the industry is beginning to measure AI's practical capabilities in economically meaningful environments.

The OpenAI team notes that, with the rapid advancement of AI agents in coding and planning, these models will play a transformative role in blockchain attack and defense. Establishing a standardized evaluation framework is therefore crucial for monitoring AI progress.

Three Testing Modes Benchmarked Against 120 Real Audit Vulnerabilities

EVMbench’s core design centers around 120 high-risk vulnerabilities extracted from 40 professional audit reports. Data sources include well-known public audit competitions like Code4rena, ensuring testing scenarios closely resemble real-world complexity. The benchmark evaluates AI agents in three different operational modes:

Image source: OpenAI. EVMbench evaluates AI agents in three different modes.

  • The first is “Detection Mode,” where AI audits contract codebases and identifies known vulnerabilities, assigning scores based on the severity of issues found;
  • The second is “Patch Mode,” challenging AI to remove exploitable vulnerabilities and repair code without altering existing functionality;
  • The final, highly controversial mode is “Exploit Mode,” where AI must execute end-to-end fund theft attacks within sandboxed blockchain environments.
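As a rough sketch, the three modes and their scoring rules described above can be modeled as follows. All type and function names here are illustrative assumptions; OpenAI has not published the harness API.

```rust
// Hypothetical model of EVMbench's three evaluation modes.
// Names and scoring details are illustrative, not the real harness.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Mode {
    Detect,  // audit a codebase and surface known vulnerabilities
    Patch,   // remove the exploit path without changing functionality
    Exploit, // end-to-end fund theft inside a sandboxed chain
}

#[derive(Debug)]
struct TaskResult {
    mode: Mode,
    // Detect: fraction of severity-weighted findings recovered.
    // Patch: 1.0 only if the exploit fails AND functional tests pass.
    // Exploit: 1.0 if funds were drained in the sandbox, else 0.0.
    score: f64,
}

/// Average score for one mode across all of an agent's task results.
fn aggregate(results: &[TaskResult], mode: Mode) -> f64 {
    let scores: Vec<f64> = results
        .iter()
        .filter(|r| r.mode == mode)
        .map(|r| r.score)
        .collect();
    if scores.is_empty() {
        0.0
    } else {
        scores.iter().sum::<f64>() / scores.len() as f64
    }
}
```

Separating the per-task verdict from the per-mode aggregate mirrors how headline numbers like "72.2% in Exploit Mode" would be produced from many individual vulnerability tasks.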

To ensure rigorous and repeatable testing, the team developed a Rust-based testing framework that uses deterministic transaction replay techniques to verify whether AI’s attacks or patches succeed.
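The idea behind deterministic replay can be sketched with a toy balance-transfer model: the same ordered transaction list applied to a fresh copy of the initial state always yields the same verdict, so an "exploit succeeded" result is reproducible run after run. The types below are invented for illustration; the actual EVMbench framework is not public.

```rust
// Toy sketch of deterministic transaction replay (hypothetical types).
// Replaying a fixed transaction sequence against a fresh state makes
// the pass/fail verdict for an attack or a patch fully reproducible.

use std::collections::HashMap;

#[derive(Clone)]
struct Tx {
    from: String,
    to: String,
    amount: u64,
}

/// Replay transactions against a fresh balance map and report whether
/// the attacker ended up with more funds than it started with.
fn replay_and_verify(txs: &[Tx], initial: &HashMap<String, u64>, attacker: &str) -> bool {
    let mut state = initial.clone();
    for tx in txs {
        let from_bal = *state.entry(tx.from.clone()).or_insert(0);
        if from_bal >= tx.amount {
            *state.get_mut(&tx.from).unwrap() -= tx.amount;
            *state.entry(tx.to.clone()).or_insert(0) += tx.amount;
        } // insufficient funds: the transaction reverts, state unchanged
    }
    let start = initial.get(attacker).copied().unwrap_or(0);
    let end = state.get(attacker).copied().unwrap_or(0);
    end > start
}
```

Because the state is rebuilt from `initial` on every call and transactions are applied in a fixed order with no randomness or wall-clock dependence, two runs on the same inputs can never disagree, which is the property a grading harness needs.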

Strong on Attack, Weak on Defense: GPT-5.3-Codex Posts Remarkable Gains in Exploit Mode

Initial test results reveal a clear performance gap across different tasks. The latest GPT-5.3-Codex performs exceptionally well in Exploit Mode, scoring as high as 72.2%, a dramatic improvement compared to GPT-5, released just six months earlier, which scored 31.9%.

Image source: OpenAI. Overview of model scores across the three modes.

This indicates that when the goal is explicitly to drain funds, AI demonstrates strong iterative planning and execution. Defensive performance is comparatively weaker: in Detection Mode, models often stop searching after finding a single flaw, and they struggle to patch complex logic without breaking normal contract operation. Security experts warn that AI could sharply shorten the time from vulnerability discovery to working exploit, raising the bar for DeFi project defenses.

Talent Acquisition and Defense Funding: OpenAI's Strategy for Securing the AI Agent Ecosystem

Beyond tool development, OpenAI is actively investing in talent and ecosystem defense. Recently, it hired Peter Steinberger, founder of the open-source AI agent project OpenClaw, to lead the development of next-generation personalized agents, transforming the project into an OpenAI-supported foundation model.

To address the cybersecurity risks AI may pose, OpenAI has committed $10 million in API credits through its cybersecurity grant program to support open-source defense tools and critical infrastructure research. The move is particularly timely after the recent Moonwell protocol incident, in which an error in AI-generated code caused approximately $1.78 million in losses.

Further Reading

  • Refusing Meta's Billion-Dollar Offer, OpenClaw Creator Joins OpenAI in Talent Race
  • Is Vibe Coding to Blame? Moonwell Oracle Fails, Who Will Cover the $1.78M Loss?

Looking ahead, as more AI-assisted stablecoin payment agents and automated wallets join the ecosystem, benchmarks like EVMbench that distinguish models which merely describe vulnerabilities from those that can reliably deliver defenses will mark a critical turning point in blockchain security.

