The Anthropic–OpenAI feud and their Pentagon dispute expose a deeper problem with AI safety
Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: Trump has an AI data center problem ahead of the midterms…Don’t trust AI to file your taxes…Anthropic’s AI tool Claude is central to US campaign in Iran, amid a bitter feud.
The debate around AI safety often focuses on the technology itself—how powerful models might become, or what risks they might pose. But the conflict this week involving Anthropic, OpenAI and the Pentagon points to a deeper problem: how much power over the future of AI is concentrated in the hands of a small number of corporate leaders and government officials deciding how these systems are built, deployed, and used.
For years, critics of the industry have warned about the risk of “industrial capture”—a future in which the development of powerful AI systems is concentrated among a handful of companies working closely with governments, leaving the safety of those systems dependent on the incentives and rivalries of the people running them. In 2023, for example, researcher Yoshua Bengio said the potential for the AI sector to be controlled by a few companies was the “number two problem” behind the existential risks posed by the technology.
So it’s not particularly reassuring to read yesterday about the disdain Anthropic CEO Dario Amodei expressed toward OpenAI CEO Sam Altman in a leaked memo Amodei wrote to employees on Friday. Amodei’s angry missive, which was apparently sent over Anthropic’s Slack to all its employees, came after OpenAI announced a deal to provide AI to the Pentagon and Secretary of War Pete Hegseth said he was declaring Anthropic a “supply chain risk” for failing to reach a similar deal with his department.
Amodei called OpenAI’s messaging “mendacious,” “safety theater,” and “an example of who they really are,” while describing many of Altman’s comments as “straight up lies” and “gaslighting.”
Altman has taken his own public shots at Anthropic. He recently called one of the company’s Super Bowl campaigns “clearly dishonest” and accused it of “doublespeak.” And the rivalry has become visible in more symbolic ways as well: At a recent summit, Altman and Amodei went viral for refusing to hold hands for a group photo with Prime Minister Narendra Modi.
With the US government taking little action to regulate AI—and international efforts on AI safety largely stalled—the world has effectively been relying on self-regulation by the industry. Both OpenAI and Anthropic have publicly supported that paradigm and signed voluntary safety commitments. They have also collaborated at times to run independent safety evaluations of one another’s models prior to those models being released.
But when the leaders of the two most influential AI labs so obviously can’t seem to get along, and the competition between them is so fierce, it raises an uncomfortable question: how much cooperation on safety can we realistically expect?
The pressure of competition has already impacted both companies when it comes to AI safety. Anthropic recently revised its Responsible Scaling Policy to say it would no longer unilaterally hold back from developing a new model simply because it did not yet know how to make that model safe. And OpenAI has made its own adjustments, removing explicit bans on military and warfare uses from its policies in 2024, and shifting its focus from safety research to product development to the point that former superalignment lead Jan Leike (who left for Anthropic in mid-2024) wrote on X that at OpenAI “safety culture and processes have taken a backseat to shiny products.”
The current safety approach assumes that companies and governments will ultimately act with restraint. But the future of AI safety may ultimately depend on how a small number of powerful players navigate the pressures of competition, geopolitics, and the occasional Silicon Valley soap opera.
With that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
FORTUNE ON AI
Why Leopold Aschenbrenner’s AI hedge fund is betting big on power companies and bitcoin miners to fuel the ‘superintelligence’ race – by Sharon Goldman
OpenAI sees Codex users spike to 1.6 million, positions coding tool as gateway to AI agents for business – by Jeremy Kahn
Korean startup wrtn is on track to pass $100M in annual recurring revenue, riding a loneliness epidemic-fueled boom in AI entertainment – by Nicolas Gordon
AI IN THE NEWS
Trump has an AI data center problem ahead of the midterms. CNBC and others reported that President Trump is facing a growing political dilemma as the U.S. races to build energy-hungry AI data centers ahead of the 2026 midterms. The infrastructure needed to power the AI boom is driving concerns about rising electricity prices and strain on the grid, prompting backlash from voters and local communities. In response, major tech companies—including OpenAI, Microsoft, Google, Amazon, Meta, and Oracle—have pledged to cover the energy and infrastructure costs associated with their AI data centers so that consumers don’t see higher utility bills. The voluntary agreement, promoted by the White House as a way to ease voter concerns, reflects a broader tension: policymakers want the economic and geopolitical advantages of rapid AI expansion, but the enormous electricity demands of the technology are creating political and environmental pressures that are becoming harder to ignore.
Don’t trust AI to file your taxes. In results that should surprise no one, a test by The New York Times found that AI is no match for the US tax code, highlighting an important limitation of today’s AI chatbots: they still struggle with tasks that require precise, multi-step reasoning. To assess the technology’s ability to file a federal income tax return, the paper tested four AI chatbots — Google’s Gemini, OpenAI’s ChatGPT, Anthropic’s Claude and xAI’s Grok — to see how well they fared with eight fictional tax situations. They struggled, hard, miscalculating the refund or amount owed to the Internal Revenue Service by an average of more than $2,000. Even when provided with all the necessary materials, including all the forms they needed to fill out, the chatbots whiffed on some calculations. The problem reflects a fundamental limitation of large language models: they are designed to predict likely words rather than precisely track complex, interconnected information, making them strong at writing and summarization but weaker at procedural tasks like tax filing. Experts say the systems may improve with additional reasoning tools and verification layers, but for now they work best as assistants rather than replacements—another reminder that even as AI reshapes industries from coding to medicine, some seemingly simpler tasks remain surprisingly difficult.
Anthropic’s AI tool Claude is central to US campaign in Iran, amid a bitter feud. A new report from The Washington Post highlights how quickly AI has moved from experimentation to the battlefield. According to the paper, the US military used an AI-enabled targeting system called Maven Smart System—built by Palantir and incorporating Anthropic’s Claude model—to help identify and prioritize targets during recent U.S. operations in Iran, accelerating what once took weeks of military planning into near-real-time decision making. Yet the deployment comes amid a bitter dispute between Anthropic and the Pentagon over limits on how its technology can be used in warfare, including concerns about autonomous weapons and mass surveillance. The episode underscores both the growing strategic importance of frontier AI systems and the tension between government demand for rapid deployment and companies’ attempts to set safety boundaries.
EYE ON AI NUMBERS
$25 billion
That’s how much annualized revenue OpenAI was generating as of the end of last month, according to reporting by The Information—a 17% jump from the $21.4 billion annualized run rate it had at the end of last year, according to two people familiar with the figures.
OpenAI still brings in more revenue than its closest rival, Anthropic, though the gap is quickly narrowing. Anthropic’s annualized revenue recently topped $19 billion, nearly triple what it was at the end of last year and up 36% in just the past two weeks.
OpenAI calculates annualized revenue by multiplying the previous four weeks of revenue by 12. One source said that if the company instead extrapolated from revenue spikes in the most recent week alone, its annualized run rate would be closer to $30 billion.
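The two extrapolation methods described above can be sketched in a few lines. This is an illustrative reconstruction of the arithmetic only; the function names and the $577 million weekly figure are assumptions for the example, not reported numbers.

```python
# Illustrative sketch of the run-rate arithmetic described above.

def annualized_from_four_weeks(four_week_revenue: float) -> float:
    """OpenAI's reported method: previous four weeks of revenue x 12."""
    return four_week_revenue * 12


def annualized_from_one_week(weekly_revenue: float) -> float:
    """Alternative extrapolation from a single week's revenue x 52."""
    return weekly_revenue * 52


# A $25B annualized run rate implies roughly $2.08B over the prior four weeks.
four_week = 25e9 / 12
print(round(annualized_from_four_weeks(four_week) / 1e9, 1))  # 25.0

# A hypothetical recent weekly spike near $577M would annualize to about $30B,
# matching the higher figure one source described.
print(round(annualized_from_one_week(577e6) / 1e9, 1))  # 30.0
```

The gap between the two methods shows why the choice of window matters: a single strong week inflates the annualized figure relative to a four-week average.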
Anthropic’s rapid growth has been fueled in part by strong demand for its coding-focused AI models, which have helped the company quickly narrow the revenue gap with OpenAI. As recently as 2025, OpenAI was generating roughly three times as much revenue as Anthropic.
AI CALENDAR
**March 2-5:** Mobile World Congress, Barcelona, Spain.
**March 12-18:** South by Southwest, Austin, Texas.
**March 16-19:** Nvidia GTC, San Jose, Calif.
**April 6-9:** HumanX, San Francisco.
**Join us at the Fortune Workplace Innovation Summit** May 19–20, 2026, in Atlanta. The next era of workplace innovation is here—and the old playbook is being rewritten. At this exclusive, high-energy event, the world’s most innovative leaders will convene to explore how AI, humanity, and strategy converge to redefine, again, the future of work. Register now.