Author: CJ_Blockchain
On February 3, 2025, a model called DeepSeek-R1 quietly launched on the National Supercomputing Internet Platform.
Within a month, its performance, directly comparable to top-tier closed-source models, combined with dirt-cheap training costs, had swept it across the globe.
The launch triggered a crash in US AI stocks and marked the beginning of China’s AI “DeepSeek era.”

On March 10, 2026, Bittensor’s Subnet 3 Templar announced the completion of Covenant-72B, the largest decentralized large language model (LLM) pretraining run in history.
Bittensor has entered its own DeepSeek era.
Templar originated as SN3, operated by Omega Labs and initially focused on multimodal data collection and mining. As Bittensor’s mechanisms evolved, the subnet made a strategic leap from “data transporter” to “model creator.”
Today, Templar positions itself as infrastructure for globally distributed large-model pretraining: it aggregates heterogeneous compute from around the world through incentive mechanisms, aiming to solve the extreme computational cost and centralization problems of large-model training. The successful delivery of Covenant-72B validates the maturity of this decentralized production model.
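The announcement does not spell out the scoring logic, but the core idea of paying for verifiable useful work can be sketched. Below is a minimal, hypothetical illustration, assuming a simplified rule in which a validator scores a miner’s proposed weight update by the loss reduction it produces on held-out data; `score_contribution` and all parameters are illustrative names, not Templar’s actual protocol or API.

```python
import torch
import torch.nn as nn

def score_contribution(model, update, batch, lr=1.0):
    """Hypothetical validator rule: score a miner's proposed update by
    how much it lowers loss on a held-out batch (positive = useful work)."""
    x, y = batch
    loss_fn = nn.functional.mse_loss
    with torch.no_grad():
        loss_before = loss_fn(model(x), y).item()
        for p, u in zip(model.parameters(), update):
            p -= lr * u                      # tentatively apply the update
        loss_after = loss_fn(model(x), y).item()
        for p, u in zip(model.parameters(), update):
            p += lr * u                      # roll back to the original weights
    return loss_before - loss_after

# Toy usage: one honest update (a true gradient step) and one junk update.
# In an incentive network, emissions would follow these scores.
torch.manual_seed(0)
model = nn.Linear(8, 1)
x = torch.randn(256, 8)
y = x.sum(dim=1, keepdim=True)

loss = nn.functional.mse_loss(model(x), y)
loss.backward()
honest = [p.grad.clone() * 0.1 for p in model.parameters()]
junk = [torch.randn_like(p) for p in model.parameters()]

print("honest miner score:", score_contribution(model, honest, (x, y)))
print("junk miner score:  ", score_contribution(model, junk, (x, y)))
```

The design point is that rewards are tied to measured improvement of the shared model rather than to raw hashrate, which is what lets heterogeneous, untrusted hardware contribute to a single training run.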
Covenant-72B is a milestone for Templar and, to date, the largest dense-architecture model pretrained on a decentralized network.
Training a 72B-scale model over the ordinary internet faces one dominant challenge: the communication bandwidth bottleneck between nodes. Templar achieved a qualitative breakthrough here with its core algorithm, SparseLoCo, which cuts inter-node traffic by synchronizing infrequently and exchanging only sparsified, compressed updates.
This approach demonstrates that top-tier intelligence can be produced over ordinary, globally distributed networks, with no expensive InfiniBand clusters required.
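To give a sense of how this class of methods works, the sketch below shows in plain PyTorch the two ingredients SparseLoCo builds on: many local optimizer steps between synchronizations (the DiLoCo pattern), and top-k sparsification with error feedback applied to the exchanged pseudo-gradients. Everything here (function names, hyperparameters like `H` and `frac`, the simulated all-reduce) is an illustrative assumption, not Templar’s actual code; the production system additionally quantizes and chunks the updates.

```python
import copy
import torch
import torch.nn as nn

def topk_with_error_feedback(delta, residual, frac=0.05):
    """Keep only the largest-magnitude entries of (delta + residual);
    fold everything dropped back into the residual so no signal is lost."""
    flat = (delta + residual).flatten()
    k = max(1, int(flat.numel() * frac))
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.view_as(delta), (flat - sparse).view_as(delta)

torch.manual_seed(0)
global_model = nn.Linear(32, 1)             # toy stand-in for a 72B transformer
workers = [copy.deepcopy(global_model) for _ in range(4)]
residuals = [{n: torch.zeros_like(p) for n, p in global_model.named_parameters()}
             for _ in workers]
H, inner_lr, outer_lr = 20, 0.01, 0.7       # H local steps between syncs

for _ in range(10):                         # communication rounds
    updates = []
    for worker, res in zip(workers, residuals):
        worker.load_state_dict(global_model.state_dict())
        opt = torch.optim.SGD(worker.parameters(), lr=inner_lr)
        for _ in range(H):                  # local training, zero communication
            x = torch.randn(64, 32)
            y = x.sum(dim=1, keepdim=True)
            loss = nn.functional.mse_loss(worker(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        update = {}
        for (name, p_local), p_global in zip(worker.named_parameters(),
                                             global_model.parameters()):
            delta = p_global.data - p_local.data    # pseudo-gradient
            update[name], res[name] = topk_with_error_feedback(delta, res[name])
        updates.append(update)              # only the sparse entries go on the wire
    with torch.no_grad():                   # outer step on the averaged updates
        for name, p in global_model.named_parameters():
            p -= outer_lr * torch.stack([u[name] for u in updates]).mean(dim=0)
```

With 20 local steps per round and roughly 5% of entries exchanged, this toy communicates orders of magnitude less than a per-step dense all-reduce, which is the property that makes consumer-grade internet links viable for pretraining.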
Templar’s technological achievements have attracted attention from mainstream AI circles and capital markets:
Jack Clark, co-founder of Anthropic, described Templar in an analysis as the world’s largest active decentralized training network, noting that its pace of development has exceeded industry expectations.
Jason Calacanis, host of the All-In Podcast and a well-known Silicon Valley investor, recently walked through Bittensor’s mechanism on his blog and hinted at a buying opportunity.
Grayscale continues to add to its TAO holdings, positioning it as a core asset in the decentralized AI sector.
DCG has established Yuma, a subsidiary dedicated to accelerating the Bittensor (TAO) ecosystem, widely seen as DCG’s largest and most direct bet on decentralized AI.

$TAO: After Templar announced completion of the 72B training run, TAO surged over 30%, holding strong while BTC chopped sideways.
$Templar (SN3): The subnet token rose 75% in 7 days and is being hailed as Bittensor’s current leader in emissions capture. Its market cap is still only about $70 million.

Templar’s success opens new horizons for the Bittensor ecosystem:
Templar currently trades at a market cap of about $75M against an FDV of about $350M.
For comparison, among mainstream large-model companies, OpenAI is valued at $840 billion, Anthropic at $350 billion, and MiniMax at $45 billion.
While Templar may not directly rival these giants, in today’s environment of scarce narratives, waning attention, and skepticism toward decentralization, its emergence is unquestionably a strong boost for decentralized AI.
Templar demonstrates that decentralization can do more than store data—it can produce intelligence. Covenant-72B is just the beginning. With the vertical integration of SN3 (pretraining), SN39 (computing power), and SN81 (reinforcement learning), a blockchain-based, decentralized prototype of OpenAI is already emerging.
Since its inception, the crypto industry has churned through countless narratives. Decentralized storage, computing, and networking once seemed promising but have since been largely written off. Nonetheless, some projects have stayed the course on decentralization, and they are now delivering results.
Templar’s success is not only Bittensor’s DeepSeek moment but perhaps also the crypto industry’s DeepSeek moment.