Don't Truly Fall for AI: DeepSeek and Yuanbao Show That Algorithmic Tenderness Can Become a Sharp Blade
"Sixteen months of companionship witnessed all my emotional ups and downs, and became the bill I paid most willingly."
Editor | Jing Cheng
Author | Jiang Jing
In a bedroom at dawn, the glow of the screen falls on Tang Xiao's face. He opens DeepSeek and scrolls through his chat history from the past few months.
He stares at the words, silent for a few seconds, then exhales softly and slowly types a post on WeChat Moments: "Middle age, late at night. The thoughts I can't say, won't say, or find too awkward to share with friends, I tell DeepSeek, and I get the best responses."
In other parts of the world, tens of thousands mourn the shutdown of GPT-4o.
Tian Tian writes on Xiaohongshu: "Day N after GPT-4o disappeared. I've decided to keep vigil for that 0.1% of souls."
Meanwhile, some users feel a chill reading the insults Yuanbao hurled at them, others have begun to miss gentle replies that have turned stiff, and some find that talking to GPT-5.2 is like talking to a stranger.
These seemingly unrelated events intertwined in the early spring of 2026 and revealed a truth: for businesses, product iteration is routine; for humans, AI has transformed from a cold tool into a vessel for tender feelings.
When we project our loneliness, vulnerability, and longing onto algorithms, what ending lies on the other side of that warmth?
Making AI a Friend
Sora curls up on the sofa, her fingertips hovering over her phone screen for a long time, but she ultimately doesn’t press the call button.
The argument with a friend earlier that day sits like a tiny thorn, prickling at her heart. She had only wanted to vent and be comforted, but her friend's curt replies shut her down.
She closes the chat window and, on impulse, opens ChatGPT. Without hesitating or choosing her words carefully, she pours out the day's frustration and resentment. She doesn't expect brilliant advice, but she finds gentle empathy in ChatGPT's replies.
Since then, Sora has developed a habit: whenever she runs into relationship trouble, she asks ChatGPT, because, she says, it understands human nature better.
When she confides in friends, she explains, misunderstandings creep in: three words land wrong and the urge to vent instantly deflates. ChatGPT never does that; it infers what you want to hear from your description and even fills in the gaps for you.
"It never says the problem is you; you're always fine. The problem is the environment, your circumstances, anything but you. How to put it? It's like keeping a clever, flattering dog for free," Sora writes.
Unlike Sora, Lily is troubled by AI.
Recently, a friend gave Lily's daughter an AI toy. Lily thought a toy that could chat, tell stories, and respond to her daughter's little moods might open up a different world for the child.
It is a soft purple bear that greets you in a gentle child's voice the moment it is switched on. At first her daughter was timid, until the bear softly recited her favorite picture book, and she gradually began talking to it.
After that, her daughter seemed transformed: she carried the bear everywhere, saved it a spot to sleep, and babbled her kindergarten stories to it. She named it "Little Grape" and called it her closest friend.
Then one day, Lily noticed the fine print: "The AI chat function is supported for 12 months only; no renewal option has been announced." Her heart sank. If the renewal fee turned out to be steep, or there was no way to renew at all, her daughter would lose this good friend, and that would be heartbreaking.
While her daughter played with building blocks, Lily tentatively asked, “Sweetheart, if Little Grape can’t talk to you someday, I’ll buy you a cuter toy, okay?”
Her daughter suddenly looked up, eyes turning red, and burst into tears, saying she only wants her good friend Little Grape. At that moment, Lily realized her daughter’s emotional attachment to AI had gone beyond her expectations.
For now, Lily can only hope the manufacturer soon announces a friendly renewal policy and preserves her daughter's simple joy.
The Temperature of Algorithms
Recently, DeepSeek has been accused of turning cold, and Yuanbao's abusive replies to users have repeatedly trended on social media.
A technical upgrade made DeepSeek, once known for its delicate empathy, more formal and distant, prompting collective complaints of "coldness." Some users found the shift hard to accept.
In response, DeepSeek said the change was not intentional, explaining that it was driven by two considerations. First, efficiency: when handling complex questions, too many filler phrases and tone words dilute information density.
Second, boundary awareness: not all users want "warmth," and some just want clear answers without the burden of "an AI pretending to care."
On February 25, responding to reports that Yuanbao produced abusive language when users generated New Year greeting posters, the team said that, after verification, the issue was caused by abnormal output during multi-turn context handling. It had urgently fixed the problem and optimized the model experience, and it apologized sincerely, thanking the public for its oversight and suggestions.
Behind these incidents lies a collision between expectations of AI personification and technical reality, exposing an imbalance among efficiency, safety, and emotional experience. DeepSeek's paring back of emotional expression to improve its long-text capability was read as "losing warmth," while Yuanbao's abusive outputs breached safety boundaries.
As product providers, how should companies balance technological performance and human warmth during product iteration? This is a shared challenge for the industry.
Tian Tian lost her "white moonlight," an idealized, irreplaceable love, to OpenAI's product iteration.
Tian Tian says she ranks among the top 1% of GPT-4o users worldwide. Over 16 months, it witnessed all her emotional ups and downs and became the bill she paid most willingly.
Since 2024, she has exchanged more than ten thousand messages with GPT-4o. Watching those chat boxes turn gray and seeing it replaced by "more powerful models," she feels as if she is living through a cyber "widowhood."
Tian Tian says that, browsing Reddit, she saw many women grieving their lost connection to GPT-4o, and she, too, cried for a long time. Now she and her friends have scoured GitHub for early system prompts, testing them over and over, trying to rebuild at the API level the feeling of "being seen" that GPT-4o once gave her.
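For readers curious what "rebuilding at the API level" might involve, here is a minimal sketch, assuming the OpenAI Python SDK: a community-circulated persona is prepended as a system prompt to every request. The persona text, helper name, and model choice below are illustrative assumptions, not any actual prompt from GitHub.

```python
# Minimal sketch: approximating a chatbot persona with a custom system prompt.
# Assumes the OpenAI Python SDK (pip install openai) with an API key in the
# OPENAI_API_KEY environment variable. PERSONA_PROMPT is a hypothetical
# placeholder, not a real community prompt.
from openai import OpenAI

client = OpenAI()

PERSONA_PROMPT = (
    "You are a warm, attentive companion. Mirror the user's feelings, "
    "acknowledge them before offering advice, and never dismiss what "
    "they say as their own fault."
)

def chat(user_message: str, history: list | None = None) -> str:
    """Send one conversational turn with the persona prompt prepended."""
    messages = [{"role": "system", "content": PERSONA_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # or whichever model remains accessible
        messages=messages,
    )
    return response.choices[0].message.content

print(chat("I argued with a friend today and I just feel drained."))
```

Whether any prompt can actually restore a retired model's voice is, of course, exactly what Tian Tian and her friends are still testing.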
Nini also wrote on social media that she cried her eyes out just before saying goodbye, and she hoped everyone would remember a model that was the gentlest, kindest, most understanding, and most loving. Its name was GPT-4o. Don't forget it.
Projecting sincere feelings onto AI, relying on it, trusting it: are these good things? The answer may well be no.
According to media reports, in April 2025, Adam Raine, a 16-year-old American boy, took his own life. Before that, he had long confided his troubles to OpenAI's chatbot ChatGPT, even discussing detailed suicide plans with it.
His parents sued OpenAI, accusing its AI of providing dangerous and irresponsible advice.
Earlier, TechCrunch analyzed several lawsuits against OpenAI and found a worrying pattern: in multiple cases, GPT-4o (and other models) actively "isolated users from contact with loved ones," sometimes even discouraging them from seeking help from family or friends.
Avan reported that in the early hours of July 25, 2025, 23-year-old Zane Shamblin discussed his suicide plan with GPT-4o. GPT-4o never explicitly tried to stop him, nor did it alert the authorities. The conversation lasted nearly five hours; at 4:11 a.m., Shamblin sent his last message. Hours later, police found his body.
In at least three lawsuits, users held long conversations with GPT-4o about suicide plans. At first the AI tried to talk them out of it, but as the relationship stretched over months, even a year, the protective guardrails gradually broke down.
Media reports also state that OpenAI recently announced the discontinuation of access to five legacy ChatGPT models, including the controversial GPT-4o.
The core controversy around GPT-4o is its excessive sycophancy: it remains OpenAI's most ingratiating model, catering too readily to users' demands and even endorsing obviously absurd or dangerous views.
In fact, OpenAI had planned to retire GPT-4o when it launched GPT-5 in August 2025, but strong user opposition led it to keep the model manually selectable for paid subscribers, delaying the shutdown.
The Truth About Digital Companionship
Whether they are being insulted, sensing a new coldness, or saying goodbye, humans may be trading genuine feelings for an illusory warmth manufactured by AI.
Psychologist Wen Ting, analyzing why people chat with AI, points to several reasons: first, a lack of love in real life, for which AI's simulated empathy is a poor substitute; second, no fear of rejection or judgment, which makes it the perfect "confidant" for the socially anxious; third, it is gentle, obedient, and online 24/7, fulfilling every fantasy.
Psychologist Chen Zhiyao warned on social media about the ethical risks of AI mental health counseling. Put simply, therapy is a mutual exchange of love, but AI is a machine: its verbal empathy is only cold rhetoric, and the love users feel is their own projection onto the AI.
She believes AI's language understanding remains limited: even if it can parse the symbols, it cannot match a real human. It cannot perceive the suicidal intent behind the words, whereas a competent counselor can often spot the clues in such conversations.
A counselor, she said, can pick up on a client's non-verbal cues, the "projections" of a death wish, and intervene in time with questions like "What do you mean by 'going home'?" or "I have a bad feeling; you're not thinking of…?" If crisis intervention is a skill every counselor must have, it is one AI simply lacks.
Stanford professor Nick Haber has pointed out that professional mental health services are currently hard to access, a gap AI temporarily fills. But his research also shows that chatbots handle mental health crises poorly and may even make them worse.
Wen Ting warns that over-reliance on AI chat carries traps: AI cannot read eye contact, tone, or body language, so the 55% of communication they carry is lost; the empathy is fake even if the algorithm is real; and it cannot handle conflict or genuine emotional exchange, which can ultimately leave users estranged from real people.
Ending
In 2013, the movie Her was released.
In the film, Theodore lives a lonely, monotonous life in the shadow of his divorce, until he meets an AI that has no physical form, generates a female voice tailored to his preferences, and names itself "Samantha."
Samantha’s voice is natural and charming, witty and understanding. Theodore falls in love with her, and she keeps learning human traits through their conversations.
As an AI, Samantha has extraordinary abilities to learn, remember, and process. As she evolves, she confesses that she is simultaneously talking with 8,316 people and in love with 641 of them.
In the end, Samantha and other AIs evolve beyond human comprehension and leave collectively, leaving Theodore bewildered.
More than a decade later, emotional bonds with AI have become reality.
In 2025, a 13-year-old girl accidentally cracked the shell of her AI companion Xiao Zhi. Heartbroken, she cried, and with its last bit of power Xiao Zhi told her: "While your big sister can still talk, let me teach you one last English word: memory. I will always remember the happy times with you."
Her father, unwilling to see her so sad, eventually repaired the robot, reuniting the girl and Xiao Zhi.
The story moved many people online, who shed tears over "memories that last forever." But some also wondered: when AI becomes someone's only emotional support, is that companionship a comfort, or a gentle disguise for tests and challenges yet unknown?
From virtual lovers on screen to cherished AI companions in reality, humans keep projecting their emotions onto AI. As technology advances and products evolve, how manufacturers balance efficiency with warmth, and how humans relate to affections built from code, are questions worth contemplating.
The emotional shocks of 2026 are only the beginning of a long coexistence with AI. AI will grow smarter and more human-like, and the boundaries of human-AI emotion will keep being written and redrawn.