ChatGPT voice mode will become more seamless with a new real-time model
Investing.com – According to The Information, OpenAI is developing a new audio model aimed at making conversations with ChatGPT feel less mechanical. The model would allow the AI to adjust its responses in real time when it is interrupted.
Currently, ChatGPT’s advanced voice mode uses a turn-based system: users must finish speaking before the AI processes the audio and generates a response. If a user interjects with a word like “okay” or “uh-huh,” the model stops speaking entirely instead of continuing the conversation naturally.
The new model, called BiDi (short for bidirectional), is designed to process the speaker’s voice continuously so it can adjust its response the moment it is interrupted. This should make conversations more natural than with existing audio models, whose responses are fixed once the AI starts speaking and cannot be changed mid-utterance.
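To make the architectural difference concrete, here is a minimal, hypothetical Python sketch. It is not OpenAI’s actual API; every class and function name is invented for illustration, and plain strings stand in for audio. It contrasts a turn-based loop, whose reply is fixed once generated, with a full-duplex loop that keeps reading incoming frames while “speaking” and can revise its output mid-utterance.

```python
# Hypothetical sketch contrasting turn-based and bidirectional voice loops.
# All names are invented for illustration; strings stand in for audio.

from dataclasses import dataclass


@dataclass
class Update:
    audio: str    # synthesized output (text stands in for audio in this toy)
    revise: bool  # True if this update should replace speech already in flight


def turn_based_reply(utterance: str) -> str:
    # Turn-based: the whole utterance is consumed at once and the reply is
    # fixed; an interruption can only abort playback, never change the reply.
    return f"[fixed reply to: {utterance!r}]"


def bidi_reply(frames: list[str]) -> Update:
    # Full-duplex: the model sees every incoming frame, even while it is
    # "speaking", so a backchannel like "uh-huh" can trigger a revision
    # instead of a hard stop.
    if frames and frames[-1] in ("okay", "uh-huh"):
        return Update(audio="[revised reply, continuing naturally]", revise=True)
    return Update(audio=f"[partial reply after {len(frames)} frames]", revise=False)


if __name__ == "__main__":
    # Turn-based loop: listen -> respond -> speak, strictly in sequence.
    print(turn_based_reply("change my order please"))

    # Bidirectional loop: frames stream in continuously; the model may
    # revise its output mid-utterance rather than stopping.
    frames: list[str] = []
    for frame in ["change", "my", "order", "uh-huh", "please"]:
        frames.append(frame)
        update = bidi_reply(frames)
        print("REVISE" if update.revise else "PLAY", update.audio)
```

The key design difference the sketch highlights is where decoding happens: the turn-based loop decodes only after the user stops talking, while the bidirectional loop runs its decode step on every incoming frame, including during playback, which is what lets it react to interruptions without discarding the whole response.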
However, the technology is not yet ready for release. According to a person familiar with the project, the prototype often begins to malfunction or produce strange sounds after a few minutes of conversation. Although OpenAI initially aimed to release BiDi in the first quarter of this year, the schedule may slip to the second quarter or later.
OpenAI believes that narrowing the performance gap between speech- and text-based models will expand AI usage worldwide, since most people find talking to an AI assistant more natural than sending text messages.
The BiDi model is expected to be especially useful in customer support. For example, if a customer on a call with a retail AI support agent decides to exchange an item instead of returning it, BiDi could let the agent pivot the conversation smoothly without stopping or becoming confused.
A person familiar with the audio model also said it performs better when working with external tools and applications. OpenAI has previously said it plans to improve its audio models for future AI-powered devices operated primarily by voice, and it is considering developing a smart speaker that could check email or book services via voice commands.