Physical Intelligence Introduces MEM Architecture To Give Robots The Memory Needed For Real-World Tasks
In Brief
Researchers developed Multi-Scale Embodied Memory, a system that gives robots short- and long-term memory so they can track progress and complete complex tasks instead of just executing isolated actions.
For years, the dream of a truly helpful household robot has been deceptively close. Robots can already follow commands like “wash the frying pan,” “fold the laundry,” or “make a sandwich.” In laboratory environments, these systems demonstrate impressive dexterity and precision. Yet despite rapid advances in robotic foundation models, something fundamental has been missing: memory.
A robot that can execute a single task is not the same as a robot that can complete a job. Cleaning an entire kitchen, cooking a meal, or preparing ingredients for a recipe requires more than isolated skills. It requires continuity — the ability to remember what has already been done, what still needs to happen, and where everything is located. Without that narrative thread, even the most capable robot becomes surprisingly incompetent.
This is the challenge researchers at Physical Intelligence are now trying to solve with a new architecture called Multi-Scale Embodied Memory (MEM) — a system designed to give robots both short-term and long-term memory so they can perform tasks that unfold over minutes instead of seconds.
The results hint at something important: the future of robotics may depend less on better mechanical hands and more on better cognitive architecture.
Modern robotic models already possess a remarkable library of motor skills. They can grasp fragile objects, manipulate tools, and navigate cluttered environments. But ask a robot to clean a full kitchen — wiping counters, putting groceries away, washing dishes, and organizing utensils — and the limitations quickly become obvious.
Human cognition does this effortlessly. Machines, until recently, have not. Storing every observation a robot sees for minutes or hours is computationally infeasible. But discarding that information leads to chaotic behavior — repeated mistakes, forgotten steps, or actions that contradict earlier decisions. In robotics research, this challenge is sometimes described as “causal confusion,” where systems misinterpret past events and reinforce the wrong behaviors.
The result: robots that look impressive in short demos but struggle to complete real-world tasks.
A Memory System For Physical Intelligence
The MEM architecture addresses this problem by introducing a multi-layered memory structure. Instead of storing everything equally, the system separates memory into two complementary forms:
Short-term visual memory captures recent observations using an efficient video-encoding architecture. This allows the robot to understand motion, track objects across frames, and remember events that happened seconds ago — crucial for precise actions like flipping a grilled cheese sandwich or scrubbing a dish.
Long-term conceptual memory, meanwhile, stores task progress in natural language. Rather than remembering raw visual data indefinitely, the robot writes brief textual “notes” describing what has happened — statements like “I placed the pot in the sink” or “I retrieved the milk from the fridge.”
These summaries become part of the robot’s reasoning process. In effect, the machine builds its own narrative of the task. The system’s reasoning engine then decides two things simultaneously: what action to perform next and what information is worth remembering. This combination allows the model to track tasks lasting up to fifteen minutes — far longer than most previous robotic demonstrations.
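The two-tier structure described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual MEM implementation: the class name, the buffer size, and the string "frames" standing in for encoded video are all assumptions made for clarity.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class EmbodiedMemory:
    """Toy sketch of a two-tier memory in the spirit of MEM.

    - short_term: a bounded buffer of recent observations (stand-ins
      for encoded video frames); old frames are evicted automatically.
    - long_term: brief natural-language notes describing task progress.
    """
    short_term: deque = field(default_factory=lambda: deque(maxlen=30))
    long_term: list = field(default_factory=list)

    def observe(self, frame):
        # Recent observations stay available for motion and object tracking.
        self.short_term.append(frame)

    def note(self, summary: str):
        # Compress what just happened into a short textual note.
        self.long_term.append(summary)

    def context(self):
        # What a reasoning engine would condition on at each step.
        return {
            "recent_frames": list(self.short_term),
            "task_notes": list(self.long_term),
        }

memory = EmbodiedMemory()
for t in range(100):
    memory.observe(f"frame_{t}")  # placeholder for encoded video input
memory.note("I placed the pot in the sink")
memory.note("I retrieved the milk from the fridge")

ctx = memory.context()
print(len(ctx["recent_frames"]))  # 30 — only the newest frames survive
print(ctx["task_notes"][-1])
```

The key property the sketch captures is asymmetry: raw observations are cheap but short-lived, while textual notes are compact enough to persist for the whole task.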
One of the most intriguing capabilities enabled by MEM is in-context adaptation. Robots make mistakes. That is inevitable. But most robotic systems repeat those mistakes endlessly because they have no memory of failure.
The difference becomes obvious in simple experiments. In one test, a robot attempts to pick up a flat chopstick. Without memory, the machine repeatedly tries the same unsuccessful grip. With memory enabled, the robot remembers the failed attempt and tries a different approach — eventually succeeding.
Another example involves opening a refrigerator. From visual data alone, the robot cannot immediately determine which direction the door opens. A memory-less system simply repeats the same action again and again. A memory-enabled robot tries one direction, remembers the failure, and then attempts the opposite side.
These small adjustments represent something profound: the ability to learn within the task itself. Instead of relying entirely on training data, the robot adapts on the fly.
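The fridge-door experiment above can be reduced to a toy loop: a controller that remembers failed strategies moves on to new ones, while a memoryless controller repeats the same failing action. Every name below is illustrative and not taken from the MEM paper.

```python
def attempt_with_memory(strategies, try_action, memory_enabled=True):
    """Toy model of in-context adaptation.

    `strategies` is an ordered list of candidate approaches (e.g. grip
    orientations or door-opening directions); `try_action(s)` returns
    True on success. Returns the winning strategy, or None.
    """
    failed = set()
    for _ in range(10):  # bounded number of attempts
        if memory_enabled:
            # Skip anything we already know does not work.
            remaining = [s for s in strategies if s not in failed]
            if not remaining:
                return None
            choice = remaining[0]
        else:
            # No memory of failure: always repeat the first strategy.
            choice = strategies[0]
        if try_action(choice):
            return choice
        failed.add(choice)
    return None

# The refrigerator example: only one direction actually opens the door.
opens = lambda direction: direction == "pull left edge"

print(attempt_with_memory(["pull right edge", "pull left edge"], opens))
# With memory disabled, the same failing pull is repeated and the loop
# exhausts its attempts.
print(attempt_with_memory(["pull right edge", "pull left edge"], opens,
                          memory_enabled=False))
```

Run with memory enabled, the loop succeeds on the second attempt; run without it, the controller never escapes its first choice.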
Researchers evaluated the memory-enabled system on increasingly complex tasks. First came a relatively simple challenge: making a grilled cheese sandwich. This required short-term memory to manage timing while performing delicate physical steps like flipping bread and plating the sandwich.
Next came a logistical task: retrieving ingredients for a recipe. The robot had to remember which items it had already collected, where they were located, and whether drawers and cabinets had been closed. Finally came the most demanding scenario: cleaning an entire kitchen.
This meant putting objects away, washing dishes, wiping countertops, and tracking which parts of the room had already been cleaned.
The memory-augmented model significantly outperformed versions without structured memory, demonstrating greater reliability and task completion rates.
The difference illustrates a key shift in robotics. Instead of optimizing isolated actions, researchers are now building systems capable of sustained workflows.
Why Memory Is The Next Frontier In Robotics
The broader implication of MEM is that robotics is entering a new phase. For decades, the field focused on perception and control: helping machines see the world and manipulate objects. More recently, large multimodal models have dramatically improved robots’ ability to interpret instructions and execute complex motor behaviors.
But as those capabilities mature, the bottleneck has moved. The next challenge is cognitive continuity — enabling robots to operate over extended periods without losing track of their goals. Memory systems like MEM provide the scaffolding for that continuity. Instead of reacting moment by moment, robots can maintain an internal narrative about their actions, decisions, and environment. This narrative is what allows complex behavior to emerge.
If this approach continues to evolve, the implications extend far beyond cleaning kitchens. Future robots may need to follow instructions that unfold over hours or even days. Imagine telling a home assistant:
“I get home at 6 p.m. — please have dinner ready and clean the house on Wednesdays.”
Executing such a request would require parsing long instructions, planning subtasks, remembering progress, and adapting when things go wrong.
Maintaining a raw video history of every action for that long would be impossible. Instead, robots will likely rely on hierarchical memory systems, where experiences are compressed into increasingly abstract representations.
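The idea of compressing experience into increasingly abstract representations can be sketched with a trivial folding function. In a real system the summarizer would be a language model; here it just concatenates endpoints, and the chunk size and note format are arbitrary assumptions.

```python
def compress(notes, chunk=4):
    """Toy hierarchical compression: fold detailed notes into coarser
    summaries so stored memory grows slowly with task length."""
    summaries = []
    for i in range(0, len(notes), chunk):
        group = notes[i:i + chunk]
        summaries.append(f"{len(group)} steps: {group[0]} ... {group[-1]}")
    return summaries

# Eight fine-grained notes collapse into two coarse summaries; applying
# compress() again would yield an even more abstract level.
notes = [f"washed dish {i}" for i in range(8)]
level1 = compress(notes)
print(level1)
```

Stacking such layers gives the hierarchy the paragraph above anticipates: recent events stay detailed, while older ones survive only as compact abstractions.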
MEM is an early step toward that architecture. It suggests that the key to more capable robots may not be stronger motors or sharper sensors, but better memory — and the ability to reason about it. If robots can finally remember what they are doing, they may also finally be able to finish the job.