In April 2025, Sony AI’s table tennis robot Ace won three out of five matches against elite human players under International Table Tennis Federation rules, with licensed umpires presiding. The robot wasn’t a fluke. By December 2025 and into early 2026, Ace began defeating professional-level opponents—players whose reflexes, strategy, and spin mastery define the sport’s upper tier. This wasn’t a demo. It wasn’t a staged PR stunt. It was a regulated match, under real conditions, with a machine on one side of the net.
Key Takeaways
- Ace won three of five matches against elite players in April 2025 and later beat professionals in late 2025 and early 2026.
- The robot uses nine synchronized cameras and three vision systems to track ball speed, trajectory, and spin in real time.
- Unlike AI trained on human play, Ace learned entirely in simulation, developing non-human strategies.
- It operates with eight joints controlling racket position, orientation, and shot dynamics.
- The work was published in Nature, signaling peer-reviewed validation of its technical approach.
This Wasn’t Learned by Watching Humans
That’s the point Peter Dürr, director at Sony AI Zurich and lead of the project, stressed. “The system learns to play not from watching humans,” he said. That line cuts through decades of robotics dogma. Most humanoid or task-specific robots—Boston Dynamics’ Atlas, Tesla’s Optimus, even earlier table tennis bots—rely on imitation learning. They’re fed motion-captured human data, then attempt to replicate it. Ace didn’t do that. It never watched a single rally. No coaching tapes. No analysis of Ma Long’s backhand or Mima Ito’s serve returns. Instead, it trained in a simulated table tennis environment, where physics, spin, and reaction time were modeled in granular detail.
The simulation ran thousands of iterations per second, generating data that would take a human player centuries to accumulate. Over time, the AI developed its own techniques—shots with spin combinations rarely seen in professional play, returns that exploited micro-delays in human reaction time, and positioning that defied conventional footwork logic. The result? A playing style that wasn’t just effective. It was alien.
Professional player Mayuka Taira lost a match to the system. Afterward, she said the robot was difficult to predict because it showed no visible tells—no body lean, no shoulder dip, no subtle weight shift before a smash. In human play, those micro-movements are signals. Ace didn’t fake them. It didn’t have them. It moved with mechanical precision, its joints recalibrating in milliseconds based on incoming ball data. That’s not just faster than human perception. It’s outside the framework of how humans understand competitive rhythm.
Physical AI Is the New Frontier
“Unlike computer games, where prior AI systems surpass human experts, physical and real-time sports like table tennis remain a major open challenge,” Dürr said. That’s the anchor of this entire project. We’ve had AI dominate Go, chess, StarCraft II. But those are closed systems. The board doesn’t vibrate. The pieces don’t spin. The environment doesn’t change between turns. Table tennis? The ball leaves your opponent’s racket at 80 km/h with 9,000 rpm of spin. Then it bounces. Then the air resistance changes. Then it hits your side. Then you swing. The entire sequence—from opponent’s contact to your return—takes under 300 milliseconds. Human reaction time averages 250 milliseconds. That leaves almost nothing for decision-making.
Ace operates in that gap. Its nine synchronized cameras capture the ball’s motion at speeds that make human vision look sluggish. “This is fast enough to capture motion that would be a blur to the human eye,” Dürr said. The system processes that data through three vision systems: one for trajectory prediction, one for spin analysis, and one for real-time racket positioning. All of it happens in under 50 milliseconds.
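The arithmetic behind that gap is worth making explicit. A minimal sketch, using only the figures quoted in the article (a roughly 300 ms rally window, a 250 ms average human reaction time, and Ace’s sub-50 ms perception-to-action pipeline):

```python
# Timing budget for a table-tennis return, using the article's figures.
RALLY_WINDOW_MS = 300      # opponent's contact to required return
HUMAN_REACTION_MS = 250    # average human visual reaction time
MACHINE_PIPELINE_MS = 50   # Ace's stated perception-to-action bound

# What remains for decision-making after perception/reaction.
human_margin = RALLY_WINDOW_MS - HUMAN_REACTION_MS
machine_margin = RALLY_WINDOW_MS - MACHINE_PIPELINE_MS

print(f"Human decision margin:   {human_margin} ms")    # 50 ms
print(f"Machine decision margin: {machine_margin} ms")  # 250 ms
```

The asymmetry is the whole story: the machine has five times the decision margin of a human inside the same rally window.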
Hardware Built for Competition, Not Just Research
The robot’s mechanical design wasn’t an afterthought. It uses eight joints: three for positioning the arm in 3D space, two for adjusting racket angle, and three more to modulate shot force, speed, and spin application. That configuration meets the minimum mechanical requirements for competitive play—no more, no less. This wasn’t about building a humanoid for show. It was about building a machine that could function under ITTF regulations, with the same constraints as a human player.
- Match conditions: regulated court, standard balls, official rules
- Umpired: matches overseen by licensed ITTF officials
- Autonomous: no remote control or human-in-the-loop decisions
- Real-time: all perception and action within human-scale time limits
- Published: full technical details in Nature, peer-reviewed
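The eight-joint breakdown described above can be sketched as a simple structure. The group labels come from the article; the individual joint names are hypothetical placeholders, not Sony AI’s terminology:

```python
# Sketch of Ace's eight-joint layout as described in the article:
# 3 joints for 3D arm positioning, 2 for racket angle, 3 for shot dynamics.
# Individual joint names below are illustrative assumptions.
JOINT_GROUPS = {
    "arm_position_3d": ["axis_x", "axis_y", "axis_z"],          # 3 joints
    "racket_angle":    ["pitch", "yaw"],                        # 2 joints
    "shot_dynamics":   ["force", "speed", "spin_application"],  # 3 joints
}

total_joints = sum(len(joints) for joints in JOINT_GROUPS.values())
print(f"Total joints: {total_joints}")  # 8
```

The point of the minimal count is the design philosophy: enough degrees of freedom to satisfy competitive play, and nothing spent on humanoid appearance.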
The Simulation Advantage
Training in simulation gave Ace a massive edge. In the real world, a player might get 500 serves in during a practice session. Ace ran millions per hour. And because the simulation modeled aerodynamics, friction, and material deformation, the AI didn’t just learn patterns. It learned physics.
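The scale difference can be made concrete. A rough comparison, where the 500-serves-per-session and millions-per-hour figures come from the article, and the sessions-per-year and exact simulation rate are illustrative assumptions:

```python
# Practice-volume comparison. Per-session and per-hour figures are from
# the article; SESSIONS_PER_YEAR and the exact sim rate are assumptions.
HUMAN_SERVES_PER_SESSION = 500
SESSIONS_PER_YEAR = 300            # assumption: near-daily practice
SIM_SERVES_PER_HOUR = 2_000_000    # assumption within "millions per hour"

human_serves_per_year = HUMAN_SERVES_PER_SESSION * SESSIONS_PER_YEAR
human_years_per_sim_hour = SIM_SERVES_PER_HOUR / human_serves_per_year

print(f"Human serves per year: {human_serves_per_year:,}")        # 150,000
print(f"Human practice-years per sim hour: {human_years_per_sim_hour:.1f}")
```

Under these assumptions, one hour of simulation covers more than a decade of human practice volume, which is how “centuries of experience” becomes plausible over a full training run.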
This approach sidesteps one of robotics’ biggest bottlenecks: the reality gap. Most simulated robots fail when moved to real environments because the physics aren’t accurate enough. But Sony AI’s simulation was calibrated using real ball-tracking data from high-speed cameras—likely the same systems used in professional match analysis. That closed the loop. The AI didn’t just play in a fake world. It played in a high-fidelity proxy of the real one.
And because it wasn’t constrained by human biomechanics, Ace could explore strategies no player would attempt. One example: returning a topspin serve with a counter-rotation shot that induces a sudden lateral skid on bounce—something human wrists can’t generate consistently. The robot doesn’t care. Its joints apply force in vectors humans can’t replicate. That’s not cheating. It’s exploiting the rules of physics in ways biology can’t match.
Why Table Tennis Matters More Than Chess
Chess was AI’s first big win. But beating a grandmaster in a quiet room with perfect information is trivial compared to reading a spinning ball mid-flight, adjusting your stance, and executing a precise motor action—all before your brain finishes processing the visual input. Table tennis sits at the intersection of perception, prediction, and action under hard time constraints. It’s not just fast. It’s unpredictable. A serve can have multiple spin components. The ball deforms on impact. The table vibrates. Lighting changes. Human players adapt using intuition honed over years. Ace adapts using real-time data and simulation-trained reflexes.
“The sport presents technical challenges due to the speed and variability of the ball, including complex spin and changing trajectories, which require rapid sensing and coordinated movement in tight time constraints,” Dürr said. That’s an understatement. In robotics, this is known as the “tight loop” problem: how to close the cycle from sensing to action faster than external events evolve. Table tennis compresses that loop into milliseconds. If AI can master it here, the implications stretch far beyond sports.
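The tight-loop idea can be sketched as a control tick with a hard deadline. The 50 ms budget is the article’s figure; the stage functions are hypothetical stand-ins, not Ace’s actual pipeline:

```python
import time

# Minimal sketch of the "tight loop" problem: sense, predict, and act
# must all complete within a hard deadline. The 50 ms budget comes from
# the article; the stage functions below are hypothetical stand-ins.
DEADLINE_S = 0.050  # perception-to-action budget

def sense():
    return {"ball_pos": (0.3, 1.1, 0.2)}        # stand-in for camera fusion

def predict(observation):
    return {"intercept": (0.5, 0.0, 0.3)}       # stand-in for trajectory model

def act(plan):
    pass                                        # stand-in for joint commands

def control_tick():
    start = time.monotonic()
    act(predict(sense()))
    elapsed = time.monotonic() - start
    if elapsed > DEADLINE_S:
        raise RuntimeError(f"Missed deadline: {elapsed * 1000:.1f} ms")
    return elapsed

control_tick()  # trivial stand-ins fit easily inside the 50 ms budget
```

The hard part in a real system is not the loop structure but making each stage fast enough that the deadline check never fires while the ball is still in the air.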
What This Means For You
If you’re building real-time control systems—autonomous vehicles, surgical robots, industrial automation—Ace’s architecture offers a blueprint. The combination of high-speed perception, simulation-based training, and minimal mechanical overhead suggests a new path forward. You don’t need humanoid complexity to achieve human-level (or better) performance. You need precise sensing, accurate simulation, and tight integration between AI and actuation.
For developers, the message is clear: simulation is no longer just a testing ground. It’s a training environment where AI can develop behaviors that outperform human-derived strategies. If your system relies on imitation learning, ask yourself: are you limiting it by teaching it to copy us, instead of letting it discover better ways?
So what happens when this kind of system moves beyond the table? When the same architecture starts handling warehouse logistics, emergency response, or precision manufacturing? We’re not just seeing a robot that plays table tennis. We’re seeing the first real proof that AI can operate in the physical world with superhuman speed and accuracy—without mimicking humans at all.
Sources: AI News, Reuters