
DAIMON Robotics Adds Touch to Robot Hands

DAIMON Robotics released a new dataset in April 2026 to train robot hands with tactile feedback. The move targets physical AI’s biggest gap: touch.

425 million. That’s the number of data samples in DAIMON Robotics’ new dataset, Daimon-InfiTouch, released in April 2026 to train AI models for robotic hands capable of fine tactile perception. The Hong Kong-based startup isn’t selling hardware. It’s betting that the bottleneck in dexterous robotics isn’t motors or grippers—it’s the absence of reliable touch feedback at scale.

Key Takeaways

  • DAIMON Robotics released 425 million tactile interaction samples in April 2026, the largest public dataset of its kind for robot hand sensing.
  • The dataset, Daimon-InfiTouch, is designed to train AI models that interpret pressure, slip, and texture—critical for manipulating fragile or irregular objects.
  • The company operates entirely in the physical AI space, focusing on sensor fusion and data infrastructure, not robotics hardware.
  • Training models on this scale could reduce reliance on expensive force-torque sensors and enable low-cost, high-dexterity robotic hands.
  • The dataset is hosted on ModelScope, Alibaba’s open model platform, signaling a strategic alignment with China’s AI ecosystem.

A Dataset, Not a Robot

When most robotics startups unveil progress, they roll out a new arm, a faster gripper, or a prototype that walks. DAIMON Robotics didn’t. In April 2026, the company dropped a 1.2-terabyte archive of sensor logs, video frames, and motor commands, collected over 18 months from instrumented robotic hands handling more than 3,000 real-world objects.

This isn’t supplemental data. It’s the product. The company’s entire pitch rests on the idea that the next leap in manipulation won’t come from better actuators, but from AI that understands physical contact. And to train that AI, you need data—massive, diverse, and labeled with ground-truth force measurements.

The dataset, named Daimon-InfiTouch, includes everything from the compression of a foam sponge to the moment a glass vial slips from a robotic fingertip. Each event is timestamped, cross-referenced with joint angles, and paired with RGB and thermal imaging. The archive even logs micro-vibrations: sub-millimeter tremors that signal slipping before it happens.
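
To make that structure concrete, here is a minimal sketch of what a single record could look like in Python. Every field name below is an assumption for illustration; the article describes the modalities (timestamps, joint angles, RGB and thermal frames, micro-vibration traces) but not the actual schema.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical layout for one Daimon-InfiTouch interaction record.
# Field names are illustrative; the real schema is defined by the dataset.
@dataclass
class TouchEvent:
    timestamp_ns: int              # event time in nanoseconds
    joint_angles: np.ndarray       # per-joint angles, radians
    taxel_pressure: np.ndarray     # tactile array readings
    micro_vibration: np.ndarray    # high-rate vibration trace
    rgb_frame_id: int              # index into the RGB video stream
    thermal_frame_id: int          # index into the thermal stream
    label: str                     # e.g. "stable_grasp" or "slip_onset"

def is_pre_slip(event: TouchEvent, energy_threshold: float = 0.02) -> bool:
    """Flag records whose vibration energy hints at an imminent slip."""
    return float(np.mean(event.micro_vibration ** 2)) > energy_threshold
```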

The Missing Layer in Physical AI

Robotics has made strides in vision and path planning. But touch? Still primitive. Most industrial robots operate blind to contact, relying on pre-programmed force limits or external sensors. When humans fold laundry or unplug a cable, they draw on subconscious tactile feedback, adjusting grip, sensing tension, feeling texture. Robots can’t.

That’s because tactile data is noisy, high-dimensional, and notoriously hard to generalize. A sensor calibrated for one gripper fails on another. A model trained on cardboard boxes stumbles with silicone seals. DAIMON’s approach sidesteps this by generating data at a scale that forces generalization.

How the Data Was Collected

  • Data gathered from 12 identical robotic hands equipped with capacitive tactile sensors across all phalanges.
  • Each hand manipulated objects in 24-hour cycles, monitored by motion-capture cameras and force plates.
  • Objects included rigid (metal bolts, plastic cases), deformable (sponges, cables), and fragile items (eggs, lightbulbs).
  • Sensors sampled at 1 kHz, generating 8.4 million data points per hour per hand.

The volume isn’t just impressive—it’s strategic. By flooding the training environment with variation, DAIMON hopes to teach models to extract universal features of touch, not memorize specific interactions.
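
A quick back-of-envelope script shows how those collection figures compound. The per-hand rate and hand count come from the list above; the bytes-per-reading figure is an assumption.

```python
# Back-of-envelope sizing for the collection rig described above.
# The 8.4M points/hour/hand and 12-hand fleet come from the article;
# BYTES_PER_POINT is an assumption (float32 readings).

POINTS_PER_HOUR_PER_HAND = 8_400_000
HANDS = 12
BYTES_PER_POINT = 4

points_per_day = POINTS_PER_HOUR_PER_HAND * 24 * HANDS
raw_bytes_per_day = points_per_day * BYTES_PER_POINT

print(f"fleet points per day: {points_per_day:,}")                # 2,419,200,000
print(f"raw tactile data per day: {raw_bytes_per_day / 1e9:.1f} GB")  # ~9.7 GB
```

Over 18 months that raw tactile stream alone runs to several terabytes, which suggests the released 1.2-terabyte archive is a curated subset rather than a raw dump.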

Why Touch Changes Everything

Consider a surgical robot suturing tissue. Vision alone can’t tell if the needle is catching too deep. A prosthetic hand picking up a child’s toy needs to know when to stop squeezing. Amazon’s warehouses still can’t automate shoe sorting because laces tangle, fabrics slip, and no two pairs are packed the same. These aren’t AI problems. They’re touch problems.

DAIMON’s dataset targets these edge cases. One sample shows a robotic finger slowly pressing into a latex glove while tracking the spread of surface strain. Another logs the exact millisecond a wet soap bar slips from a robotic grasp, along with the 14-millisecond warning signal embedded in high-frequency vibration decay.
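
The article doesn’t disclose how DAIMON flags these warning signals, but the general idea of reading slip out of vibration decay can be sketched with standard signal processing: high-pass the tactile trace and watch for a burst of band energy. All tuning values below are illustrative, not DAIMON’s.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def slip_onset_index(vibration: np.ndarray,
                     fs: float = 1_000.0,       # 1 kHz, matching the rig
                     cutoff_hz: float = 100.0,  # assumed band of interest
                     win: int = 16,
                     threshold: float = 3.0) -> int | None:
    """Return the first sample where high-frequency energy spikes,
    or None if no slip-like burst is found. `threshold` is in multiples
    of the baseline window energy; all values here are illustrative."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=fs, output="sos")
    hf = sosfilt(sos, vibration)
    energy = np.convolve(hf ** 2, np.ones(win) / win, mode="same")
    baseline = np.median(energy[: 10 * win]) + 1e-12  # assume a quiet start
    spikes = np.flatnonzero(energy > threshold * baseline)
    return int(spikes[0]) if spikes.size else None
```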

This level of granularity could enable reactive control loops that don’t exist today. Instead of programming every possible response, the AI learns to feel instability and react. That’s not incremental. It’s a shift from scripted manipulation to adaptive handling.
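
As a rough sketch of such a loop, the controller below tightens grip in proportion to a slip score instead of following a fixed script. The sensor and actuator functions are stubs standing in for real hardware drivers; the gains are illustrative, not DAIMON’s controller.

```python
import time

# Stand-in interfaces; replace with your hand's actual driver.
def read_slip_score() -> float:
    """Return a 0..1 slip likelihood from a tactile model (stub)."""
    return 0.0

def set_grip_force(newtons: float) -> None:
    """Command the gripper's closing force in newtons (stub)."""
    pass

def reactive_grip_loop(steps: int = 1_000,
                       base_force: float = 2.0,
                       max_force: float = 8.0,
                       gain: float = 6.0,
                       period_s: float = 0.001) -> None:
    """Tighten grip as slip likelihood rises, then relax toward baseline.
    Gains and limits are illustrative, not tuned for any real hand."""
    force = base_force
    for _ in range(steps):
        slip = read_slip_score()
        target = min(max_force, base_force + gain * slip)
        force += 0.5 * (target - force)   # smooth approach to target
        set_grip_force(force)
        time.sleep(period_s)              # roughly a 1 kHz control tick

reactive_grip_loop()
```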

The Business of Invisible Infrastructure

DAIMON isn’t building a robot, and it isn’t selling sensors. Its revenue model? Licensing the dataset and offering fine-tuning services for enterprise clients. Early partners include a surgical robotics firm in Shenzhen and an automotive parts assembler in Nagoya, both struggling with delicate component handling.

It’s a quiet but shrewd play. The robotics industry spends billions on hardware. DAIMON is selling the software layer beneath it. And unlike hardware, datasets don’t depreciate. They compound in value as more developers build on them.

By hosting Daimon-InfiTouch on ModelScope, DAIMON aligns itself with Alibaba’s AI stack—a move that ensures visibility within China’s research and industrial AI pipelines. It also avoids the U.S. export controls that have stymied other robotics ventures.
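
For readers who want to pull the data, ModelScope’s Python SDK exposes datasets through its MsDataset class. The repository identifier below is a guess for illustration; check the actual listing on modelscope.cn before relying on it.

```python
# pip install modelscope
from modelscope.msdatasets import MsDataset

# NOTE: the namespace/name pair below is hypothetical; look up the real
# dataset id on modelscope.cn before using it.
ds = MsDataset.load("Daimon-InfiTouch", namespace="daimon", split="train")

for sample in ds:
    print(sample)   # inspect one record's keys
    break
```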

Competing Visions: Who Else Is Building the Sense of Touch?

DAIMON isn’t alone in targeting tactile intelligence, though its data-first strategy sets it apart. Other players are taking different paths. Shadow Robot Company in London has spent years refining its anthropomorphic hand, embedding hundreds of tactile sensors into each digit. Their approach is hardware-centric: build the most human-like hand possible, then collect data from it. But their datasets remain proprietary and limited in scale—fewer than 10 million labeled interactions as of 2025.

In contrast, researchers at UC Berkeley’s AUTOLAB have open-sourced the Dexterity-1M dataset, focusing on visual-tactile fusion using the GelSight sensor. While innovative, it covers just 1,500 object interactions and lacks the temporal depth of DAIMON’s logs. Google’s Robotics division experimented with self-supervised tactile learning in 2024 using a custom gripper, but the project was deprioritized after failing to generalize beyond lab conditions.

Startups like SynTouch in California sell high-fidelity tactile sensor arrays, but at a cost: a single fingertip sensor runs over $12,000. That pricing locks out all but well-funded labs. DAIMON’s model bypasses this by decoupling sensor hardware from AI training. Instead of selling expensive touch sensors, they’re offering the output—pre-processed, generalized tactile understanding—that smaller teams can simulate or approximate.

The divergence reflects a broader split in robotics. On one side: companies betting on bespoke hardware and vertical integration. On the other: those like DAIMON, treating perception as a software problem solvable with enough data. The winner may not be the one with the most sensitive fingertip, but the one with the largest behavioral library.

The Bigger Picture: Why Physical AI Is Accelerating Now

The timing of Daimon-InfiTouch isn’t accidental. Three converging trends make tactile AI viable today in a way it wasn’t even three years ago. First, compute power has caught up. Training on 1.2 terabytes of multi-modal sensory data would have cost over $200,000 in cloud compute in 2022. In 2026, it’s under $30,000—thanks to specialized AI chips from companies like Cambricon and Biren, now widely available in Chinese data centers.

Second, the rise of simulation-to-real (sim2real) transfer means models trained on synthetic data can now work in the physical world. DAIMON didn’t just collect real-world data—they pre-trained models in NVIDIA’s Isaac Lab using photorealistic tactile simulations before fine-tuning on actual sensor logs. This hybrid pipeline cut training time by 60% and improved generalization across object types.
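
In outline, that hybrid pipeline is ordinary two-stage training: pretrain a network on simulated tactile batches, then fine-tune the same weights on real logs at a lower learning rate. The PyTorch sketch below uses placeholder data loaders and mirrors the recipe as described, not DAIMON’s code or Isaac Lab’s API.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def make_loader(n: int, channels: int = 32, length: int = 256) -> DataLoader:
    """Placeholder loader of (tactile window, slip label) pairs."""
    x = torch.randn(n, channels, length)
    y = torch.randint(0, 2, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

model = nn.Sequential(                     # small 1-D conv classifier
    nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 2),
)
loss_fn = nn.CrossEntropyLoss()

def train(loader: DataLoader, lr: float, epochs: int) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

train(make_loader(10_000), lr=1e-3, epochs=5)  # stage 1: synthetic pretrain
train(make_loader(1_000), lr=1e-4, epochs=3)   # stage 2: real-data fine-tune
```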

Third, industry demand has hit an inflection point. Foxconn announced in early 2026 that it would automate 30% of final assembly tasks in iPhone production by 2028—many involving delicate flex cables and glass components. Meanwhile, the EU’s new Medical Device Regulation mandates haptic feedback for Class III surgical robots by 2027. These aren’t distant goals. They’re urgent technical challenges with budgets attached.

That pressure is shifting investment. In 2025, physical AI startups raised $870 million globally, up from $310 million in 2023, according to PitchBook. DAIMON’s seed round—$18 million led by Sequoia China and GGV Capital—was oversubscribed. Investors aren’t just betting on touch. They’re backing the idea that data infrastructure will define the next wave of robotics, just as maps and GPS defined the mobile era.

What This Means For You

If you’re building robotic control systems, this dataset is a game-changer. You can now train tactile perception models without investing six figures in sensor arrays and data collection rigs. The sample rate, labeling quality, and diversity beat anything previously public—including work from CMU and MIT’s open datasets.

For AI developers, it’s a rare case of real-world physical data at internet scale. The implications go beyond robotics: haptics for VR, prosthetics, even industrial IoT systems that detect mechanical wear through vibration signatures. The data’s already normalized and timestamped. You’ll spend less time cleaning and more time iterating.
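
Because the logs arrive normalized and timestamped, turning them into training examples can be as simple as slicing fixed windows around labeled events. A minimal sketch, with window sizes as assumptions:

```python
import numpy as np

def window_events(signal: np.ndarray,
                  event_indices: np.ndarray,
                  pre: int = 128,
                  post: int = 128) -> np.ndarray:
    """Cut (pre + post)-sample windows centred on each labeled event.
    `signal` is (channels, time); window sizes are illustrative."""
    windows = []
    for idx in event_indices:
        lo, hi = idx - pre, idx + post
        if lo >= 0 and hi <= signal.shape[1]:
            windows.append(signal[:, lo:hi])
    if not windows:
        return np.empty((0, signal.shape[0], pre + post))
    return np.stack(windows)

# Example: a 32-channel tactile log with events at arbitrary timestamps.
log = np.random.randn(32, 100_000)
events = np.array([5_000, 42_000, 77_000])
batch = window_events(log, events)        # shape: (3, 32, 256)
```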

But here’s the catch: the dataset assumes a specific sensor configuration. If your hardware doesn’t match DAIMON’s finger layout or sampling rate, transfer learning will be messy. This isn’t a universal solution—it’s a benchmark, and possibly a lock-in play.
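
If the mismatch is only in sampling rate, the least painful first step is to resample your streams to the dataset’s 1 kHz before attempting transfer; remapping a different taxel layout is harder and typically needs a learned adapter. A resampling sketch, assuming a hypothetical 500 Hz sensor:

```python
import numpy as np
from scipy.signal import resample_poly

# Assumption: your sensor runs at 500 Hz; the dataset is logged at 1 kHz.
NATIVE_HZ, TARGET_HZ = 500, 1_000

def to_dataset_rate(signal: np.ndarray) -> np.ndarray:
    """Polyphase-resample a (channels, time) array from 500 Hz to 1 kHz."""
    return resample_poly(signal, up=TARGET_HZ, down=NATIVE_HZ, axis=-1)

native = np.random.randn(16, 5_000)       # 10 s of 16-channel data at 500 Hz
matched = to_dataset_rate(native)         # now (16, 10_000) at 1 kHz
```

Rate matching won’t fix a layout mismatch; in that case, treat the dataset as pretraining material and expect to fine-tune on data from your own hand.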

One thing is undeniable: we’re entering an era where the best robotics companies might not make robots at all. They’ll make the data that teaches robots how to behave.

The future of manipulation isn’t in stronger motors—it’s in smarter interpretation of touch.

What happens when the most valuable part of a robot isn’t its body, but the experience it’s been trained on?

Sources: IEEE Spectrum, original report
