Robots now see the world with an ease that was once confined to science fiction. They can recognize objects, navigate cluttered spaces and sort thousands of parcels an hour. But ask a robot to touch something gently, safely or meaningfully, and its limitations quickly emerge.
As a soft robotics researcher working on artificial skin and sensing bodies, I have found that trying to give robots a sense of touch forces us to appreciate how surprisingly sophisticated human touch really is.
My work began with a simple question: how could robots understand the world through their bodies? Develop touch sensors, cover the machine with them, process the signals and, at first glance, you should get something like touch.
Except that human touch is nothing like a simple pressure map. Our skin contains many different types of mechanoreceptors, each tuned to different stimuli such as vibration, stretch or texture. Our spatial resolution is remarkably fine and, importantly, touch is active: we constantly press, slide and adjust, transforming raw sensation into perception through dynamic interaction.
Engineers can sometimes mimic a fingertip-scale version of this, but reproducing it in an entire soft body, and giving a robot the ability to interpret this rich sensory stream, is a challenge of an entirely different order.
Another insight also immediately emerges from the work on artificial skin: much of what we call “intelligence” does not reside solely in the brain. Biology offers astonishing examples – most famously, the octopus.

Octopuses distribute most of their neurons throughout their limbs. Studies of their motor behavior show that an octopus arm can generate and adapt movement patterns locally in response to sensory feedback, with only limited input from the brain.
Their soft, compliant bodies contribute directly to how they function in the world. And this type of distributed, embodied intelligence, where behavior emerges from the interaction of body, materials and environment, is becoming increasingly influential in robotics.
Touch is also the first sense a human being develops in the womb. Developmental neuroscience shows that tactile sensitivity emerges at around eight weeks of gestation, then spreads across the body during the second trimester. Long before vision or hearing are reliably functioning, the fetus explores its surroundings through touch. This early exploration is thought to help shape how infants begin to develop an understanding of weight, resistance and support – the basic physics of the world.
This developmental order also matters for robotics. For decades, robots have relied heavily on cameras and lidar (a sensing method that uses pulses of light to measure distance) while avoiding physical contact. But we can’t expect machines to achieve human-level capabilities in the physical world if they rarely experience it through touch.
Simulation can teach a robot useful behaviors, but without actual physical exploration, it risks merely deploying intelligence rather than developing it. To learn like humans, robots need bodies that feel.
The intelligent body
One approach my group is exploring is to give robots a degree of “local intelligence” in their sensing bodies. Humans benefit from the compliance of soft tissues: the skin deforms in ways that improve grip, increase friction and filter sensory signals before they reach the brain. It is a form of intelligence embedded directly in anatomy.
Research in soft robotics and morphological computation argues that the body can relieve some of the brain’s workload. By building robots with soft structures and low-level processing, so that they can adjust grip or posture based on tactile feedback without waiting for central commands, we hope to create machines that interact more safely and naturally with the physical world.
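To make the idea concrete, here is a minimal sketch of what such a local reflex loop could look like in code. It is purely illustrative, not the control software on our robots: the sensor model, gain and function names are all invented, and a simple proportional feedback step stands in for whatever low-level processing a real skin patch would run.

```python
import random

# Hypothetical illustration only - none of these names come from a real robot.
TARGET_PRESSURE = 0.6   # normalised contact pressure the skin patch tries to hold
GAIN = 0.25             # proportional gain of the local loop

def read_pressure(grip_force: float) -> float:
    """Stand-in sensor model: pressure rises with grip force, plus a little noise."""
    return min(1.0, 0.8 * grip_force + random.uniform(-0.02, 0.02))

def local_reflex_step(grip_force: float) -> float:
    """One control step run on the limb itself, with no central commands.

    If the object starts to slip (pressure drops), the grip tightens;
    if pressure is too high, the grip relaxes.
    """
    error = TARGET_PRESSURE - read_pressure(grip_force)
    return max(0.0, grip_force + GAIN * error)

grip = 0.1
for _ in range(30):   # the loop settles near the target pressure on its own
    grip = local_reflex_step(grip)
print(f"settled grip force: {grip:.2f}")
```

The point of the sketch is where the loop runs, not the arithmetic: because the correction happens locally, the central controller never has to be consulted about every slip or over-squeeze.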
Healthcare is one area where this capability could make a profound difference. My group recently developed a robotic patient simulator for training occupational therapists (OTs). Students often practice on each other, making it difficult to learn the fine touch skills involved in supporting someone safely. With real patients, trainees must balance functional and affective touch, respect personal boundaries, and recognize subtle signs of pain or discomfort. Research on social and emotional touch shows how important these signals are to human well-being.
To help trainees understand these interactions, our simulator, known as MONA, generates realistic behavioral responses. For example, when an OT applies pressure to a simulated pain point in the artificial skin, the robot responds verbally and with a small physical “twitch” of the body to mimic the discomfort.
Similarly, if the trainee tries to move a limb further than the simulated patient can tolerate, the robot stiffens or resists, giving a realistic signal that the movement should stop. By capturing tactile interaction through artificial skin, our simulator provides feedback never before available in OT training.
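To give a sense of how such behaviour can be triggered, here is a purely illustrative sketch; it is not MONA’s software, and the pressure threshold, joint limit and response names are all invented. It shows the basic shape of the logic: tactile readings from the artificial skin are compared against a simulated patient’s tolerances, and crossing a tolerance triggers a verbal and physical response.

```python
# Illustrative sketch only - not MONA's actual software. The thresholds and
# response names below are invented to show the shape of the logic.

PAIN_THRESHOLD = 0.7     # normalised pressure at which the simulated patient "hurts"
RANGE_LIMIT_DEG = 45.0   # joint angle the simulated patient can tolerate

def respond_to_touch(pressure: float, at_pain_point: bool) -> list[str]:
    """Map a tactile reading from the artificial skin to behavioural responses."""
    if at_pain_point and pressure > PAIN_THRESHOLD:
        return ["say('That hurts a little.')", "twitch_body()"]
    return []

def respond_to_motion(joint_angle_deg: float) -> list[str]:
    """Resist movement beyond what the simulated patient can tolerate."""
    if joint_angle_deg > RANGE_LIMIT_DEG:
        return ["stiffen_joint()", "say('Please stop, that is too far.')"]
    return []

# A trainee presses too hard on a simulated pain point...
print(respond_to_touch(pressure=0.85, at_pain_point=True))
# ...or moves the arm past the tolerable range.
print(respond_to_motion(joint_angle_deg=60.0))
```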
Robots that care
In the future, robots with safe, sensing bodies could help relieve growing pressures in social care. As the population ages, many families suddenly find themselves lifting, moving or supporting relatives without formal training. “Care robots” could help with this, potentially meaning a family member can be cared for at home for longer.
About the author
Perla Maiolino is Associate Professor of Engineering Science and a member of the Oxford Robotics Institute at the University of Oxford. This article is republished from The Conversation under a Creative Commons license. Read the original article.
Surprisingly, progress in developing these types of robots has been much slower than early expectations – even in Japan, which introduced some of the first care robot prototypes. One of the most advanced examples is AIREC, a humanoid robot developed as part of the Japanese government’s Moonshot programme to assist with nursing and elder care tasks. This multidisciplinary programme, launched in 2019, aims for “ambitious R&D based on bold ideas” to “build a society in which humans can live free from the limitations of body, mind, space and time by 2050”.
However, translating research prototypes into regulated products remains difficult worldwide. High development costs, strict safety requirements and the absence of a clear commercial market have all slowed progress. But although the technical and regulatory hurdles are substantial, they are steadily being addressed.
Robots that can safely share close physical space with people need to sense and control how they touch anything that comes in contact with their bodies. This whole-body sensitivity is what will differentiate the next generation of soft robots from today’s rigid machines.
We are still a long way from robots that can handle these intimate tasks independently. But the creation of touch-enabled machines is already reshaping our understanding of touch itself. Every step toward robotic tactile intelligence highlights the extraordinary sophistication of our own bodies – and the deep connection between sensation, movement and what we call intelligence.
This article was commissioned in conjunction with the Professors Programme, part of Prototypes for Humanity, a global initiative that showcases and accelerates academic innovation to solve social and environmental challenges. The Conversation is a media partner of Prototypes for Humanity 2025.