MIT Develops Innovative Camera-Based Touch Sensor for Robots

In a significant advancement for the field of robotics, researchers at the Massachusetts Institute of Technology (MIT) have engineered a camera-based touch sensor that mimics the shape and functionality of a human finger. The development, called GelSight Svelte, is a long, curved sensor that provides high-resolution tactile sensing over a large area, a leap forward in robotic tactile sensing.

Traditional robotic hands employ small, flat tactile sensors in their fingertips to gather information about objects they grasp. However, this design limits the type of grasping and manipulation tasks these robots can perform. The GelSight Svelte sensor, on the other hand, is designed to mimic the sensory receptors in human skin that run along the entire length of each finger. This allows for a more human-like grasp of objects, not limited to a simple pinching motion.

The researchers, led by mechanical engineering graduate student Alan (Jialiang) Zhao, constructed the finger-shaped sensor with a flexible backbone. By monitoring how the backbone bends when the finger touches an object, they can estimate the force being exerted on the sensor. This design has enabled the creation of a robotic hand that can grasp heavy objects much like a human would, using all three fingers' entire sensing areas.
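The article does not detail how the force estimate is computed from the backbone's bending. As a hypothetical illustration of the general idea, the sketch below fits a simple linear least-squares model mapping synthetic deformation features (e.g., curvature readings along the finger) to an applied force; the actual GelSight Svelte system learns its estimate from camera images, and all values here are invented for demonstration.

```python
import numpy as np

# Hypothetical sketch: map backbone-deformation features to a force estimate.
# The real sensor learns this mapping from images; here we use synthetic
# calibration data and a linear least-squares fit as a stand-in.

rng = np.random.default_rng(0)

# Synthetic calibration data: 200 samples of 3 deformation features each.
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_weights = np.array([2.0, -1.0, 0.5])               # assumed ground truth
forces = X @ true_weights + rng.normal(0, 0.01, 200)    # "measured" forces (N)

# Fit: find weights minimizing ||X w - forces||^2.
weights, *_ = np.linalg.lstsq(X, forces, rcond=None)

def estimate_force(features: np.ndarray) -> float:
    """Estimate contact force from a deformation-feature vector."""
    return float(features @ weights)

print(estimate_force(np.array([0.5, 0.5, 0.5])))  # close to 0.75 N here
```

In practice a calibration pass like this (press with known forces, record deformations, fit a model) is how such sensors are typically trained, though the model class may be far richer than a linear fit.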

The challenge in developing such a sensor lies in overcoming the limitations of the cameras used in tactile sensors, which are restricted by size, lens focal distance, and viewing angle. Zhao and senior author Edward Adelson tackled this issue using two mirrors that reflect and refract light toward a single camera located at the base of the finger. The GelSight Svelte incorporates one flat, angled mirror and one long, curved mirror that redirect light rays so the camera can view along the finger's entire length.

To perfect each mirror's shape, angle, and curvature, the researchers used software to simulate the reflection and refraction of light. The mirrors, camera, and two sets of LEDs for illumination are attached to a plastic backbone and encased in a flexible skin made from silicone gel. The camera views the back of the skin from the inside; based on how the skin deforms, the system can determine where contact occurs and measure the geometry of the object's contact surface.

Moreover, the red and green LED arrays give a sense of how deeply the gel is being pressed down when an object is grasped. The researchers use this color saturation information to reconstruct a 3D depth image of the object being grasped. The sensor’s plastic backbone also provides proprioceptive information, such as the twisting torques applied to the finger. Machine learning is used to estimate how much force is being applied to the sensor based on these backbone deformations.
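The article does not give the exact mapping from color saturation to depth. One common way to realize this kind of photometric depth recovery is a one-time calibration: press the gel to known depths, record the resulting saturation, then invert the table by interpolation at run time. The sketch below illustrates that idea with invented calibration values; it is not the GelSight Svelte's actual reconstruction pipeline.

```python
import numpy as np

# Hypothetical calibration table: channel saturation measured at known
# indentation depths (values invented for illustration).
cal_saturation = np.array([0.10, 0.25, 0.45, 0.70, 0.90])
cal_depth_mm   = np.array([0.0,  0.5,  1.0,  1.5,  2.0])

def depth_from_saturation(sat):
    """Map per-pixel color saturation to indentation depth (mm) by
    linear interpolation over the calibration table."""
    return np.interp(sat, cal_saturation, cal_depth_mm)

# Applied to a whole image of saturations, this yields a depth map,
# i.e., the contact-surface geometry of the grasped object.
sat_image = np.array([[0.10, 0.45],
                      [0.70, 0.90]])
print(depth_from_saturation(sat_image))  # → [[0.  1. ], [1.5 2. ]]
```

A real system would calibrate per pixel and per color channel, since illumination from the red and green LED arrays varies along the finger, but the table-lookup structure is the same.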

The design was tested by pressing objects such as a screw against different locations on the sensor to check image clarity and see how well it could determine the object's shape. The researchers also built a GelSight Svelte hand that can perform multiple grasps, including a pinch grasp, a lateral pinch grasp, and a power grasp, giving a robotic hand more versatility and letting it hold heavier objects more stably.

Looking ahead, the researchers plan to enhance the GelSight Svelte so the sensor can bend at the joints, more like a human finger. This work marks a significant step forward for robotics, opening up new possibilities for the manipulation tasks robots could perform. The research will be presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).