UC San Diego Engineers Develop Touch-Based Robotic Hand

In a groundbreaking development, a team of engineers from the University of California San Diego has created a robotic hand that can manipulate objects solely through touch, without the need for visual input. This innovative technology, which allows the robotic hand to rotate a diverse range of objects smoothly, is inspired by the human ability to handle items without necessarily seeing them. The robotic hand can handle everything from small toys to cans, fruits, and vegetables without causing damage. This achievement could significantly aid the development of robots capable of manipulating objects in dark environments.

The research team presented their findings at the 2023 Robotics: Science and Systems Conference. The team's approach involved attaching 16 touch sensors, each costing approximately $12, to the palm and fingers of a four-fingered robotic hand. The sensors' sole function is to detect whether or not an object is in contact with them. This approach is unique because it relies on numerous low-cost, low-resolution touch sensors that use simple binary signals (touch or no touch) to facilitate robotic in-hand rotation.
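
In effect, the entire sensing interface reduces to a vector of 16 ones and zeros. The sketch below shows one way such a contact vector might be polled; the `read_pin` helper and the pin mapping are hypothetical stand-ins for whatever driver the real hardware uses, not the team's actual code.

```python
# Minimal sketch of collecting the 16 binary touch readings.
# NUM_SENSORS matches the article; read_pin is a hypothetical driver hook.

NUM_SENSORS = 16  # sensors spread across the palm and four fingers

def read_contact_vector(read_pin):
    """Return 16 binary values: 1 = touch, 0 = no touch."""
    return tuple(1 if read_pin(i) else 0 for i in range(NUM_SENSORS))

# Usage with a fake pin reader standing in for real hardware I/O:
fake_pins = [0, 1, 1, 0] * 4
print(read_contact_vector(lambda i: fake_pins[i]))
# -> (0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0)
```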

The use of many sensors spread across a large area of the robotic hand sets this approach apart from others that rely on a few high-cost, high-resolution touch sensors attached to a small area of the hand, mostly the fingertips. Xiaolong Wang, a professor of electrical and computer engineering at UC San Diego who led the study, explained that these approaches have several limitations. Having fewer sensors on the hand reduces the chance that they will come into contact with the object, limiting the system's sensing ability. High-resolution touch sensors that provide texture information are also difficult to simulate accurately and expensive, which makes them hard to use in real-world applications. Moreover, many of these approaches still rely heavily on vision.

The team’s solution is straightforward and effective. “We show that we don’t need details about an object’s texture to do this task. We just need simple binary signals of whether the sensors have touched the object or not, and these are much easier to simulate and transfer to the real world,” said Wang.
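
Wang's point about simulation can be made concrete: a binary contact signal can be approximated in a physics engine simply by thresholding the distance between a sensor site and the object's surface, with no texture model at all. The snippet below is a minimal sketch of that idea; the 1 mm threshold and the function names are assumptions for illustration.

```python
# Sketch: simulating a binary touch signal by distance thresholding.
# The 1 mm contact tolerance is an assumed value, not from the study.

CONTACT_THRESHOLD_M = 0.001  # assumed contact tolerance, in meters

def simulated_touch(sensor_to_surface_distance_m: float) -> int:
    """1 if the object is effectively touching the sensor, else 0."""
    return 1 if sensor_to_surface_distance_m <= CONTACT_THRESHOLD_M else 0

print(simulated_touch(0.0004))  # 1: in contact
print(simulated_touch(0.0200))  # 0: clear of the sensor
```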

The large coverage of binary touch sensors gives the robotic hand enough information about an object's 3D structure and orientation to rotate it without vision. The system was trained in simulation, with a virtual robotic hand rotating a variety of objects, including ones with irregular shapes. At each moment during rotation, the system determines which sensors on the hand are being touched by the object, along with the current positions of the hand's joints and their previous actions. From this information, it tells the robotic hand which joints to move next, and to what positions.
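
In control-loop terms, each step stacks the binary contact vector, the current joint positions, and the previous action into one observation, then maps it to the next joint targets. The sketch below illustrates one step of that loop; the linear "policy" is a placeholder (the team's real controller is a model learned in simulation), and the joint count and all names here are assumptions.

```python
# One step of the sense-decide-act loop described above: binary contacts,
# current joint positions, and the previous action go in; the next joint
# targets come out. The linear policy is a stand-in for a learned model.

import numpy as np

NUM_SENSORS = 16  # binary touch sensors (per the article)
NUM_JOINTS = 16   # assumed joint count for the four-fingered hand

def observe(contacts, joint_positions, prev_action):
    """Stack the three inputs the system uses into one observation vector."""
    return np.concatenate([contacts, joint_positions, prev_action])

def policy(obs, weights):
    """Placeholder policy: map the observation to bounded joint targets."""
    return np.tanh(weights @ obs)

rng = np.random.default_rng(0)
weights = 0.1 * rng.normal(size=(NUM_JOINTS, NUM_SENSORS + 2 * NUM_JOINTS))
contacts = rng.integers(0, 2, NUM_SENSORS).astype(float)  # 1 = touch
joints = np.zeros(NUM_JOINTS)
prev_action = np.zeros(NUM_JOINTS)

next_action = policy(observe(contacts, joints, prev_action), weights)
print(next_action.shape)  # (16,): one target per joint
```

In the real system this step would repeat at every control tick, with the commanded targets becoming the next step's previous action.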

The system was then tested on a real-life robotic hand with objects it had not encountered before. The robotic hand successfully rotated various objects without stalling or losing its grip. The objects tested ranged from a tomato and pepper to a can of peanut butter and a toy rubber duck. Objects with more complex shapes took longer to rotate.

Wang and his team are now focusing on extending their approach to more complex manipulation tasks. They are currently developing techniques to enable robotic hands to catch, throw, and juggle. “In-hand manipulation is a very common skill that we humans have, but it is very complex for robots to master,” said Wang. “If we can give robots this skill, that will open the door to the kinds of tasks they can perform.”

This development marks a significant step forward in robotic manipulation and tactile sensing, bringing us one step closer to creating robots that can function effectively in dark environments.