MIT researchers on Monday introduced an artificial intelligence system that can visualize an object from touch alone and predict how it feels from sight alone, a capability that could make it easier for robots to grasp and recognize objects.
To bridge this sensory gap in robots, researchers at the Massachusetts Institute of Technology (MIT) in the US have developed a predictive AI that allows robots to learn to see by touching and to feel by seeing.
The researchers used a simple web camera to record nearly 200 objects, including tools, household products, and fabrics, being touched more than 12,000 times. They broke those video clips into static frames and compiled “VisGel”, a dataset of more than 3 million paired visual and tactile images.
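The pairing step can be pictured as zipping two time-aligned streams, frame for frame. The sketch below is a toy illustration under that assumption; the class and function names are hypothetical, not the authors' pipeline.

```python
# Toy sketch of assembling a paired visual/tactile dataset from two
# synchronized frame streams. Names and data shapes are illustrative
# assumptions, not the VisGel tooling itself.
from dataclasses import dataclass

@dataclass
class PairedFrame:
    object_id: str
    visual: list      # stand-in for a webcam frame (e.g. an HxWx3 array)
    tactile: list     # stand-in for a tactile-sensor frame

def build_pairs(object_id, visual_frames, tactile_frames):
    """Zip time-aligned visual and tactile frames into paired samples."""
    if len(visual_frames) != len(tactile_frames):
        raise ValueError("streams must be time-aligned, frame for frame")
    return [PairedFrame(object_id, v, t)
            for v, t in zip(visual_frames, tactile_frames)]

# One short "touch" clip of a hypothetical object, three frames long.
pairs = build_pairs("mug_01", [[0.1], [0.2], [0.3]], [[0.9], [0.8], [0.7]])
print(len(pairs))  # 3
```

Repeating this over thousands of touch clips is what yields millions of paired images from only a few thousand recorded interactions.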
“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, a PhD student and lead author on a new paper about the system. “By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings,” Li said.
“Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects,” he added.
The MIT researchers combine the VisGel dataset with generative adversarial networks (GANs) to give robots more human-like physical senses: one network learns to translate a visual image into a predicted tactile signal (and vice versa), while a second network learns to tell real sensor pairs from generated ones.
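A minimal sketch of that adversarial setup, in plain NumPy: a linear "generator" maps a visual feature to a predicted tactile feature, and a "discriminator" scores visual/tactile pairs as real or generated. All sizes, weights, and names here are toy assumptions for illustration; the actual model is a deep image-to-image network, not a linear map.

```python
# Toy adversarial setup: generator translates visual -> tactile,
# discriminator scores (visual, tactile) pairs as real or generated.
# Random linear "networks" stand in for the real deep models.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4))          # generator: visual -> tactile
D = rng.normal(size=(8,))            # discriminator on a concatenated pair

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminate(visual, tactile):
    """Probability the (visual, tactile) pair is real."""
    return sigmoid(D @ np.concatenate([visual, tactile]))

visual = rng.normal(size=4)          # stand-in webcam frame feature
real_tactile = rng.normal(size=4)    # the matching tactile reading
fake_tactile = G @ visual            # generator's predicted tactile signal

# Discriminator loss: score real pairs high, generated pairs low.
d_loss = (-np.log(discriminate(visual, real_tactile))
          - np.log(1 - discriminate(visual, fake_tactile)))
# Generator loss: push generated pairs toward being scored "real".
g_loss = -np.log(discriminate(visual, fake_tactile))
print(d_loss > 0 and g_loss > 0)  # True
```

In training, the two losses are minimized in alternation, so the generator's tactile predictions gradually become indistinguishable from real sensor readings.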
Humans can infer how an object feels just by looking at it. A machine, by contrast, must first identify the region being touched and only then infer information about the object's shape and feel.
Reference images help the system encode details about the objects and the environment, eliminating the need for the robot to physically probe the scene first.
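The intuition behind the reference image can be shown with a simple difference test: comparing the current frame against an untouched reference frame highlights where contact occurs. The arrays and threshold below are toy stand-ins, not the paper's method in detail.

```python
# Hedged sketch of the reference-image idea: subtracting an untouched
# reference frame from the current frame localizes the contact region.
import numpy as np

reference = np.zeros((4, 4))           # scene with no contact
current = reference.copy()
current[1:3, 1:3] = 1.0                # a touch changes this region

touch_mask = np.abs(current - reference) > 0.5
rows, cols = np.nonzero(touch_mask)
print(rows.min(), rows.max(), cols.min(), cols.max())  # 1 2 1 2
```

Once the touched region is localized this way, the model only has to reason about shape and material within that region rather than over the whole scene.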