Imagine you want to carry a large, heavy box up a flight of stairs. You might spread your fingers and lift the box with both hands, then hold it on top of your forearms and balance it against your chest, using your whole body to manipulate the box. Humans intuitively adapt their grip and posture to such variations, letting the box rest against fingers, arms and torso as needed.
Robots, in contrast, typically struggle to plan such full-body manipulations. Whenever a contact is made or broken, the mathematical equations that describe the robot's motion change abruptly. As a result, the motion corresponding to each possible pattern of contacts must be computed separately, and the number of such patterns grows so quickly that planning becomes an intractable computing task.
An artificial intelligence method called "reinforcement learning" has been used to plan contact-rich manipulations, because its trial-and-error sampling effectively smooths over the abrupt changes in the dynamic equations caused by contact. However, this process still requires simulating enormous numbers of candidate trajectories.
Now, by smoothing only the contact-sensitive parts of the model equations, researchers at the Massachusetts Institute of Technology have shown how to obtain the smoothing benefits of reinforcement learning without computing large numbers of full trajectories.
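The core idea can be illustrated with a toy example. A rigid contact force is zero until two surfaces touch and then rises sharply, creating a kink that gradient-based planners cannot differentiate through. Replacing the kink with a softplus curve yields a force law that is smooth everywhere yet approaches the rigid model as a sharpness parameter grows. This is a minimal sketch of that general smoothing idea, not the researchers' actual model; the function names and the parameters `k` and `beta` are illustrative assumptions.

```python
import math

def hard_contact_force(d, k=1000.0):
    """Rigid contact: force only when penetrating (d < 0).

    d is the signed distance between surfaces; the max() creates a
    non-differentiable kink at d = 0, which is what makes planning hard.
    """
    return k * max(0.0, -d)

def smoothed_contact_force(d, k=1000.0, beta=100.0):
    """Softplus-smoothed contact: differentiable everywhere.

    A small "force at a distance" appears near d = 0, letting a planner
    feel the gradient of an approaching contact before surfaces touch.
    As beta grows, this converges to the rigid model.
    """
    return k * math.log1p(math.exp(-beta * d)) / beta

# Far from contact, the two models agree closely; at the contact
# boundary (d = 0), only the smoothed model has a useful gradient.
print(hard_contact_force(0.0), smoothed_contact_force(0.0))
```

In this sketch, a planner differentiating `smoothed_contact_force` gets informative gradients even when the robot is not yet touching the object, which is the kind of signal that sampling-based reinforcement learning otherwise has to recover through many trial rollouts.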
This work is funded in part by the U.S. National Science Foundation; a paper reporting the results appears in IEEE Transactions on Robotics.
While still in its early stages, this method could enable factories to use smaller, mobile robots that can manipulate objects with their entire arms or bodies. In addition, the technique could allow robots sent on space exploration missions to adapt to the environment quickly, using only an onboard computer.
The researchers designed a simplified model that captures the core robot-object interactions, then paired it with an algorithm that can rapidly and efficiently search over the decisions available to the robot. With this combination, planning time was cut to about a minute on a standard laptop.
The researchers tested their approach in simulations where robotic hands were given tasks like moving a pen, opening a door or picking up a plate. In each instance, the model-based approach achieved the same performance as other techniques, but in a fraction of the time. The investigators saw similar results when they tested their model in hardware on real robotic arms.
“In contrast to the remarkable language capabilities of AI chatbots, robots still fall far short of humans in routine physical tasks — like carrying a bulky box in their arms,” said Jordan Berg, a program director in NSF’s Directorate for Engineering. “The results reported in this paper are a big step toward closing that gap.”
In the future, the researchers plan to extend their technique to dynamic motions, such as throwing a ball with high spin.
Source: NSF — https://new.nsf.gov/news/ai-helps-robots-manipulate-objects-their-whole