The first bit of news out of the Automate conference this year arrives by way of Alphabet X spinout Intrinsic. The firm announced at the Chicago event on Monday that it is incorporating a number of Nvidia offerings into its Flowstate robotic app platform.
That includes Isaac Manipulator, a collection of foundation models designed to create workflows for robot arms. The offering launched at GTC back in March, with some of the biggest names in industrial automation already on board. The list includes Yaskawa, Solomon, PickNik Robotics, Ready Robotics, Franka Robotics and Universal Robots.
The collaboration is focused specifically on grasping (grabbing and picking up objects), one of the key capabilities for both manufacturing and fulfillment automation. The systems are trained on large datasets, with the goal of executing tasks that work across different hardware (i.e. hardware agnosticism) and different objects.
That is to say, picking methods can be transferred to new settings, rather than every system having to be trained for every scenario. Once we humans figure out how to pick something up, we can adapt that action to different objects in different settings. For the most part, robots can't do that, at least not yet.
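To make the idea concrete, here is a rough sketch in Python of what a hardware-agnostic grasping interface could look like. The names (GripperSpec, GraspPose, plan_grasp) are purely illustrative and are not part of Intrinsic's or Nvidia's actual APIs; the point is that a single planner call gets reused across grippers and objects instead of hard-coding logic for each scenario.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these names are illustrative, not Intrinsic's or
# Nvidia's actual APIs.

@dataclass
class GripperSpec:
    """Describes a gripper so the same skill can target different hardware."""
    name: str
    max_opening_m: float   # maximum jaw opening, in meters
    max_force_n: float     # maximum grip force, in newtons

@dataclass
class GraspPose:
    """A grasp candidate: position, orientation and commanded opening width."""
    position: tuple        # (x, y, z) in the robot base frame, meters
    quaternion: tuple      # (x, y, z, w) orientation
    width_m: float         # commanded gripper opening

def plan_grasp(object_point_cloud, gripper: GripperSpec) -> GraspPose:
    """Stand-in for a foundation-model grasp planner: the same call is meant
    to work for any gripper/object pair instead of per-scenario code."""
    # A real model would infer this from the point cloud; here we return a
    # fixed top-down grasp purely to illustrate the interface.
    return GraspPose(position=(0.4, 0.0, 0.12),
                     quaternion=(0.0, 1.0, 0.0, 0.0),
                     width_m=min(0.04, gripper.max_opening_m))

# The same planner is reused across different grippers without retraining.
for spec in (GripperSpec("parallel_jaw", 0.08, 40.0),
             GripperSpec("wide_jaw", 0.14, 60.0)):
    print(spec.name, plan_grasp(object_point_cloud=None, gripper=spec))
```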
“In the future, developers will be able to use ready-made universal grasping skills like these to greatly accelerate their programming processes,” Intrinsic founder and CEO Wendy Tan White said in a post. “For the broader industry, this development shows how foundation models could have a profound impact, including making today’s robot-programming challenges easier to manage at scale, creating previously infeasible applications, reducing development costs, and increasing flexibility for end users.”
Early Flowstate testing occurred in Isaac Sim — Nvidia’s robotic simulation platform. Intrinsic customer Trumpf Machine Tools has been working with a prototype of the system.
“This universal grasping skill, trained with 100% synthetic data in Isaac Sim, can be used to build sophisticated solutions that can perform adaptive and versatile object grasping tasks in sim and real,” Tan White says of Trumpf’s work with the platform. “Instead of hard-coding specific grippers to grasp specific objects in a certain way, efficient code for a particular gripper and object is auto-generated to complete the task using the foundation model.”
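For readers less familiar with sim-only training, the snippet below shows the general shape of a domain-randomized synthetic data pipeline: randomize the object, its pose, lighting and physics, attempt a grasp in simulation, and record the outcome. It is a generic illustration of the approach, not Intrinsic's or Nvidia's actual pipeline, and the labeling step is a stand-in for what a physics simulator like Isaac Sim would compute.

```python
import random

# Illustrative sketch of generating synthetic grasp-training samples with
# domain randomization. Nothing here reflects Intrinsic's or Nvidia's pipeline.

def random_scene():
    """Randomize object, pose, lighting and friction so the learned skill
    does not overfit to any single simulated configuration."""
    return {
        "object_id": random.randint(0, 9_999),        # which mesh to drop in
        "object_pose": [random.uniform(-0.3, 0.3),    # x (m)
                        random.uniform(-0.3, 0.3),    # y (m)
                        random.uniform(0.0, 6.28)],   # yaw (rad)
        "light_intensity": random.uniform(0.5, 1.5),  # relative brightness
        "friction": random.uniform(0.3, 1.0),         # surface friction
    }

def label_grasp(scene):
    """Stand-in for the simulator: try a grasp and record success or failure.
    A real pipeline would run physics in simulation to get this label."""
    return {"scene": scene, "grasp_success": random.random() > 0.4}

# Build a purely synthetic dataset; no real-world images or trials involved.
dataset = [label_grasp(random_scene()) for _ in range(1_000)]
print(f"{sum(d['grasp_success'] for d in dataset)} successful grasps out of {len(dataset)}")
```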
Intrinsic is also working with fellow Alphabet-owned DeepMind to crack pose estimation and path planning, two other key aspects of automation. For the former, the system was trained on more than 130,000 objects. The company says the systems are able to determine the orientation of objects in “a few seconds,” an important part of being able to pick them up.
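As a point of reference for what pose estimation involves, the snippet below shows a textbook baseline for estimating an object's orientation from a segmented point cloud using principal component analysis. This is not the DeepMind/Intrinsic model, just the simplest version of the problem the learned system is solving far more robustly across 130,000-plus objects.

```python
import numpy as np

# A minimal, generic orientation estimate for a segmented object point cloud,
# using principal component analysis. A textbook baseline, not the model
# described in the article.

def estimate_orientation(points: np.ndarray) -> np.ndarray:
    """Return a 3x3 rotation matrix whose columns are the object's principal
    axes, computed from an (N, 3) array of points in the camera frame."""
    centered = points - points.mean(axis=0)
    # The right singular vectors of the centered cloud are its dominant axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt.T
    # Enforce a right-handed frame so the result is a proper rotation.
    if np.linalg.det(axes) < 0:
        axes[:, -1] *= -1
    return axes

# Example: a synthetic box-like cloud elongated along one axis.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([0.10, 0.03, 0.01])
print(estimate_orientation(cloud))
```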
Another key piece of Intrinsic’s work with DeepMind is the ability to operate multiple robots in tandem. “Our teams have tested this 100% ML-generated solution to seamlessly orchestrate four separate robots working on a scaled-down car welding application simulation,” says Tan White. “The motion plans and trajectories for each robot are auto-generated, collision free, and surprisingly efficient – performing ~25% better than some traditional methods we’ve tested.”
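For intuition about what collision-free coordination of several arms entails, here is a toy, decoupled baseline: each robot gets its own end-effector path, and lower-priority robots are delayed until every pair keeps a safety margin at every timestep. The ML-generated planner described above is doing something far more capable; this is only a minimal illustration of the constraint it has to satisfy, with all names and numbers chosen for the example.

```python
import numpy as np

# Toy priority-based scheduling for multiple arms; illustrative only.
SAFETY_MARGIN_M = 0.15

def min_separation(traj_a, traj_b):
    """Smallest distance between two (T, 3) end-effector paths, per timestep."""
    n = min(len(traj_a), len(traj_b))
    return float(np.linalg.norm(traj_a[:n] - traj_b[:n], axis=1).min())

def delayed(traj, delay):
    """Hold the start pose for `delay` timesteps before executing the path."""
    if delay == 0:
        return traj
    return np.vstack([np.repeat(traj[:1], delay, axis=0), traj])

def schedule(trajectories):
    """Greedy priority scheme: robot 0 starts immediately; each later robot is
    delayed until it keeps the safety margin from all already-scheduled robots."""
    scheduled, delays = [], []
    for traj in trajectories:
        delay = 0
        while any(min_separation(delayed(traj, delay), other) < SAFETY_MARGIN_M
                  for other in scheduled):
            delay += 1
        scheduled.append(delayed(traj, delay))
        delays.append(delay)
    return delays

# Two arms whose end effectors would meet in the middle if started together.
t = np.linspace(0.0, 1.0, 20)
arm_a = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)            # sweeps along x
arm_b = np.stack([np.full_like(t, 0.5), t - 0.5, np.zeros_like(t)], axis=1)  # sweeps along y
print(schedule([arm_a, arm_b]))  # second arm is delayed by a few timesteps
```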
The team is also working on systems that use two arms at once, a setup more in line with the emerging world of humanoid robots. It's something we're going to see a whole lot more of over the next couple of years, humanoid or not. Moving from one arm to two opens up a wide range of additional applications for these systems.
Source: TechCrunch (https://techcrunch.com/2024/05/06/alphabet-owned-intrinsic-incorporates-nvidia-tech-into-robotics-platform/)