Thousands of researchers from around the globe will be gathering—virtually—this week for the IEEE International Conference on Robotics and Automation.
As a flagship conference on all things robotics, ICRA has become a renowned forum since its inception in 1984. Among this year's honorees is Dieter Fox, NVIDIA's senior director of robotics research and head of the NVIDIA Robotics Research Lab in Seattle, as well as a professor at the University of Washington Paul G. Allen School of Computer Science & Engineering and head of the UW Robotics and State Estimation Lab. At the NVIDIA lab, Fox leads more than 20 researchers and interns, fostering collaboration with the neighboring UW.
He’s receiving the RAS Pioneer Award “for pioneering contributions to probabilistic state estimation, RGB-D perception, machine learning in robotics, and bridging academic and industrial robotics research.”
“Being recognized with this award by my research colleagues and the IEEE society is an incredible honor,” Fox said. “I’m very grateful for the amazing collaborators and students I had the chance to work with during my career. I also appreciate that IEEE sees the importance of connecting academic and industrial research — I believe that bridging these areas allows us to make faster progress on the problems we really care about.”
Fox will also give a talk at the conference, where researchers from NVIDIA Research will present a total of 19 papers investigating a variety of topics in robotics.
Here’s a preview of some of the show-stopping NVIDIA research papers that were accepted at ICRA:
Robotics Work a Finalist for Best Paper Awards
“6-DOF Grasping for Target-Driven Object Manipulation in Clutter” is a finalist for both the Best Paper Award in Robot Manipulation and the Best Student Paper Award.
The paper delves into the challenging robotics problem of grasping in cluttered environments, which is a necessity in most real-world scenes, said Adithya Murali, one of the lead researchers and a graduate student at the Robotics Institute at Carnegie Mellon University. Much current research considers only planar grasping, in which a robot grasps objects from directly above rather than approaching them from arbitrary directions in six degrees of freedom.
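To make the distinction concrete, here is a minimal sketch (illustrative only, not from the paper) contrasting the two parameterizations: a planar grasp needs just three numbers, while a full 6-DOF grasp is a rigid transform that can express side or angled approaches.

```python
import numpy as np

def planar_grasp(x, y, theta):
    """A top-down ("planar") grasp has only 3 degrees of freedom:
    position on the table plane plus rotation about the vertical axis."""
    return np.array([x, y, theta])

def six_dof_grasp(position, rotation):
    """A full 6-DOF grasp is a rigid transform: 3-D position plus 3-D
    orientation, letting the gripper approach from any direction."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

# A sideways approach that a top-down planar grasp cannot express:
# the gripper axis is rotated 90 degrees about x (values are made up).
Rx = np.array([[1, 0,  0],
               [0, 0, -1],
               [0, 1,  0]])
T = six_dof_grasp(np.array([0.4, 0.0, 0.1]), Rx)
```

In clutter, that extra freedom is what lets a gripper reach around neighboring objects instead of being restricted to whatever is accessible from above.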
Arsalan Mousavian, another lead researcher on the paper and a senior research scientist at the NVIDIA Robotics Research Lab, explained that they performed this research in simulation. “We weren’t bound by any physical robot, which is time-consuming and very expensive,” he said.
Mousavian and his colleagues trained their algorithms on NVIDIA V100 Tensor Core GPUs, and then tested on NVIDIA TITAN GPUs. For this particular paper, the training data consisted of 750,000 simulated robot-object interactions, generated in less than half a day, and the models were trained in a week. Once trained, the robot was able to robustly manipulate objects in the real world.
Replanning for Success
NVIDIA Research also considered how robots could plan to accomplish a wide variety of tasks in challenging environments, such as grasping an object that isn’t visible, in a paper called “Online Replanning in Belief Space for Partially Observable Task and Motion Problems.”
The approach makes a variety of tasks possible. Caelan Garrett, graduate student at MIT and a lead researcher on the paper, explained, “Our work is quite general in that we deal with tasks that involve not only picking and placing things in the environment, but also pouring things, cooking, trying to open doors and drawers.”
Garrett and his colleagues created an open-source algorithm, SS-Replan, that allows the robot to incorporate observations when making decisions, which it can adjust based on new observations it makes while trying to accomplish its goal.
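The core idea — maintaining a belief over hidden state, updating it with each observation, and replanning toward the most likely hypothesis — can be sketched in a toy form. The example below is a hypothetical illustration of that loop (a robot searching drawers for an unseen object), not the SS-Replan algorithm itself; all function names and probabilities are invented.

```python
import random

random.seed(0)  # make the toy sensor's noise reproducible

def observe(true_location, noise=0.1):
    """Hypothetical noisy sensor: occasionally reports the wrong drawer."""
    drawers = ["top", "middle", "bottom"]
    if random.random() < noise:
        return random.choice(drawers)
    return true_location

def update_belief(belief, observation, hit=0.9, miss=0.05):
    """Bayesian update of a discrete belief over where the object is."""
    posterior = {drawer: prior * (hit if drawer == observation else miss)
                 for drawer, prior in belief.items()}
    total = sum(posterior.values())
    return {drawer: p / total for drawer, p in posterior.items()}

def replan(belief):
    """Replan toward the currently most likely drawer."""
    return max(belief, key=belief.get)

# Uniform initial belief; the object is actually in the middle drawer.
belief = {"top": 1 / 3, "middle": 1 / 3, "bottom": 1 / 3}
for _ in range(5):
    belief = update_belief(belief, observe("middle"))
target = replan(belief)
```

After a few observations the belief concentrates on the correct drawer and the planner commits to it — the same observe/update/replan cycle, at vastly greater scale, is what lets the robot act under partial observability.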
They tested their work in NVIDIA Isaac Sim, a simulation environment used to develop, test and evaluate virtual robots, and on a real robot.
DexPilot: A Teleoperated Robotic Hand-Arm System
In another paper, NVIDIA researchers confronted the problem that current robotics algorithms don’t yet allow a robot to autonomously complete precise tasks such as pulling a tea bag out of a drawer, removing a dollar bill from a wallet or unscrewing the lid off a jar.
In “DexPilot: Depth-Based Teleoperation of Dexterous Robotic Hand-Arm System,” NVIDIA researchers present a system in which a human can remotely operate a robotic system. DexPilot observes the human hand using cameras, and then uses neural networks to relay the motion to the robotic hand.
Whereas other systems require expensive equipment such as motion-capture systems, gloves and headsets, DexPilot achieves teleoperation through a combination of deep learning and optimization.
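The retargeting step — mapping the tracked human hand onto the robot hand — can be illustrated with a deliberately simplified sketch. The fingertip coordinates and uniform-scale mapping below are invented for illustration; DexPilot's actual pipeline uses neural networks for hand tracking and an optimization that respects the robot hand's kinematics.

```python
import numpy as np

# Hypothetical fingertip positions of the tracked human hand, in meters,
# relative to the wrist (illustrative numbers, not from DexPilot).
human_fingertips = np.array([
    [0.08,  0.02, 0.00],   # thumb
    [0.10,  0.00, 0.00],   # index
    [0.10, -0.02, 0.00],   # middle
    [0.09, -0.04, 0.00],   # ring
])

def retarget(fingertips, scale=1.2):
    """Toy kinematic retargeting: map human fingertip positions into the
    robot hand's workspace by a uniform scale. A real system instead
    solves an optimization subject to the robot's joint limits."""
    return scale * fingertips

robot_targets = retarget(human_fingertips)
```

The resulting targets would then drive the robot hand's controllers, closing the loop from camera observation to robot motion.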
Once the data was collected, the system took 15 hours to train on a single GPU, according to NVIDIA researchers Ankur Handa and Karl Van Wyk, two of the authors of the paper. They and their colleagues used the NVIDIA TITAN GPU for their research.