The paper uses deep neural networks to advance the state of the art in dexterous grasping of previously unseen objects from a single camera view.

Paper

You can access the manuscript via this link. To the best of our knowledge, this is the first learned generative-evaluative architecture for dexterous grasping. The generative model is a contact-based method that learns from demonstration. The evaluative model is a deep neural network (based on ResNet-50 or VGG-16) that ranks the generated grasps.
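To illustrate the ranking idea, here is a minimal sketch of an evaluative network: a ResNet-50 backbone with a single-output head that scores each candidate grasp, after which candidates are sorted by predicted success. The input representation, head, and training details of the actual model may differ; the image size and use of torchvision are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models


class GraspEvaluator(nn.Module):
    """Sketch of an evaluative model: ResNet-50 backbone, scalar success score.

    Hypothetical stand-in for the paper's network; inputs and head may differ.
    """

    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Replace the classification head with a single success logit.
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, images):  # images: (B, 3, 224, 224)
        return torch.sigmoid(self.backbone(images)).squeeze(-1)


# Rank a batch of candidate grasps by predicted success probability.
evaluator = GraspEvaluator().eval()
with torch.no_grad():
    scores = evaluator(torch.rand(16, 3, 224, 224))  # 16 rendered candidates
ranking = scores.argsort(descending=True)            # best grasp first
```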

Code

DDG is built on a carefully tuned dexterous grasp simulator that uses the MuJoCo physics engine and currently supports the DLR-II hand. You can re-run simulated grasps and record videos using this tool. Click here to learn how to install the codebase and use it to re-run grasps. The ability to generate novel grasps on new scenes is coming soon.
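For orientation, the sketch below shows the general shape of replaying a grasp in MuJoCo and recording a video with the official Python bindings. The scene file, trajectory file, and output name are placeholders; the DDG tool ships its own scene descriptions and replay scripts, so follow the install guide above for the real workflow.

```python
# Minimal replay sketch using the MuJoCo Python bindings (not the DDG scripts).
import imageio
import mujoco
import numpy as np

model = mujoco.MjModel.from_xml_path("hand_and_object.xml")  # placeholder scene
data = mujoco.MjData(model)
renderer = mujoco.Renderer(model, height=480, width=640)

controls = np.load("grasp_trajectory.npy")  # placeholder: (T, nu) joint commands
frames = []
for ctrl in controls:
    data.ctrl[:] = ctrl
    mujoco.mj_step(model, data)
    renderer.update_scene(data)
    frames.append(renderer.render())

imageio.mimsave("grasp_replay.mp4", frames, fps=60)  # requires imageio-ffmpeg
```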

Dataset

The data-hungry evaluative models are trained on simulated grasps. We have released the two datasets used to train them, containing over 2M grasps in total. Click here to download the GM1 dataset (1M grasps), and here to download the GM2 dataset (1M grasps). Refer to the Readme files for details on how to use the datasets.
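As a rough idea of how such data can be consumed for training, here is a hypothetical loading sketch. The actual GM1/GM2 file layout is documented in the Readme files; the HDF5 file name and the "images"/"success" fields below are assumptions, not the released format.

```python
# Hypothetical dataset wrapper; field names and file layout are assumed.
import h5py
import torch
from torch.utils.data import Dataset


class GraspDataset(Dataset):
    def __init__(self, path="gm1.h5"):            # placeholder file name
        self.f = h5py.File(path, "r")
        self.images = self.f["images"]            # assumed per-grasp images
        self.labels = self.f["success"]           # assumed 0/1 grasp outcomes

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        image = torch.from_numpy(self.images[i]).float()
        label = torch.tensor(float(self.labels[i]))
        return image, label
```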

Generative Model

DDG uses a state-of-the-art grasp generator to sample plausible grasps given the point cloud of an object. Both generators used in this paper come from the Golem Robot Control Framework.
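The overall generate-then-rank pipeline can be summarised as follows. This is an illustrative interface only: the real generators live in the Golem Robot Control Framework (not in Python), and `sample_grasps` and `render_grasp` are placeholders for the generator and the rendering step that produces the evaluator's input.

```python
# Illustrative generative-evaluative pipeline; the real generator is in Golem.
import numpy as np
import torch


def sample_grasps(point_cloud: np.ndarray, n: int = 100) -> list:
    """Placeholder for the contact-based generative model: returns n candidate
    hand configurations (wrist pose + finger joints) for the object."""
    raise NotImplementedError


def render_grasp(point_cloud: np.ndarray, grasp) -> torch.Tensor:
    """Placeholder: render the hand at the candidate grasp over the scene,
    producing the image tensor the evaluative network expects."""
    raise NotImplementedError


def select_best_grasp(point_cloud, evaluator, n_candidates=100):
    candidates = sample_grasps(point_cloud, n_candidates)
    images = torch.stack([render_grasp(point_cloud, g) for g in candidates])
    with torch.no_grad():
        scores = evaluator(images)                # evaluative model ranks grasps
    return candidates[int(scores.argmax())]       # execute the top-ranked grasp
```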

PacMan Project

This work was supported by the PacMan project, where we built a dishwasher-loading humanoid robot. Click here for details.

Contact

Have any questions or comments? Please don't hesitate to contact us via e-mail.