Grasp Generation
15 papers with code • 0 benchmarks • 3 datasets
Most implemented papers
Grasping Field: Learning Implicit Representations for Human Grasps
Our generative model synthesizes high-quality human grasps given only a 3D object point cloud.
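The core idea lends itself to a compact sketch: an implicit network conditioned on an object point cloud predicts, for any query point, signed distances to the hand and object surfaces. The architecture and layer sizes below are illustrative assumptions, not the authors' released model.

```python
import torch
import torch.nn as nn

class GraspingFieldSketch(nn.Module):
    """Illustrative sketch of an implicit grasp representation: for each query
    3D point, predict signed distances to the hand and object surfaces,
    conditioned on an object point cloud. Layer sizes and the conditioning
    scheme are assumptions, not the paper's architecture."""

    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared per-point MLP + max-pool as a simple point-cloud encoder.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        # Decoder maps (query point, global object feature) -> (sdf_hand, sdf_object).
        self.decoder = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2))

    def forward(self, object_points, query_points):
        # object_points: (B, N, 3); query_points: (B, M, 3)
        feat = self.point_mlp(object_points).max(dim=1).values        # (B, feat_dim)
        feat = feat.unsqueeze(1).expand(-1, query_points.shape[1], -1)
        return self.decoder(torch.cat([query_points, feat], dim=-1))  # (B, M, 2)

model = GraspingFieldSketch()
sdf = model(torch.randn(1, 1024, 3), torch.randn(1, 512, 3))  # -> (1, 512, 2)
```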
6-DOF GraspNet: Variational Grasp Generation for Object Manipulation
We evaluate our approach in simulation and real-world robot experiments.
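The sampling recipe behind variational grasp generation (draw latents from the prior, decode them into grasp poses conditioned on the object, then rank them with an evaluator) can be sketched as below; the toy networks and their dimensions are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the learned networks; real models condition on a
# point-cloud encoder. Shapes here are illustrative assumptions only.
latent_dim, cond_dim = 2, 128
decoder = nn.Linear(latent_dim + cond_dim, 7)   # -> translation (3) + quaternion (4)
evaluator = nn.Linear(7 + cond_dim, 1)          # -> grasp success score

def sample_and_rank_grasps(pc_feature, num_samples=64):
    """Draw latents from the unit-Gaussian prior, decode them into grasp
    poses conditioned on the object feature, and rank by evaluator score."""
    z = torch.randn(num_samples, latent_dim)
    cond = pc_feature.expand(num_samples, -1)
    grasps = decoder(torch.cat([z, cond], dim=-1))
    scores = evaluator(torch.cat([grasps, cond], dim=-1)).squeeze(-1)
    order = torch.argsort(scores, descending=True)
    return grasps[order], scores[order]

grasps, scores = sample_and_rank_grasps(torch.randn(1, cond_dim))
```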
GRAB: A Dataset of Whole-Body Human Grasping of Objects
Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time.
Diffusion-based Generation, Optimization, and Planning in 3D Scenes
SceneDiffuser provides a unified model for solving scene-conditioned generation, optimization, and planning.
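At its core this relies on standard denoising-diffusion sampling of pose vectors conditioned on a scene feature. The sketch below shows a plain DDPM reverse loop with a placeholder denoiser; it is an assumption for illustration, not the SceneDiffuser implementation.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Placeholder network: predicts the noise added to a 7-D pose vector,
    conditioned on a scene feature and the timestep (architecture assumed)."""
    def __init__(self, pose_dim=7, cond_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + cond_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, pose_dim))

    def forward(self, x, cond, t):
        t_feat = torch.full((x.shape[0], 1), float(t))
        return self.net(torch.cat([x, cond, t_feat], dim=-1))

@torch.no_grad()
def ddpm_sample(denoiser, scene_feat, pose_dim=7, steps=100):
    """Standard DDPM reverse process: start from Gaussian noise and
    iteratively denoise a pose vector conditioned on the scene."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(scene_feat.shape[0], pose_dim)
    for t in reversed(range(steps)):
        eps = denoiser(x, scene_feat, t)
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

poses = ddpm_sample(Denoiser(), torch.randn(4, 128))  # 4 sampled pose vectors
```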
Orientation Attentive Robotic Grasp Synthesis with Augmented Grasp Map Representation
Inherent morphological characteristics of objects may offer a wide range of plausible grasping orientations, which complicates the visual learning of robotic grasping.
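Grasp-map approaches predict pixel-wise quality, orientation (commonly encoded as the cosine and sine of twice the grasp angle), and width, then read off the best planar grasp. The decoding below follows that common convention; it is not the paper's exact augmented representation.

```python
import numpy as np

def decode_grasp_map(quality, cos2theta, sin2theta, width, stride=1.0):
    """Decode the best planar grasp from pixel-wise grasp maps.
    Returns the grasp centre in image coordinates, the gripper rotation,
    the opening width, and the quality score at that pixel."""
    v, u = np.unravel_index(np.argmax(quality), quality.shape)  # best pixel
    theta = 0.5 * np.arctan2(sin2theta[v, u], cos2theta[v, u])  # gripper rotation
    return (u * stride, v * stride), theta, width[v, u], quality[v, u]

H, W = 224, 224
ang = np.random.uniform(-np.pi / 2, np.pi / 2, (H, W))
center, theta, w, score = decode_grasp_map(np.random.rand(H, W),
                                           np.cos(2 * ang), np.sin(2 * ang),
                                           np.random.rand(H, W))
```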
Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes
Our novel grasp representation treats 3D points of the recorded point cloud as potential grasp contacts.
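Given a predicted contact point together with approach and baseline (finger-to-finger) directions and a grasp width, a full 6-DoF gripper pose can be assembled as in the sketch below; the frame convention and the finger-depth offset are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def grasp_pose_from_contact(contact, approach, baseline, width, finger_depth=0.02):
    """Build a 4x4 gripper pose from a predicted contact point plus approach
    and baseline (finger-to-finger) directions."""
    a = approach / np.linalg.norm(approach)
    b = baseline / np.linalg.norm(baseline)
    b = b - np.dot(b, a) * a            # make baseline orthogonal to approach
    b = b / np.linalg.norm(b)
    y = np.cross(a, b)                  # third axis completes a right-handed frame
    R = np.stack([b, y, a], axis=1)     # columns: x = baseline, y, z = approach
    center = contact + 0.5 * width * b - finger_depth * a
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = center
    return T

T = grasp_pose_from_contact(np.array([0.1, 0.0, 0.4]),   # contact point
                            np.array([0.0, 0.0, -1.0]),  # approach direction
                            np.array([1.0, 0.0, 0.0]),   # baseline direction
                            width=0.06)
```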
CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation
This work proposes a framework to learn task-relevant grasping for industrial objects without the need for time-consuming real-world data collection or manual annotation.
OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction
We first collect 1,800 common household objects and annotate their affordances to construct the first knowledge base: Oak.
PEGG-Net: Pixel-Wise Efficient Grasp Generation in Complex Scenes
Vision-based grasp estimation is an essential part of robotic manipulation tasks in the real world.
Keypoint-GraspNet: Keypoint-based 6-DoF Grasp Generation from the Monocular RGB-D input
Great success has been achieved in 6-DoF grasp learning from point cloud input, yet the computational cost due to the orderlessness of point sets remains a concern.
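Keypoint-based pipelines typically detect gripper keypoints in the image and recover the 6-DoF grasp pose with PnP. The sketch below uses OpenCV's solvePnP with an assumed canonical keypoint layout; a trained detector would supply the 2D keypoints.

```python
import numpy as np
import cv2

# Canonical gripper keypoints in the grasp frame (metres). The layout and
# coordinates are illustrative assumptions.
GRIPPER_KPTS = np.array([[ 0.00, 0.0, 0.00],   # wrist
                         [ 0.00, 0.0, 0.06],   # gripper tip centre
                         [ 0.04, 0.0, 0.05],   # left finger tip
                         [-0.04, 0.0, 0.05]],  # right finger tip
                        dtype=np.float64)

def grasp_pose_from_keypoints(image_kpts, camera_K):
    """Recover a 6-DoF grasp pose from detected 2D keypoints via PnP."""
    ok, rvec, tvec = cv2.solvePnP(GRIPPER_KPTS,
                                  np.asarray(image_kpts, dtype=np.float64),
                                  camera_K, None)
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> rotation matrix
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T, ok

K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
T, ok = grasp_pose_from_keypoints([[320, 200], [320, 280], [350, 270], [290, 270]], K)
```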