Object Reconstruction
78 papers with code • 0 benchmarks • 2 datasets
Most implemented papers
3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction
Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2).
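The core idea is that a recurrent unit maintains a per-voxel hidden state that is refined as each new view arrives. A minimal sketch of that update, with illustrative scalar gates standing in for the learned 3D-convolutional gates of the actual model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_voxel_update(hidden, features, w_update=0.5, w_reset=0.5):
    """One recurrent step in the spirit of 3D-R2N2's 3D convolutional GRU:
    each voxel's hidden state is blended with the current view's encoded
    features via update/reset gates. Gate weights here are fixed scalars
    for illustration; the real model learns convolutional gate filters."""
    new_hidden = []
    for h, f in zip(hidden, features):
        z = sigmoid(w_update * (h + f))   # update gate
        r = sigmoid(w_reset * (h + f))    # reset gate
        candidate = math.tanh(f + r * h)  # candidate state
        new_hidden.append((1 - z) * h + z * candidate)
    return new_hidden

# Aggregate two views into one hidden state (a flattened 2x2x2 voxel grid).
hidden = [0.0] * 8
views = [[1.0] * 8, [0.5] * 8]  # hypothetical per-view encoded features
for feats in views:
    hidden = gru_voxel_update(hidden, feats)
```

Because the hidden state persists across steps, the network can reconstruct from a single view or incrementally refine as more views are fed in.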
A Point Set Generation Network for 3D Object Reconstruction from a Single Image
Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image.
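A conditional shape sampler maps an image encoding plus random noise to a point set, so drawing different noise yields different plausible shapes. A toy sketch under that assumption (the "decoder" here is a fixed arithmetic rule, not a learned network):

```python
import math
import random

def sample_point_cloud(image_feature, n_points=16, rng=None):
    """Toy conditional shape sampler: combines a conditioning scalar
    (stand-in for an encoded input image) with Gaussian noise to produce
    one of many plausible point clouds. Hypothetical decoder rule: points
    lie near a circle whose radius is perturbed by the noise."""
    rng = rng or random.Random()
    points = []
    for i in range(n_points):
        noise = rng.gauss(0.0, 0.1)
        angle = 2 * math.pi * i / n_points
        r = image_feature + noise
        points.append((r * math.cos(angle), r * math.sin(angle), noise))
    return points

# Three draws conditioned on the same input give three distinct point sets.
clouds = [sample_point_cloud(1.0, rng=random.Random(seed)) for seed in range(3)]
```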
3D Object Reconstruction from Hand-Object Interactions
Recent advances have enabled 3D object reconstruction using a single off-the-shelf RGB-D camera.
Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction
Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones.
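The "direct analogy" is that a 3D convolution slides a small filter over a voxel grid exactly as a 2D convolution slides over pixels. A naive valid-mode version over nested lists, for illustration (real networks stack many learned filters of this form):

```python
def conv3d(volume, kernel):
    """Naive valid-mode 3D convolution over a voxel grid (nested lists):
    the filter slides along depth, height, and width, summing elementwise
    products -- the direct 3D analogue of a 2D convolution."""
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(D - kd + 1):
        plane = []
        for y in range(H - kh + 1):
            row = []
            for x in range(W - kw + 1):
                s = 0.0
                for dz in range(kd):
                    for dy in range(kh):
                        for dx in range(kw):
                            s += volume[z + dz][y + dy][x + dx] * kernel[dz][dy][dx]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

# A 3x3x3 volume of ones with a 2x2x2 averaging kernel (weights 1/8)
# yields a 2x2x2 output of ones.
vol = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
ker = [[[0.125] * 2 for _ in range(2)] for _ in range(2)]
res = conv3d(vol, ker)
```

The cubic growth of this loop nest is why volumetric methods struggle to scale, which motivates point-cloud outputs like the paper's.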
Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation
We consider the problem of scaling deep generative shape models to high-resolution.
Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images
A multi-scale context-aware fusion module is then introduced to adaptively select high-quality reconstructions for different parts from all coarse 3D volumes to obtain a fused 3D volume.
Grasping Field: Learning Implicit Representations for Human Grasps
Specifically, our generative model is able to synthesize high-quality human grasps, given only a 3D object point cloud.
Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision
We demonstrate the ability of the model to generate a 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects; and (3) testing on novel object classes.
3D Object Reconstruction from a Single Depth View with Adversarial Learning
In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks.
Dense 3D Object Reconstruction from a Single Depth View
Unlike existing work, which typically requires multiple views of the same object or class labels to recover full 3D geometry, the proposed 3D-RecGAN++ takes only the voxel-grid representation of a single depth view of the object as input and generates the complete 3D occupancy grid at a high resolution of 256^3, recovering the occluded/missing regions.
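The input encoding can be pictured as back-projecting each valid depth pixel into a coarse occupancy volume. A simplified orthographic sketch (a stand-in for the actual voxelization used by depth-based reconstruction networks such as 3D-RecGAN++; grid size and depth range are illustrative):

```python
def depth_to_occupancy(depth, grid=4, max_depth=1.0):
    """Convert a depth map (2D list, values in (0, max_depth]; 0 = hole)
    into a grid x grid x grid occupancy volume: each valid pixel marks
    the voxel its orthographically back-projected surface point falls in.
    A perspective camera model would use intrinsics here instead."""
    occ = [[[0 for _ in range(grid)] for _ in range(grid)] for _ in range(grid)]
    H, W = len(depth), len(depth[0])
    for v in range(H):
        for u in range(W):
            d = depth[v][u]
            if d <= 0:
                continue  # missing measurement
            x = min(int(u / W * grid), grid - 1)
            y = min(int(v / H * grid), grid - 1)
            z = min(int(d / max_depth * grid), grid - 1)
            occ[z][y][x] = 1
    return occ

depth = [[0.5, 0.0], [0.25, 0.9]]  # 2x2 depth view, 0.0 = hole
vol = depth_to_occupancy(depth, grid=4)
```

The reconstruction network's job is then to fill in the occluded voxels this partial grid leaves empty.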