Object Reconstruction

78 papers with code • 0 benchmarks • 2 datasets


Most implemented papers

3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction

chrischoy/3D-R2N2 2 Apr 2016

Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2).

A Point Set Generation Network for 3D Object Reconstruction from a Single Image

fanhqme/PointSetGeneration CVPR 2017

Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image.
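Point-set prediction networks of this kind are commonly trained with a set-to-set loss such as the Chamfer distance, since the output points have no fixed ordering. A minimal NumPy sketch of the symmetric Chamfer distance (a generic formulation, not code from the paper's repository):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    For each point in one set, take the squared distance to its nearest
    neighbour in the other set, then average over both directions.
    """
    # Pairwise squared distances, shape (N, M), via broadcasting.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical point sets score zero; the measure grows smoothly as the predicted set drifts from the target, which is what makes it usable as a training loss.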

3D Object Reconstruction from Hand-Object Interactions

dimtziwnas/InHandScanningICCV15_Reconstruction ICCV 2015

Recent advances have enabled 3D object reconstruction approaches using a single off-the-shelf RGB-D camera.

Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction

chenhsuanlin/3D-point-cloud-generation 21 Jun 2017

Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones.

Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images

hzxie/Pix2Vox 22 Jun 2020

A multi-scale context-aware fusion module is then introduced to adaptively select high-quality reconstructions for different parts from all coarse 3D volumes to obtain a fused 3D volume.
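The fusion step can be pictured as a per-voxel convex combination of the coarse volumes, weighted by quality scores. The sketch below is a simplified stand-in: in Pix2Vox++ the scores come from a learned context-aware branch, whereas here they are simply supplied by the caller.

```python
import numpy as np

def fuse_volumes(volumes, scores):
    """Fuse K coarse occupancy volumes into one.

    volumes: (K, D, D, D) coarse occupancy probabilities.
    scores:  (K, D, D, D) per-voxel quality scores (hypothetical inputs;
             the paper predicts these with a learned scoring module).
    Per voxel, the scores are softmax-normalised over the K candidates
    and used as weights for a convex combination of the volumes.
    """
    w = np.exp(scores - scores.max(axis=0, keepdims=True))  # stable softmax
    w /= w.sum(axis=0, keepdims=True)
    return (w * volumes).sum(axis=0)
```

With equal scores this degenerates to a plain average; with a strongly dominant score, the fused voxel follows the best candidate volume, which is the "adaptively select high-quality parts" behaviour the excerpt describes.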

Grasping Field: Learning Implicit Representations for Human Grasps

korrawe/grasping_field_demo 10 Aug 2020

Specifically, our generative model is able to synthesize high-quality human grasps, given only a 3D object point cloud.

Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision

xcyan/nips16_PTN NeurIPS 2016

We demonstrate the ability of the model to generate a 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects; and (3) testing on novel object classes.

3D Object Reconstruction from a Single Depth View with Adversarial Learning

Yang7879/3D-RecGAN 26 Aug 2017

In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks.

Dense 3D Object Reconstruction from a Single Depth View

Yang7879/3D-RecGAN-extended 1 Feb 2018

Unlike existing work, which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ takes only the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid at a high resolution of 256^3 by recovering the occluded/missing regions.
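Turning a single depth view into the voxel-grid input such methods consume is a standard preprocessing step: back-project each depth pixel through assumed pinhole intrinsics to a 3D point, then bin the points into an occupancy grid. A minimal NumPy sketch (the intrinsics, grid size, and centring scheme here are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def depth_to_voxels(depth, fx, fy, cx, cy, grid=32, extent=1.0):
    """Back-project a depth map into a boolean voxel occupancy grid.

    depth: (H, W) metric depths; zeros mark missing pixels.
    fx, fy, cx, cy: pinhole camera intrinsics (assumed known).
    Returns a (grid, grid, grid) occupancy grid covering the cube
    [-extent, extent]^3 around the point cloud's centroid.
    """
    v, u = np.nonzero(depth > 0)            # valid pixel coordinates
    z = depth[v, u]
    x = (u - cx) * z / fx                   # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    pts -= pts.mean(axis=0)                 # centre the cloud in the cube
    idx = ((pts + extent) / (2 * extent) * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)         # clamp outliers to the boundary
    occ = np.zeros((grid, grid, grid), dtype=bool)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ
```

The resulting partial occupancy grid is exactly the kind of single-view input that a completion network then extends into the full, occlusion-filled volume.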