Image to 3D

26 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

Zero-1-to-3: Zero-shot One Image to 3D Object

cvlab-columbia/zero123 ICCV 2023

We introduce Zero-1-to-3, a framework for changing the camera viewpoint of an object given just a single RGB image.

One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization

no code yet • NeurIPS 2023

Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world.

Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors

guochengqian/magic123 30 Jun 2023

We present Magic123, a two-stage coarse-to-fine approach for generating high-quality, textured 3D meshes from a single unposed in-the-wild image using both 2D and 3D priors.

Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation

ipl-uw/ZeDO-Release 7 Jul 2023

Learning-based methods have dominated the 3D human pose estimation (HPE) tasks with significantly better performance in most benchmarks than traditional optimization-based methods.

IPDreamer: Appearance-Controllable 3D Object Generation with Image Prompts

zengbohan0217/ipdreamer 9 Oct 2023

Recent advances in 3D generation have been remarkable, with methods such as DreamFusion leveraging large-scale text-to-image diffusion-based models to supervise 3D generation.

LRM: Large Reconstruction Model for Single Image to 3D

3dtopia/openlrm 8 Nov 2023

We propose the first Large Reconstruction Model (LRM) that predicts the 3D model of an object from a single input image within just 5 seconds.

Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable 2D Repainting

junwuzhang19/repaint123 20 Dec 2023

The core idea is to combine the powerful image generation capability of the 2D diffusion model and the texture alignment ability of the repainting strategy for generating high-quality multi-view images with consistency.

HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D

byeongjun-park/HarmonyView 26 Dec 2023

This work introduces HarmonyView, a simple yet effective diffusion sampling technique adept at decomposing two intricate aspects in single-image 3D generation: consistency and diversity.

Envision3D: One Image to 3D with Anchor Views Interpolation

pku-yuangroup/envision3d 13 Mar 2024

To address this issue, we propose a novel cascade diffusion framework that decomposes the challenging dense-view generation task into two tractable stages: anchor view generation and anchor view interpolation.

Isotropic3D: Image-to-3D Generation Based on a Single CLIP Embedding

pkunliu/isotropic3d 15 Mar 2024

As a result, from a single image CLIP embedding, Isotropic3D can generate multi-view mutually consistent images and a 3D model with more symmetrical and neat content, well-proportioned geometry, richly colored texture, and less distortion than existing image-to-3D methods, while still largely preserving similarity to the reference image.