Text to 3D
55 papers with code • 1 benchmark • 1 dataset
Most implemented papers
DreamFusion: Text-to-3D using 2D Diffusion
Using a loss distilled from a pretrained 2D diffusion model in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss.
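For intuition, here is a minimal sketch of one score-distillation (SDS) update of the kind this describes, assuming a generic frozen text-conditioned 2D diffusion model; the `nerf`, `camera`, and `diffusion` objects and their methods are hypothetical stand-ins, not DreamFusion's actual code.

```python
import torch

def sds_step(nerf, camera, diffusion, prompt_embedding, optimizer,
             num_train_timesteps=1000):
    """One Score Distillation Sampling (SDS) update on a NeRF (sketch)."""
    # Render the current 3D model from a random viewpoint (differentiable).
    image = nerf.render(camera)  # (1, 3, H, W), requires grad

    # Corrupt the rendering with noise at a random diffusion timestep t.
    t = torch.randint(20, num_train_timesteps - 20, (1,))
    noise = torch.randn_like(image)
    alpha_bar = diffusion.alphas_cumprod[t].view(1, 1, 1, 1)
    noisy = alpha_bar.sqrt() * image + (1 - alpha_bar).sqrt() * noise

    # The frozen 2D diffusion model predicts the noise; it is never trained.
    with torch.no_grad():
        eps_pred = diffusion.predict_noise(noisy, t, prompt_embedding)

    # SDS gradient w(t) * (eps_pred - noise), injected via a surrogate loss
    # whose gradient w.r.t. the rendered image equals exactly that quantity.
    w = 1.0 - alpha_bar
    grad = w * (eps_pred - noise)
    loss = (image * grad.detach()).sum()

    optimizer.zero_grad()
    loss.backward()   # backpropagates through the renderer into the NeRF
    optimizer.step()
```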
Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation
Key to Fantasia3D is the disentangled modeling and learning of geometry and appearance.
Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures
This unique combination of text and shape guidance allows for increased control over the generation process.
Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior
In this work, we investigate the problem of creating high-fidelity 3D content from only a single image.
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
In comparison, VSD works well with various CFG weights as ancestral sampling from diffusion models and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., $7.5$).
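For reference, the CFG weight mentioned here blends conditional and unconditional noise predictions; below is a minimal sketch, where `diffusion.predict_noise` is a hypothetical stand-in for a frozen text-conditioned diffusion model.

```python
def cfg_noise(diffusion, noisy, t, text_emb, null_emb, cfg_weight=7.5):
    """Classifier-free guidance (CFG) noise prediction (sketch)."""
    eps_cond = diffusion.predict_noise(noisy, t, text_emb)    # with prompt
    eps_uncond = diffusion.predict_noise(noisy, t, null_emb)  # without prompt
    # The weight scales how strongly the prompt steers the prediction;
    # SDS-style methods typically need a much larger weight than the
    # common 7.5 used for ordinary ancestral sampling.
    return eps_uncond + cfg_weight * (eps_cond - eps_uncond)
```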
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image.
GeoDream: Disentangling 2D and Geometric Priors for High-Fidelity and Consistent 3D Generation
We show that the refined 3D geometric priors strengthen the 3D awareness of 2D diffusion priors, which in turn provide superior guidance for further refining the 3D geometric priors.
Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting
Building upon our MVControl architecture, we employ a unique hybrid diffusion guidance method to direct the optimization process.
DreamView: Injecting View-specific Text Guidance into Text-to-3D Generation
Text-to-3D generation, which synthesizes 3D assets according to an overall text description, has made significant progress.
Intelligent Home 3D: Automatic 3D-House Design from Linguistic Descriptions Only
To this end, we propose a House Plan Generative Model (HPGM) that first translates the language input to a structural graph representation, then predicts the layout of rooms with a Graph Conditioned Layout Prediction Network (GC-LPN), and generates the interior texture with a Language Conditioned Texture GAN (LCT-GAN).
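The pipeline is easiest to see as three stages; below is a hedged Python sketch in which `parse_to_graph`, `gc_lpn`, and `lct_gan` are hypothetical stand-ins for the paper's parser, layout network, and texture GAN.

```python
from dataclasses import dataclass

@dataclass
class HousePlan:
    layout: dict    # room name -> 2D bounding box predicted by GC-LPN
    textures: dict  # room surface -> texture image synthesized by LCT-GAN

def generate_house(description: str, parse_to_graph, gc_lpn, lct_gan) -> HousePlan:
    """Stage-by-stage sketch of the HPGM pipeline (hypothetical callables)."""
    graph = parse_to_graph(description)  # 1. language -> structural graph of rooms
    layout = gc_lpn(graph)               # 2. GC-LPN: graph -> room layout
    textures = lct_gan(graph)            # 3. LCT-GAN: language -> interior textures
    return HousePlan(layout=layout, textures=textures)
```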