Novel View Synthesis
327 papers with code • 17 benchmarks • 33 datasets
Synthesize a target image from an arbitrary target camera pose, given source images and their camera poses.
See the Wiki for a more detailed introduction.
Common synthesis methods include neural radiance fields (NeRF), multi-plane images (MPI), and others.
(Image credit: Multi-view to Novel view: Synthesizing novel views with Self-Learned Confidence)
Libraries
Use these libraries to find Novel View Synthesis models and implementations.
Most implemented papers
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
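As a rough illustration of that input/output mapping, here is a minimal sketch, not the paper's exact architecture: the layer widths, depth, and number of encoding frequencies below are assumptions.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Sketch of the NeRF mapping: 5D coordinate -> (density, view-dependent RGB).
    Widths, depth, and num_freqs are illustrative, not the paper's exact values."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 5 * 2 * num_freqs  # sin/cos per frequency for (x, y, z, theta, phi)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density + 3 color channels
        )

    def positional_encoding(self, x):
        # gamma(p) = (sin(2^k * pi * p), cos(2^k * pi * p)) for k = 0..L-1
        freqs = (2.0 ** torch.arange(self.num_freqs)) * torch.pi
        angles = x[..., None] * freqs                   # (N, 5, L)
        return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

    def forward(self, coords):                          # coords: (N, 5)
        out = self.mlp(self.positional_encoding(coords))
        sigma = torch.relu(out[..., :1])                # volume density >= 0
        rgb = torch.sigmoid(out[..., 1:])               # emitted radiance in [0, 1]
        return sigma, rgb

sigma, rgb = TinyNeRF()(torch.rand(1024, 5))            # a batch of sampled coordinates
```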
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
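The title's remedy can be sketched as follows: each resolution level owns a small hash table of trainable feature vectors, and a query point gathers trilinearly blended corner features from every level. A simplified sketch (table size, level count, and growth factor are assumptions; the per-axis primes follow the common choice from the paper, and the real implementation is a fused CUDA kernel):

```python
import torch
import torch.nn as nn

class HashEncoding(nn.Module):
    """Simplified multiresolution hash encoding for points in [0, 1]^3."""
    def __init__(self, levels=4, table_size=2**14, feat_dim=2, base_res=16, growth=2.0):
        super().__init__()
        self.resolutions = [int(base_res * growth ** l) for l in range(levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim)) for _ in range(levels)]
        )
        self.primes = torch.tensor([1, 2654435761, 805459861])  # per-axis hashing primes

    def spatial_hash(self, ijk, table_size):
        # XOR the integer grid coordinates scaled by large primes, then wrap
        a, b, c = (ijk * self.primes).unbind(-1)
        return (a ^ b ^ c) % table_size

    def forward(self, x):                                # x: (N, 3) in [0, 1]
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            pos = x * res
            lo, frac = pos.floor().long(), pos - pos.floor()
            out = 0.0
            for corner in range(8):                      # trilinear blend over 8 corners
                offset = torch.tensor([(corner >> d) & 1 for d in range(3)])
                w = torch.where(offset.bool(), frac, 1.0 - frac).prod(-1, keepdim=True)
                out = out + w * table[self.spatial_hash(lo + offset, table.shape[0])]
            feats.append(out)
        return torch.cat(feats, dim=-1)                  # (N, levels * feat_dim)

features = HashEncoding()(torch.rand(1024, 3))           # input to a small MLP
```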
NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation.
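A condensed sketch of that conversion, turning SDF samples along a ray into the alpha values volume rendering consumes (the sharpness s, which NeuS learns, and the toy sphere SDF are placeholders):

```python
import torch

def sdf_to_alpha(sdf_along_ray, s=64.0):
    """NeuS-style opacity: alpha_i = max((Phi(f_i) - Phi(f_{i+1})) / Phi(f_i), 0),
    where Phi is a sigmoid of sharpness s applied to consecutive SDF samples."""
    phi = torch.sigmoid(s * sdf_along_ray)               # (num_rays, num_samples)
    alpha = (phi[..., :-1] - phi[..., 1:]) / phi[..., :-1].clamp(min=1e-6)
    return alpha.clamp(min=0.0)

def render_weights(alpha):
    """Standard volume-rendering weights: w_i = alpha_i * prod_{j<i} (1 - alpha_j)."""
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[..., :1]),
                                     1.0 - alpha[..., :-1]], dim=-1), dim=-1)
    return alpha * trans

# Toy example: SDF of a sphere of radius 0.5, centered at t = 1 along the ray
t = torch.linspace(0.0, 2.0, 128)
sdf = (t - 1.0).abs() - 0.5                              # signed distance along the ray
w = render_weights(sdf_to_alpha(sdf.unsqueeze(0)))       # weights peak near the surface
```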
NeRF--: Neural Radiance Fields Without Known Camera Parameters
Considering the problem of novel view synthesis (NVS) from only a set of 2D images, we simplify the training process of Neural Radiance Field (NeRF) on forward-facing scenes by removing the requirement of known or pre-computed camera parameters, including both intrinsics and 6DoF poses.
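Mechanically, this means the intrinsics and per-image poses become trainable parameters updated by the same photometric loss as the radiance field. A minimal sketch, assuming an axis-angle rotation parameterization and a single shared focal length:

```python
import torch
import torch.nn as nn

class LearnableCameras(nn.Module):
    """Per-image camera parameters trained jointly with the radiance field."""
    def __init__(self, num_images):
        super().__init__()
        self.log_focal = nn.Parameter(torch.zeros(()))          # shared focal length
        self.rot = nn.Parameter(torch.zeros(num_images, 3))     # axis-angle rotations
        self.trans = nn.Parameter(torch.zeros(num_images, 3))   # camera translations

    def rotation_matrix(self, i):
        # Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix
        v = self.rot[i]
        theta = v.norm().clamp(min=1e-8)
        k = v / theta
        zero = torch.zeros((), dtype=v.dtype)
        K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                         torch.stack([k[2], zero, -k[0]]),
                         torch.stack([-k[1], k[0], zero])])
        return torch.eye(3) + theta.sin() * K + (1.0 - theta.cos()) * (K @ K)

cams = LearnableCameras(num_images=30)
# In training, these parameters would join the NeRF weights in one optimizer,
# so the photometric loss also updates poses and focal length.
optimizer = torch.optim.Adam(cams.parameters(), lr=1e-3)
R, t, f = cams.rotation_matrix(0), cams.trans[0], cams.log_focal.exp()
```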
PlenOctrees for Real-time Rendering of Neural Radiance Fields
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.
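The real-time speed comes from baking view-dependent color into spherical-harmonic (SH) coefficients stored at octree leaves, so no network runs at render time. A sketch of the per-leaf color lookup (degree-1 SH for brevity; the published method uses higher degrees, and octree traversal is omitted):

```python
import torch

def sh_basis_deg1(dirs):
    """Real spherical harmonics up to degree 1 for unit view directions (N, 3)."""
    x, y, z = dirs.unbind(-1)
    c0 = 0.28209479177 * torch.ones_like(x)              # Y_0^0
    return torch.stack([c0,
                        -0.48860251190 * y,              # Y_1^{-1}
                         0.48860251190 * z,              # Y_1^{0}
                        -0.48860251190 * x],             # Y_1^{1}
                       dim=-1)                           # (N, 4)

def leaf_color(sh_coeffs, dirs):
    """sh_coeffs: (N, 3, 4) RGB SH coefficients stored in an octree leaf."""
    basis = sh_basis_deg1(dirs)                          # (N, 4)
    rgb = (sh_coeffs * basis[:, None, :]).sum(-1)        # dot product per channel
    return torch.sigmoid(rgb)                            # (N, 3)

dirs = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
rgb = leaf_color(torch.randn(8, 3, 4), dirs)             # view-dependent color, no MLP
```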
View Synthesis by Appearance Flow
We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints.
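Rather than generating pixels from scratch, the network predicts an "appearance flow" field telling each target pixel where to sample the source image, followed by bilinear sampling. A sketch of the sampling step (the flow-prediction network is elided; coordinates follow PyTorch's grid_sample convention):

```python
import torch
import torch.nn.functional as F

def warp_by_appearance_flow(source, flow):
    """source: (B, 3, H, W) input image; flow: (B, H, W, 2) sampling
    coordinates in [-1, 1] (x, y), as predicted by a flow network."""
    return F.grid_sample(source, flow, mode='bilinear', align_corners=True)

# Identity flow as a smoke test: output reproduces the input
B, H, W = 1, 64, 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing='ij')
identity = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)
out = warp_by_appearance_flow(torch.rand(B, 3, H, W), identity)
```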
Deferred Neural Rendering: Image Synthesis using Neural Textures
Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
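In code terms: rasterize the mesh proxy to per-pixel UV coordinates, sample a trainable feature texture at those UVs, and decode the feature image to RGB. A sketch with the rasterizer elided (texture resolution, feature dimension, and the decoder layers are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTextureRenderer(nn.Module):
    def __init__(self, feat_dim=16, tex_res=512):
        super().__init__()
        # trainable "neural texture": a feature map instead of RGB texels
        self.texture = nn.Parameter(torch.randn(1, feat_dim, tex_res, tex_res) * 0.01)
        self.decoder = nn.Sequential(                   # stand-in deferred renderer
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, uv):                              # uv: (B, H, W, 2) in [-1, 1]
        feats = F.grid_sample(self.texture.expand(uv.shape[0], -1, -1, -1),
                              uv, mode='bilinear', align_corners=True)
        return self.decoder(feats)                      # (B, 3, H, W)

renderer = NeuralTextureRenderer()
uv = torch.rand(2, 128, 128, 2) * 2 - 1                 # per-pixel UVs from rasterization
image = renderer(uv)
```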
HoloGAN: Unsupervised learning of 3D representations from natural images
HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner.
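The unsupervised 3D structure comes from generating a 3D feature volume, rigidly transforming it by a randomly sampled pose, and only then projecting to 2D for the discriminator. A sketch of the transform step (tensor sizes are illustrative):

```python
import torch
import torch.nn.functional as F

def rigid_transform_features(volume, R):
    """Rotate a generated 3D feature volume (B, C, D, H, W) by rotation
    matrices R (B, 3, 3) before it is projected to a 2D image."""
    B = volume.shape[0]
    theta = torch.cat([R, torch.zeros(B, 3, 1)], dim=-1)  # (B, 3, 4), no translation
    grid = F.affine_grid(theta, list(volume.shape), align_corners=False)
    return F.grid_sample(volume, grid, align_corners=False)

# Random orthonormal matrices via QR decomposition as stand-in sampled poses
Q, _ = torch.linalg.qr(torch.randn(2, 3, 3))
rotated = rigid_transform_features(torch.randn(2, 8, 16, 16, 16), Q)
```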
SynSin: End-to-end View Synthesis from a Single Image
Single image view synthesis allows for the generation of new views of a scene given a single input image.
Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
To this end, we propose Neural Body, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated.
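A rough sketch of "latent codes anchored to a deformable mesh": each vertex of the posed SMPL mesh carries a trainable code shared across frames, and a query point gathers codes from its nearest vertices. Neural Body actually diffuses the codes with a sparse 3D CNN; the inverse-distance gather below is a stand-in simplification:

```python
import torch
import torch.nn as nn

class AnchoredLatentCodes(nn.Module):
    """Trainable per-vertex codes shared across frames of a deformable mesh."""
    def __init__(self, num_vertices=6890, code_dim=16, k=4):  # 6890 = SMPL vertex count
        super().__init__()
        self.codes = nn.Parameter(torch.randn(num_vertices, code_dim) * 0.01)
        self.k = k

    def forward(self, query_pts, posed_vertices):
        # query_pts: (N, 3); posed_vertices: (V, 3) for the current frame
        d = torch.cdist(query_pts, posed_vertices)           # (N, V) distances
        dist, idx = d.topk(self.k, dim=-1, largest=False)    # k nearest vertices
        w = 1.0 / dist.clamp(min=1e-6)
        w = w / w.sum(-1, keepdim=True)                      # inverse-distance weights
        return (w[..., None] * self.codes[idx]).sum(1)       # (N, code_dim)

anchors = AnchoredLatentCodes()
feat = anchors(torch.rand(256, 3), torch.rand(6890, 3))      # input to a density/color MLP
```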