Neural Rendering
142 papers with code • 0 benchmarks • 7 datasets
Given a representation of a 3D scene (point cloud, mesh, voxels, etc.), the task is to design an algorithm that produces photorealistic renderings of that scene from arbitrary viewpoints. The task is sometimes accompanied by image or scene appearance manipulation.
Benchmarks
These leaderboards are used to track progress in Neural Rendering
Libraries
Use these libraries to find Neural Rendering models and implementations.
Most implemented papers
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location.
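The mapping described above can be sketched as a small MLP applied to a positional encoding of the 5D input. The sketch below is illustrative only (NumPy, random weights, a toy `TinyNeRF` class invented here), not the paper's actual architecture or training setup:

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    # Map each coordinate to sin/cos features at increasing frequencies,
    # in the spirit of NeRF's encoding gamma(x).
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    angles = x[..., None] * freqs                       # (..., D, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)               # (..., D * 2 * num_freqs)

class TinyNeRF:
    # Hypothetical minimal radiance-field network: 5D input
    # (x, y, z, theta, phi) -> (volume density, view-dependent RGB).
    def __init__(self, num_freqs=4, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.num_freqs = num_freqs
        in_dim = 5 * 2 * num_freqs
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, 4))     # 1 density + 3 RGB

    def __call__(self, coords):                         # coords: (N, 5)
        h = np.maximum(positional_encoding(coords, self.num_freqs) @ self.W1, 0.0)
        out = h @ self.W2
        density = np.log1p(np.exp(out[:, 0]))           # softplus -> non-negative
        rgb = 1.0 / (1.0 + np.exp(-out[:, 1:]))         # sigmoid -> [0, 1]
        return density, rgb

model = TinyNeRF()
density, rgb = model(np.random.default_rng(1).uniform(-1, 1, (8, 5)))
```

A full renderer would then composite these (density, RGB) samples along camera rays via volume rendering, which is omitted here.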
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
PlenOctrees for Real-time Rendering of Neural Radiance Fields
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.
Deferred Neural Rendering: Image Synthesis using Neural Textures
Similar to traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, the high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline.
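The lookup step for such a feature map can be sketched as bilinear sampling of a high-dimensional texture at continuous UV coordinates; the function name and toy texture below are assumptions for illustration, and the real pipeline would feed the sampled features to a learned deferred rendering network:

```python
import numpy as np

def sample_neural_texture(texture, uv):
    # Bilinearly sample a feature map `texture` of shape (H, W, C) at
    # continuous UV coordinates `uv` in [0, 1]^2, shape (N, 2).
    H, W, _ = texture.shape
    x = uv[:, 0] * (W - 1)
    y = uv[:, 1] * (H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    # Interpolate horizontally on the top and bottom rows, then vertically.
    top = texture[y0, x0] * (1 - wx)[:, None] + texture[y0, x1] * wx[:, None]
    bot = texture[y1, x0] * (1 - wx)[:, None] + texture[y1, x1] * wx[:, None]
    return top * (1 - wy)[:, None] + bot * wy[:, None]  # (N, C)

tex = np.arange(12, dtype=float).reshape(2, 2, 3)       # toy 2x2, 3-channel map
feats = sample_neural_texture(tex, np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]]))
```

Unlike an RGB texture, `C` here can be any feature dimension; the interpretation of those channels is left entirely to the downstream network.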
Zero-Shot Text-Guided Object Generation with Dream Fields
Our method, Dream Fields, can generate the geometry and color of a wide range of objects without 3D supervision.
Collaborative Neural Rendering using Anime Character Sheets
Drawing images of characters with desired poses is an essential but laborious task in anime production.
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis
We have witnessed rapid progress on 3D-aware image synthesis, leveraging recent advances in generative visual models and neural rendering.
Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs
We present a simple neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations.
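The core operation of a spatial broadcast decoder is simple enough to sketch directly: tile the latent vector across a spatial grid and append fixed coordinate channels before convolutional decoding. The function below is a minimal NumPy sketch of that broadcast step only, with names chosen here for illustration:

```python
import numpy as np

def spatial_broadcast(z, height, width):
    # Tile each latent vector z (N, D) across an H x W grid and append
    # fixed x/y coordinate channels, giving (N, H, W, D + 2) input for
    # a convolutional decoder.
    N, D = z.shape
    tiled = np.broadcast_to(z[:, None, None, :], (N, height, width, D))
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    coords = np.broadcast_to(np.stack([xs, ys], axis=-1),
                             (N, height, width, 2))
    return np.concatenate([tiled, coords], axis=-1)

z = np.random.default_rng(0).normal(size=(4, 6))
grid = spatial_broadcast(z, 5, 7)                       # (4, 5, 7, 8)
```

The appended coordinate channels let position-agnostic convolutions produce position-dependent output, which is what makes the decoder so simple compared to deconvolutional upsampling stacks.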
A Neural Rendering Framework for Free-Viewpoint Relighting
We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.
CONFIG: Controllable Neural Face Image Generation
Our ability to sample realistic natural images, particularly faces, has advanced by leaps and bounds in recent years, yet our ability to exert fine-tuned control over the generative process has lagged behind.