Inverse Rendering
63 papers with code • 1 benchmark • 3 datasets
Inverse Rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Given an observation of the scene, the goal is to estimate these underlying properties and to synthesize new images or videos from them.
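As a concrete picture of the task, the sketch below frames inverse rendering as analysis-by-synthesis: scene parameters are optimized so that a differentiable renderer reproduces the observed image. The `render` function here is a hypothetical stand-in for a real renderer, and all data is placeholder.

```python
import torch

target = torch.rand(3, 64, 64)  # observed image (placeholder data)

# Scene parameters to recover; a single diffuse albedo and a light direction
# stand in for full shape/material/lighting.
albedo = torch.rand(3, requires_grad=True)
light_dir = torch.randn(3, requires_grad=True)

def render(albedo, light_dir):
    # Hypothetical stand-in renderer: flat Lambertian shading of a plane
    # facing +z, broadcast over the whole image. A real system would
    # rasterize or ray-trace the scene here.
    normal = torch.tensor([0.0, 0.0, 1.0])
    shading = torch.relu(torch.nn.functional.normalize(light_dir, dim=0) @ normal)
    return (albedo * shading).view(3, 1, 1).expand(3, 64, 64)

opt = torch.optim.Adam([albedo, light_dir], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(albedo, light_dir), target)
    loss.backward()  # gradients flow through the renderer to the scene parameters
    opt.step()
```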
Most implemented papers
ADOP: Approximate Differentiable One-Pixel Point Rendering
Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud.
Extracting Triangular 3D Models, Materials, and Lighting From Images
We present an efficient method for joint optimization of topology, materials and lighting from multi-view image observations.
Intrinsic Image Decomposition via Ordinal Shading
We encourage the model to learn an accurate decomposition by computing losses on the estimated shading as well as the albedo implied by the intrinsic model.
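A minimal sketch of the loss structure this describes, assuming the standard intrinsic model image = albedo × shading, so that the albedo implied by a shading estimate is the image divided by that shading. Names and shapes are illustrative, not the paper's code.

```python
import torch

def intrinsic_losses(image, pred_shading, gt_shading, gt_albedo, eps=1e-4):
    # Direct supervision on the estimated shading.
    shading_loss = torch.nn.functional.mse_loss(pred_shading, gt_shading)
    # Albedo implied by the intrinsic model: divide the image by the
    # predicted shading, so shading errors also show up as albedo errors.
    implied_albedo = image / (pred_shading + eps)
    albedo_loss = torch.nn.functional.mse_loss(implied_albedo, gt_albedo)
    return shading_loss + albedo_loss
```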
SfSNet: Learning Shape, Reflectance and Illuminance of Faces in the Wild
SfSNet learns from a mixture of labeled synthetic and unlabeled real-world images.
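One common way such mixed training is set up is sketched below: synthetic batches get direct supervision on the predicted components, while real images contribute only a self-supervised reconstruction loss. The `net` and `shade` functions are hypothetical placeholders, not SfSNet's actual interfaces.

```python
import torch

def training_loss(net, shade, synth_img, synth_labels, real_img):
    # Supervised loss on synthetic data, where ground truth is available.
    n_s, a_s, l_s = net(synth_img)
    sup = sum(torch.nn.functional.l1_loss(p, t)
              for p, t in zip((n_s, a_s, l_s), synth_labels))
    # Self-supervised reconstruction loss on unlabeled real images:
    # re-render from the predicted components and compare to the input.
    n_r, a_r, l_r = net(real_img)
    recon = torch.nn.functional.l1_loss(shade(n_r, a_r, l_r), real_img)
    return sup + recon
```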
RenderNet: A deep convolutional network for differentiable rendering from 3D shapes
We present RenderNet, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes.
Differentiable Monte Carlo Ray Tracing through Edge Sampling
We introduce a general-purpose differentiable ray tracer, which, to our knowledge, is the first comprehensive solution that is able to compute derivatives of scalar functions over a rendered image with respect to arbitrary scene parameters such as camera pose, scene geometry, materials, and lighting parameters.
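Schematically, the difficulty this paper tackles is that a pixel value is an integral whose integrand jumps at occlusion boundaries, so its derivative picks up a boundary term in addition to the usual interior term; edge sampling is a Monte Carlo estimator for that boundary term. A rough form of the decomposition, in schematic notation rather than the paper's own:

$$
\nabla_\theta \int_A f(x;\theta)\,dA
= \int_A \nabla_\theta f(x;\theta)\,dA
+ \int_{\partial A(\theta)} \big(f_{\mathrm{out}}(x) - f_{\mathrm{in}}(x)\big)\,
  \Big(n(x) \cdot \frac{\partial x}{\partial \theta}\Big)\, ds,
$$

where $A$ is the pixel footprint, $\partial A(\theta)$ the set of visibility edges, $f_{\mathrm{in}}, f_{\mathrm{out}}$ the integrand values on either side of an edge, and $n(x)$ the edge normal; the first term is estimated by ordinary area sampling, the second by sampling points on silhouette edges.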
InverseRenderNet: Learning single image inverse rendering
By incorporating a differentiable renderer, our network can learn from self-supervision.
Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF from a Single Image
Our inverse rendering network incorporates physical insights -- including a spatially-varying spherical Gaussian lighting representation, a differentiable rendering layer to model scene appearance, a cascade structure to iteratively refine the predictions and a bilateral solver for refinement -- allowing us to jointly reason about shape, lighting, and reflectance.
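A minimal sketch of evaluating a spherical Gaussian (SG) lighting lobe, L(v) = μ · exp(λ(v·ξ − 1)); "spatially-varying" here means each pixel carries its own set of K lobes. All shapes and parameter names are illustrative assumptions, not the paper's code.

```python
import torch

def eval_sg(v, xi, sharpness, amplitude):
    """Evaluate spherical Gaussian lobes in direction v.
    v:         (..., 3) unit query directions
    xi:        (..., K, 3) unit lobe axes per pixel
    sharpness: (..., K, 1) lobe sharpness lambda
    amplitude: (..., K, 3) RGB lobe amplitude mu
    Returns (..., 3) RGB radiance, summed over the K lobes.
    """
    cos = (v.unsqueeze(-2) * xi).sum(dim=-1, keepdim=True)  # (..., K, 1)
    return (amplitude * torch.exp(sharpness * (cos - 1.0))).sum(dim=-2)
```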
Differentiable Surface Splatting for Point-based Geometry Processing
We propose Differentiable Surface Splatting (DSS), a high-fidelity differentiable renderer for point clouds.
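The core differentiability trick in point splatting is to give each projected point a soft footprint, so pixel colors depend smoothly on point positions. The sketch below shows that soft-weight idea with a plain isotropic Gaussian; the actual DSS method uses EWA surface splatting with proper visibility handling.

```python
import torch

def splat(points_xy, colors, H=64, W=64, sigma=1.0):
    """points_xy: (N, 2) positions in (row, col) pixel coordinates,
    colors: (N, 3) RGB. Returns an (H, W, 3) image."""
    ys = torch.arange(H, dtype=torch.float32)
    xs = torch.arange(W, dtype=torch.float32)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (H, W, 2)
    d2 = ((grid[None] - points_xy[:, None, None, :]) ** 2).sum(-1)     # (N, H, W)
    w = torch.exp(-d2 / (2 * sigma ** 2))          # soft Gaussian footprints
    num = (w.unsqueeze(-1) * colors[:, None, None, :]).sum(0)          # (H, W, 3)
    # Normalized blend; gradients flow back to points_xy through w.
    return num / (w.sum(0).unsqueeze(-1) + 1e-8)
```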
Deep Single-Image Portrait Relighting
In this work, we apply a physically-based portrait relighting method to generate a large-scale, high-quality, "in the wild" portrait relighting dataset (DPR).