HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork

NeurIPS 2023

Bipasha Sen*¹, Gaurav Singh*¹, Aditya Agarwal*¹, Rohith Agaram¹,
K. Madhava Krishna¹, Srinath Sridhar²
*Equal Contribution, ¹IIIT Hyderabad, ²Brown University

HyP-NeRF learns a prior over implicit NeRF functions!


Generating NeRFs using HyP-NeRF from different queries in a single forward pass!

Abstract

Neural Radiance Fields (NeRF) have become an increasingly popular representation to capture high-quality appearance and shape of scenes and objects. However, learning generalizable NeRF priors over categories of scenes or objects has been challenging due to the high dimensionality of network weight space. To address the limitations of existing work on generalization and multi-view consistency, and to improve quality, we propose HyP-NeRF, a latent conditioning method for learning generalizable category-level NeRF priors using hypernetworks. Rather than using hypernetworks to estimate only the weights of a NeRF, we estimate both the weights and the multi-resolution hash encodings, resulting in significant quality gains. To improve quality even further, we incorporate a denoise-and-finetune strategy that denoises images rendered from NeRFs estimated by the hypernetwork and finetunes the NeRF on them while retaining multiview consistency. These improvements enable us to use HyP-NeRF as a generalizable prior for multiple downstream tasks including NeRF reconstruction from single-view or cluttered scenes and text-to-NeRF. We provide qualitative comparisons and evaluate HyP-NeRF on three tasks: generalization, compression, and retrieval, demonstrating state-of-the-art results.

Approach

HyP-NeRF is a latent conditioning method for learning improved-quality, generalizable category-level NeRF priors using hypernetworks. Our hypernetwork is trained to generate the parameters of both the multi-resolution hash encodings (MRHE) and the weights of a NeRF model of a given category, conditioned on an instance code. For each instance code in the learned codebook, HyP-NeRF estimates an instance-specific MRHE along with the weights of an MLP. Our key insight is that estimating both the MRHEs and the weights results in a significant improvement in quality. To improve quality even further, we denoise rendered views from the estimated NeRF model and finetune the NeRF with the denoised images to enforce multiview consistency. This denoise-and-finetune step significantly improves quality and fine details while retaining the original shape and appearance properties.
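The sketch below illustrates the core idea in PyTorch: a hypernetwork maps a learned per-instance code to both the MRHE table entries and the NeRF MLP weights. This is not the authors' implementation; the class name, head structure, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HyperNeRFPrior(nn.Module):
    """Hypothetical sketch: one shared hypernetwork + a codebook of instance codes.
    For each code it predicts (a) the multi-resolution hash-encoding tables and
    (b) the weights and biases of a small NeRF MLP."""

    def __init__(self, num_instances=1000, code_dim=128,
                 hash_table_size=2**14, hash_feat_dim=2, num_levels=16,
                 mlp_layer_sizes=((32, 64), (64, 64), (64, 4))):
        super().__init__()
        # Learned codebook of instance codes (the prior is learned over these codes).
        self.codes = nn.Embedding(num_instances, code_dim)
        # Head predicting the flattened MRHE tables for all levels.
        self.mrhe_head = nn.Linear(code_dim, num_levels * hash_table_size * hash_feat_dim)
        # One head per MLP layer, predicting its flattened weight matrix and bias.
        self.mlp_heads = nn.ModuleList(
            [nn.Linear(code_dim, fan_in * fan_out + fan_out)
             for fan_in, fan_out in mlp_layer_sizes]
        )
        self.layer_sizes = mlp_layer_sizes

    def forward(self, instance_idx):
        # instance_idx: LongTensor index into the codebook.
        z = self.codes(instance_idx)                     # (code_dim,)
        mrhe = self.mrhe_head(z)                         # flattened hash tables
        mlp_params = []
        for head, (fan_in, fan_out) in zip(self.mlp_heads, self.layer_sizes):
            p = head(z)
            W = p[: fan_in * fan_out].view(fan_out, fan_in)
            b = p[fan_in * fan_out:]
            mlp_params.append((W, b))
        return mrhe, mlp_params                          # parameters of NeRF f_n
```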

Architecture

Architectural Overview: HyP-NeRF is trained and used in two steps. In the first step (top), our hypernetwork M is trained to predict the parameters of a NeRF model f_n corresponding to object instance n. At this stage, the NeRF model acts as a set of differentiable layers used to compute the volumetric rendering loss, through which M is trained on a set of N objects, thereby learning a prior Φ = {Φ_S, Φ_C} over the shape and color codes given by S and C respectively. In the second step (bottom), the quality of the predicted multiview-consistent NeRF f_n is improved using a denoising network trained directly in the image space. To do this, f_n is rendered from multiple known poses into a set of images, which are then improved to photorealistic quality. f_n is finetuned on these improved images. Importantly, since f_n is only finetuned and not optimized from scratch, it retains multiview consistency while improving in texture and shape quality.
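A schematic of the two steps is sketched below. The helpers volume_render (a differentiable volumetric renderer) and denoiser (a pretrained image-space denoising network) are hypothetical placeholders, and the flat-list parameter representation is a simplifying assumption; this is not the released training code.

```python
import torch
import torch.nn.functional as F

def train_prior(hypernet, volume_render, dataset, epochs=100, lr=1e-3):
    """Step 1: train the hypernetwork M through a differentiable NeRF renderer
    with a volumetric rendering loss over N training objects."""
    opt = torch.optim.Adam(hypernet.parameters(), lr=lr)
    for _ in range(epochs):
        for instance_idx, pose, gt_image in dataset:
            nerf_params = hypernet(instance_idx)       # predicted parameters of f_n
            pred = volume_render(nerf_params, pose)    # differentiable rendering
            loss = F.mse_loss(pred, gt_image)
            opt.zero_grad()
            loss.backward()
            opt.step()

def denoise_and_finetune(nerf_params, volume_render, denoiser, poses, steps=500, lr=1e-4):
    """Step 2: render f_n from known poses, denoise each view in image space,
    then finetune f_n on the denoised views. Because f_n is only finetuned
    (not re-optimized from scratch), multiview consistency is retained."""
    # Assume nerf_params is a flat list of tensors predicted by the hypernetwork.
    params = [p.detach().clone().requires_grad_(True) for p in nerf_params]
    with torch.no_grad():
        targets = [denoiser(volume_render(params, pose)) for pose in poses]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        for pose, target in zip(poses, targets):
            loss = F.mse_loss(volume_render(params, pose), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return params
```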

HyP-NeRF can store thousands of NeRFs while maintaining quality comparable to InstantNGP!
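A brief usage sketch, reusing the hypothetical HyperNeRFPrior class from the Approach sketch above: every stored object is just an index into the shared codebook, so recovering NeRF n amounts to a single forward pass of one shared network rather than loading a separate per-object checkpoint.

```python
import torch

# One shared hypernetwork plus 4096 learned instance codes (illustrative size).
prior = HyperNeRFPrior(num_instances=4096)
with torch.no_grad():
    mrhe, mlp_params = prior(torch.tensor(42))   # parameters of instance 42's NeRF
# mrhe and mlp_params can now be plugged into an Instant-NGP-style renderer.
```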


Additional Results

Acknowledgements

This work was supported by NSF IIS #2143576, NSF CNS #2038897, an STTR Award from Traverse Inc., NSF CloudBank, and an AWS Cloud Credits award.

BibTeX

@inproceedings{NEURIPS2023_a0303731,
 author = {Sen, Bipasha and Singh, Gaurav and Agarwal, Aditya and Agaram, Rohith and Krishna, Madhava and Sridhar, Srinath},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {A. Oh and T. Naumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
 pages = {51050--51064},
 publisher = {Curran Associates, Inc.},
 title = {HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork},
 url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/a03037317560b8c5f2fb4b6466d4c439-Paper-Conference.pdf},
 volume = {36},
 year = {2023}
}