Hypernetworks for Generalizable BRDF Representation

ECCV 2024

University of Cambridge

The hypernetwork aims to provide accurate reconstructions of unseen BRDFs from sparse and unstructured real-world reflectance measurements.

Abstract

In this paper, we introduce a technique to estimate measured BRDFs from a sparse set of samples. Our approach offers accurate BRDF reconstructions that generalize to new materials, opening the door to BRDF reconstruction from a variety of data sources. The success of our approach relies on the ability of hypernetworks to generate a robust representation of BRDFs and on a set encoder that allows us to feed inputs of different sizes to the architecture. The set encoder and the hypernetwork also enable the compression of densely sampled BRDFs. We evaluate our technique both qualitatively and quantitatively on the well-known MERL dataset of 100 isotropic materials. Our approach accurately 1) estimates the BRDFs of unseen materials even from extremely sparse samplings, and 2) compresses measured BRDFs into very small embeddings, e.g., 7D.
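To make the pipeline described above concrete, the following is a minimal sketch (not the authors' released code) of the two components: a set encoder that pools an arbitrary number of reflectance samples into a compact embedding, and a hypernetwork that maps that embedding to the weights of a small per-material BRDF decoder. The layer sizes, the 7-D embedding, and the three-angle (Rusinkiewicz-style) query parameterization are illustrative assumptions.

import torch
import torch.nn as nn


class SetEncoder(nn.Module):
    """Maps a variable-size set of (angles, RGB) samples to one embedding."""
    def __init__(self, in_dim=6, hidden=128, embed_dim=7):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, samples):          # samples: (N, 6) = 3 angles + 3 RGB
        feats = self.point_mlp(samples)  # (N, embed_dim)
        return feats.mean(dim=0)         # permutation-invariant pooling -> (embed_dim,)


class HyperNetwork(nn.Module):
    """Predicts the weights of a small BRDF decoder MLP from the embedding."""
    def __init__(self, embed_dim=7, dec_in=3, dec_hidden=32, dec_out=3):
        super().__init__()
        self.shapes = [(dec_hidden, dec_in), (dec_hidden,),
                       (dec_out, dec_hidden), (dec_out,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, z):
        flat = self.mlp(z)
        params, i = [], 0
        for s in self.shapes:            # unpack the flat vector into decoder weights
            n = torch.Size(s).numel()
            params.append(flat[i:i + n].view(s))
            i += n
        return params                    # [W1, b1, W2, b2] of the decoder


def decode_brdf(params, query):
    """Evaluates the hypernetwork-generated decoder at BRDF query angles."""
    W1, b1, W2, b2 = params
    h = torch.relu(query @ W1.t() + b1)
    return h @ W2.t() + b2               # (M, 3) RGB reflectance


encoder, hyper = SetEncoder(), HyperNetwork()
sparse_samples = torch.rand(40, 6)       # e.g., 40 unstructured measurements
z = encoder(sparse_samples)              # 7-D material embedding
decoder_params = hyper(z)
queries = torch.rand(1000, 3)            # angles at which to evaluate the BRDF
rgb = decode_brdf(decoder_params, queries)

Because the pooling step is permutation-invariant, the same encoder accepts anywhere from a few dozen to thousands of samples, which is what lets a single trained model serve both sparse reconstruction and compression of densely sampled BRDFs.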

  • Generalizability and Adaptability: Our methodology ensures robustness and adaptability across varying sample sizes and appearances, making it highly effective for estimating the BRDFs of unseen materials from highly sparse and unstructured sampling. Its adaptability also extends to a highly compact representation of BRDFs, overall outperforming the prior state-of-the-art compression method.
  • Realism and Accuracy: Our extensive evaluation demonstrates the superior performance of our approach in reconstructing the BRDFs of 20 test materials from a limited number of samples, ranging from 40 to 4000, outperforming prior methods in terms of appearance modeling and color preservation (by at least 2 dB in peak signal-to-noise ratio and 1 in Delta E; see the metric sketch below).
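For reference, here is a minimal sketch of the two reported metrics, assuming renderings given as floating-point sRGB images in [0, 1] and the CIE76 colour-difference formula; the exact Delta E variant used in the paper may differ.

import numpy as np
from skimage.color import rgb2lab


def psnr(reference, estimate, peak=1.0):
    # Peak signal-to-noise ratio in dB between two renderings.
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)


def mean_delta_e(reference, estimate):
    # Mean per-pixel CIE76 colour difference, computed in CIELAB space.
    lab_ref, lab_est = rgb2lab(reference), rgb2lab(estimate)
    return np.mean(np.linalg.norm(lab_ref - lab_est, axis=-1))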

Scene renderings with our reconstructed materials, including sparse representation, compression, and interpolation.

BibTeX

@inproceedings{gokbudak2023hypernetworks,
  title      = {Hypernetworks for Generalizable BRDF Representation},
  author     = {Fazilet Gokbudak and Alejandro Sztrajman and Chenliang Zhou and Fangcheng Zhong and Rafal Mantiuk and Cengiz Oztireli},
  booktitle  = {The 18th European Conference on Computer Vision (ECCV)},
  year       = {2024}
}