HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion

TUM, Apple

HyperDiffusion generates 3D and 4D shapes with a unified diffusion model.

Abstract

Implicit neural fields, typically encoded by a multilayer perceptron (MLP) that maps from coordinates (e.g., xyz) to signals (e.g., signed distances), have shown remarkable promise as a high-fidelity and compact representation. However, the lack of a regular and explicit grid structure also makes it challenging to apply generative modeling directly to implicit neural fields in order to synthesize new data.
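For readers unfamiliar with the representation, the following is a minimal PyTorch sketch of such a coordinate MLP; the layer widths, depth, and activation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SDFField(nn.Module):
    """Illustrative implicit neural field: maps (N, 3) xyz coordinates
    to (N, 1) signed distances. Sizes are assumptions for this sketch."""

    def __init__(self, hidden: int = 128, layers: int = 3):
        super().__init__()
        dims = [3] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:  # no activation after the output layer
                blocks.append(nn.ReLU())
        self.net = nn.Sequential(*blocks)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(xyz)
```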

We propose HyperDiffusion, a novel approach for unconditional generative modeling of implicit neural fields. HyperDiffusion operates directly on MLP weights and generates new neural implicit fields encoded by synthesized MLP parameters. Specifically, a collection of MLPs is first optimized to faithfully represent individual data samples. Subsequently, a diffusion process is trained in this MLP weight space to model the underlying distribution of neural implicit fields.
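The weight-space idea can be sketched as follows: flatten each fitted MLP's parameters into a single vector, stack those vectors into a dataset, and train a diffusion model over them. The generic DDPM-style epsilon-prediction loss below is assumed for illustration and is not taken from the paper's code; `denoiser` stands in for whatever network predicts the added noise.

```python
import torch
import torch.nn.functional as F

def flatten_weights(mlp: torch.nn.Module) -> torch.Tensor:
    """Concatenate all of an MLP's parameters into one 1-D vector."""
    return torch.cat([p.detach().flatten() for p in mlp.parameters()])

def ddpm_loss(denoiser, w0: torch.Tensor, alphas_cumprod: torch.Tensor) -> torch.Tensor:
    """One DDPM training step on a batch of flattened weight vectors.

    w0: (B, D) batch of flattened MLP weights.
    alphas_cumprod: (T,) cumulative noise-schedule products.
    """
    B = w0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=w0.device)
    a = alphas_cumprod[t].unsqueeze(1)            # (B, 1)
    noise = torch.randn_like(w0)
    wt = a.sqrt() * w0 + (1 - a).sqrt() * noise   # forward diffusion to step t
    pred = denoiser(wt, t)                        # predict the added noise
    return F.mse_loss(pred, noise)
```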

HyperDiffusion enables diffusion modeling over an implicit, compact, and yet high-fidelity representation of complex signals across 3D shapes and 4D mesh animations within a single unified framework.
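At generation time, a sampled weight vector is copied back into an MLP of matching architecture, which can then be queried as a fresh implicit field. Below is a minimal sketch of that step; `sample_weights` is a hypothetical stand-in for the reverse diffusion sampler.

```python
import torch

def unflatten_into(mlp: torch.nn.Module, w: torch.Tensor) -> torch.nn.Module:
    """Write a flat weight vector w back into the MLP's parameters, in order."""
    offset = 0
    with torch.no_grad():
        for p in mlp.parameters():
            n = p.numel()
            p.copy_(w[offset:offset + n].view_as(p))
            offset += n
    return mlp

# mlp = SDFField()                      # same architecture as the fitted MLPs
# w = sample_weights(denoiser)          # hypothetical reverse-process sampler
# field = unflatten_into(mlp, w)
# sdf = field(torch.rand(1024, 3))      # query the synthesized implicit field
```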

Video

3D Generations

4D Generations

BibTeX

@misc{erkoç2023hyperdiffusion,
  title={HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion}, 
  author={Ziya Erkoç and Fangchang Ma and Qi Shan and Matthias Nießner and Angela Dai},
  year={2023},
  eprint={2303.17015},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}