Generative Modeling in Function Spaces
A Theoretical and Algorithmic Review
Abstract
Generative AI has made remarkable progress with GANs, VAEs, and diffusion models, but almost all of these methods operate on finite-dimensional vectors. Many real problems, however, are best described as continuous functions: signals, images, shapes, or physical fields. Forcing them onto grids of pixels or voxels obscures their underlying structure. This series explores a newer frontier: generative modeling in function spaces. We review the mathematical foundations, the algorithms that extend diffusion and flow models to this setting, and the possibilities they unlock for science and beyond.
References
- Neural Operator: Learning Maps Between Function Spaces (JMLR 2021)
- Generative Adversarial Neural Operators (TMLR 2022)
- Diffusion Probabilistic Fields (ICLR 2023)
- Score-based Generative Modeling through Stochastic Evolution Equations in Hilbert Spaces (NeurIPS 2023)
- Manifold Diffusion Fields (ICLR 2024)
- Functional Flow Matching (Oral, AISTATS 2024)
- Infinite-Dimensional Diffusion Models (JMLR 2024)
- Functional Diffusion (CVPR 2024)
- Score-based Diffusion Models in Function Space (JMLR 2025)
- FunDiff: Diffusion Models over Function Spaces for Physics-Informed Generative Modeling (2025)
- Multitask Learning with Stochastic Interpolants (2025)
Code
- neuraloperator is a comprehensive library for learning neural operators in PyTorch. It is the official implementation of Fourier Neural Operators and Tensorized Neural Operators.
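To make the idea behind these libraries concrete, here is a minimal sketch of the spectral-convolution layer at the heart of a Fourier Neural Operator: transform the input function to frequency space, apply a learned linear map to a fixed number of low-frequency modes, and transform back. This is an illustrative reimplementation in plain PyTorch under assumed shapes and names (SpectralConv1d, n_modes), not the neuraloperator library's actual API.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Illustrative 1D spectral convolution (the core FNO building block)."""

    def __init__(self, in_channels, out_channels, n_modes):
        super().__init__()
        self.n_modes = n_modes  # number of low-frequency Fourier modes kept
        scale = 1.0 / (in_channels * out_channels)
        # One learned complex linear map per retained mode.
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, n_modes,
                                dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, in_channels, grid_points) -- samples of a function on a grid
        x_ft = torch.fft.rfft(x)  # to frequency space
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device,
        )
        # Learned channel mixing on the retained low-frequency modes;
        # higher modes are truncated (set to zero).
        out_ft[:, :, : self.n_modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.n_modes], self.weights
        )
        # Back to physical space, on the original grid resolution.
        return torch.fft.irfft(out_ft, n=x.size(-1))

layer = SpectralConv1d(in_channels=1, out_channels=1, n_modes=8)
u = torch.sin(torch.linspace(0, 6.28, 64)).reshape(1, 1, 64)
v = layer(u)
print(v.shape)  # same grid shape as the input: (1, 1, 64)
```

Because the learned weights live on Fourier modes rather than grid points, the same layer can be evaluated on grids of different resolutions, which is what makes these models operators on functions rather than maps between fixed-size vectors.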