Universal Approximation Theorems


Abstract

A neural network is a network of artificial neurons arranged in layers. From a mathematical standpoint, each neuron computes the composition of a nonlinear activation function with a linear function of its inputs. Universal approximation theorems concern the approximation capabilities of such neural networks. In general, they state that neural networks with appropriately chosen parameters can approximate any continuous function on a compact domain to arbitrary accuracy. This study reviews universal approximation theorems, starting from Taylor's theorem, Fourier's theorem, the Weierstrass approximation theorem, the Kolmogorov-Arnold representation theorem, and more. Finally, we discuss the implications of universal approximation theorems from both the arbitrary-width and arbitrary-depth perspectives.
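As a minimal illustration of the arbitrary-width case (not taken from this study; the construction and function names below are ours), the sketch below builds a one-hidden-layer ReLU network that reproduces the piecewise-linear interpolant of a continuous function, here sin on [0, pi]. Adding more hidden units (knots) drives the maximum error toward zero, which is the spirit of width-based universal approximation results.

```python
import math

def relu(z):
    return max(0.0, z)

def build_network(f, a, b, n):
    """One-hidden-layer ReLU network interpolating f at n+1 equispaced knots.

    The network is bias + sum_k w_k * relu(x - t_k), which equals the
    piecewise-linear interpolant of f on [a, b].
    """
    h = (b - a) / n
    knots = [a + k * h for k in range(n + 1)]
    ys = [f(t) for t in knots]
    slopes = [(ys[k + 1] - ys[k]) / h for k in range(n)]
    # Output weights: the first slope, then the change of slope at each
    # interior knot (a telescoping sum recovers each linear piece).
    weights = [slopes[0]] + [slopes[k] - slopes[k - 1] for k in range(1, n)]
    bias = ys[0]

    def net(x):
        return bias + sum(w * relu(x - t) for w, t in zip(weights, knots))

    return net

net = build_network(math.sin, 0.0, math.pi, 20)
grid = [k * math.pi / 400 for k in range(401)]
err = max(abs(net(x) - math.sin(x)) for x in grid)
print(f"max error with 20 hidden units: {err:.4f}")
```

With 20 hidden units the maximum error is already below 0.01; doubling the width roughly quarters it, consistent with the O(h^2) error of linear interpolation.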



References