Deep Neural Networks: Multi-Classification and Universal Approximation

Hernández Salinas M, Zuazua Iriondo E (2025)


Publication Language: English

Publication Status: Submitted

Publication Type: Unpublished / Preprint

Future Publication Type: Journal article

Publication year: 2025

URI: https://arxiv.org/abs/2409.06555v1

Open Access Link: https://arxiv.org/abs/2308.15257

Abstract

We demonstrate that a deep ReLU neural network of width 2 and depth 2N + 4M − 1 can achieve finite-sample memorization for any dataset comprising N elements in R^d, where d ≥ 1, and M classes, thereby ensuring accurate classification.
By modeling the neural network as a time-discrete nonlinear dynamical system, we interpret the memorization property as a problem of simultaneous or ensemble controllability. This problem is addressed by constructing the network parameters inductively and explicitly, bypassing the need for training or solving any optimization problem.
Additionally, we establish that such a network can achieve universal approximation in L^p(Ω; R_+), where Ω is a bounded subset of R^d and p ∈ [1, ∞), using a deep ReLU neural network of width d + 1. We also provide depth estimates for approximating W^{1,p} functions and width estimates for approximating L^p(Ω; R^m) for m ≥ 1. Our proofs are constructive, offering explicit values for the biases and weights involved.
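The abstract views the network as a time-discrete dynamical system, x_{k+1} = ReLU(W_k x_k + b_k), with explicitly constructed parameters. The following is a minimal numpy sketch of that viewpoint on a toy dataset (N = 2 points in R^1, M = 2 classes); the parameters are hand-chosen for illustration and are not the paper's inductive construction.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, layers):
    """Apply a sequence of affine maps followed by ReLU,
    treating the network as a discrete-time dynamical system
    x_{k+1} = relu(W_k @ x_k + b_k)."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

# Toy dataset: x = 0.0 has label 0, x = 1.0 has label 1.
# Hand-chosen width-2 parameters (illustrative only).
layers = [
    (np.array([[1.0], [1.0]]), np.array([0.0, -0.5])),  # lift input to width 2
    (np.array([[0.0, 2.0]]), np.array([0.0])),          # read out the class label
]

print(forward(np.array([0.0]), layers))  # -> [0.]
print(forward(np.array([1.0]), layers))  # -> [1.]
```

Since the labels are nonnegative, applying ReLU after the final affine map (as in the L^p(Ω; R_+) setting) leaves them unchanged.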

How to cite

APA:

Hernández Salinas, M., & Zuazua Iriondo, E. (2025). Deep Neural Networks: Multi-Classification and Universal Approximation. (Unpublished, Submitted).

MLA:

Hernández Salinas, Martin, and Enrique Zuazua Iriondo. Deep Neural Networks: Multi-Classification and Universal Approximation. Unpublished, Submitted. 2025.
