
Achieving the Optimal Approximation Rate of Nonlinear Shallow Neural Networks through Simple Linearization
In this talk, I will report a new result: nonlinear shallow neural networks with ReLU^k activation functions achieve their optimal approximation rates even when they are simplified to linearized networks with fixed, preselected weights and biases. Moreover, we show that these linearized neural networks significantly outperform traditional finite element spaces composed of piecewise polynomials of degree k in terms of approximation efficiency.
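
To make the notion of a "linearized" network concrete, the following is a minimal sketch (my own illustration, not the authors' construction) of a 1D shallow ReLU^k model in which the inner weights and biases are fixed in advance, so that fitting the network reduces to a single linear least-squares solve for the outer coefficients. The target function, the choice of signs and equispaced biases, and all parameter values are illustrative assumptions.

```python
import numpy as np

k = 3                 # power in the ReLU^k activation (assumed value)
n_neurons = 64        # number of fixed neurons (assumed value)
rng = np.random.default_rng(0)

# Preselected, frozen inner parameters: random signs and equispaced biases.
w = rng.choice([-1.0, 1.0], size=n_neurons)
b = np.linspace(-1.0, 1.0, n_neurons)

def features(x):
    """ReLU^k feature map with fixed inner weights and biases."""
    return np.maximum(np.outer(x, w) + b, 0.0) ** k

# Example target function on [-1, 1].
x_train = np.linspace(-1.0, 1.0, 400)
y_train = np.sin(3 * np.pi * x_train)

# Fitting the linearized network is just a linear least-squares problem.
coef, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

x_test = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(features(x_test) @ coef - np.sin(3 * np.pi * x_test)))
print(f"sup-norm error with {n_neurons} fixed ReLU^{k} neurons: {err:.2e}")
```

In this sketch only the outer coefficients are learned, which is what distinguishes the linearized model from a fully nonlinear shallow network where the inner weights and biases are also trained.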