Caltech/UCLA Joint Analysis Seminar

Friday, May 17, 2019
5:00pm to 5:50pm
Random Vector Functional Link Neural Networks as Universal Approximators
Palina Salanevich, Department of Mathematics, UCLA
UCLA MS 6627

Single layer feedforward neural networks (SLFN) have been widely applied to problems such as classification and regression because of their universal approximation capability. At the same time, the iterative methods usually used to train SLFNs suffer from slow convergence, can get trapped in local minima, and are sensitive to the choice of parameters. The Random Vector Functional Link network (RVFL) is a randomized version of the SLFN: the weights from the input layer to the hidden layer are drawn at random from a suitable domain and kept fixed during learning. This way, only the output layer weights are optimized, which makes learning much easier and computationally cheaper. Igelnik and Pao proved that the RVFL network is a universal approximator for continuous functions on bounded finite-dimensional sets. In this talk, we provide a non-asymptotic bound on the approximation error in terms of the number of nodes in the hidden layer, and discuss an extension of the Igelnik and Pao result to the case when the data is assumed to lie on a lower-dimensional manifold.
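The training scheme described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the construction analyzed in the talk: the hidden-layer weights and biases are drawn at random and frozen, a direct input-to-output link is included (as is standard for RVFL), and only the output weights are fit by linear least squares. The sampling domain, activation, and toy target function below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: a continuous function on the bounded set [0, 1].
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()

# Hidden layer: input-to-hidden weights and biases are drawn at random
# (domain [-5, 5] is an illustrative choice) and kept fixed throughout.
n_hidden = 50
W = rng.uniform(-5, 5, size=(X.shape[1], n_hidden))
b = rng.uniform(-5, 5, size=n_hidden)
H = np.tanh(X @ W + b)  # random nonlinear features

# RVFL also passes the raw input directly to the output (direct link).
D = np.hstack([H, X])

# Only the output weights are trained, via a single least-squares solve.
beta, *_ = np.linalg.lstsq(D, y, rcond=None)

pred = D @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Because the optimization reduces to one linear least-squares problem, there is no iterative descent, and hence none of the convergence or local-minimum issues mentioned above; the approximation quality instead depends on the number of hidden nodes, which is the quantity the talk's non-asymptotic bound controls.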

For more information, please contact Math Department by phone at 626-395-4335 or by email at [email protected].