One Step Strategy for Learning RBF Network Parameters

Mladen Široki


In this paper a new, one-step strategy for learning Radial Basis Function (RBF) network parameters is proposed. In the RBF network model developed by Poggio and Girosi, three modifiable sets of parameters have to be determined during the learning stage: the positions of the centers t, the weighted norms ||x - t||_W^2, and the output layer weights c. The authors suggest that these parameters be set by some iterative nonlinear optimization method, such as gradient descent, conjugate gradient, or simulated annealing. The basic idea of this work is that, if the hidden layer radial basis functions are chosen to be multivariate Gaussian functions, the unknown parameters can be learned from the training set much faster, in a single step, by well-known statistical methods than by iterative optimization. In this approach the positions of the centers are learned by the K-means clustering method, the weighted norms are calculated as Mahalanobis distances between x and t, and the optimal output layer weights are found by pseudoinversion. The calculation of the Mahalanobis distances involves estimating the hidden units' covariance matrices Σ, which replace the weighting matrices W. Two classification examples illustrate the usefulness of the method.
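The three-step procedure summarized in the abstract (K-means for the centers, per-cluster covariance estimates for the Mahalanobis distances, pseudoinversion for the output weights) can be sketched as follows. This is a minimal illustration assuming NumPy and scikit-learn; the function names, the regularization term `reg`, and the one-hot target encoding are choices made here for the example, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def hidden_activations(X, centers, covs):
    """Gaussian RBF activations using the Mahalanobis distance to each center."""
    H = np.empty((len(X), len(centers)))
    for j, (t, S) in enumerate(zip(centers, covs)):
        d = X - t
        Sinv = np.linalg.inv(S)
        # d_n^T * Sinv * d_n for every sample n, i.e. squared Mahalanobis distance
        H[:, j] = np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, Sinv, d))
    return H

def train_rbf(X, Y, n_centers, reg=1e-3):
    dim = X.shape[1]
    # Step 1: positions of the centers t by K-means clustering.
    km = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X)
    centers = km.cluster_centers_
    # Step 2: covariance matrix Sigma of each hidden unit, estimated from the
    # samples assigned to its cluster (small ridge added for invertibility).
    covs = []
    for j in range(n_centers):
        pts = X[km.labels_ == j]
        S = np.cov(pts, rowvar=False) if len(pts) > dim else np.eye(dim)
        covs.append(S + reg * np.eye(dim))
    # Step 3: output layer weights c by pseudoinversion (one linear solve).
    H = hidden_activations(X, centers, covs)
    C = np.linalg.pinv(H) @ Y
    return centers, covs, C
```

For classification, the targets Y can be one-hot class indicators; a new sample is then assigned the class with the largest network output `hidden_activations(x, centers, covs) @ C`. Note that no iteration over the whole parameter set is needed: each stage is solved once, in order.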


Keywords: Neural Networks, Radial Basis Function Networks, Learning, Classification



This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
