Training neural network classifiers through Bayes risk minimization applying unidimensional Parzen windows

publication date

  • May 2018

start page

  • 204

end page

  • 215

volume

  • 77

International Standard Serial Number (ISSN)

  • 0031-3203

Electronic International Standard Serial Number (EISSN)

  • 1873-5142

abstract

  • A new training algorithm for neural networks in binary classification problems is presented. It is based on minimizing an estimate of the Bayes risk, using Parzen windows applied to the final one-dimensional nonlinear transformation of the samples to estimate the probability of classification error. This leads to a very general approach to error minimization and training, in which the risk to be minimized is defined in terms of integrated one-dimensional Parzen windows and the gradient descent algorithm depends on the chosen window. By relaxing the constraints typically imposed on Parzen windows in probability density function estimation, for example by allowing them to be non-symmetric or of infinite duration, an entirely new set of training algorithms emerges. In particular, different Parzen windows lead to different cost functions, and some interesting relationships with classical training methods are uncovered. Experiments with synthetic and real benchmark datasets show that, with a window appropriately fitted to the specific problem, neural network classifiers can outperform those trained with classical methods.
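
illustrative sketch

  • The recipe described in the abstract — estimate the probability of classification error by applying an integrated one-dimensional Parzen window to the network's scalar output, then run gradient descent on that estimate — can be made concrete. The sketch below is a minimal illustration under assumptions of our own, not the paper's exact formulation: it picks a logistic (sigmoid-shaped) integrated window, a one-hidden-layer tanh network, and a fixed window width h; all names and hyperparameters are hypothetical.

```python
import numpy as np

# Minimal sketch (assumptions ours): minimize a Parzen-window estimate
# of the classification error probability on a network's scalar output.
# A logistic integrated window is used here; the paper studies a whole
# family of windows, each yielding a different cost function.

rng = np.random.default_rng(0)

def error_estimate(margin, h):
    # Integrated logistic window of width h evaluated at -margin:
    # a smooth surrogate for the 0/1 error indicator 1{margin < 0}.
    return 1.0 / (1.0 + np.exp(margin / h))

def forward(X, W1, b1, w2, b2):
    H = np.tanh(X @ W1 + b1)   # hidden layer
    y = H @ w2 + b2            # one-dimensional output per sample
    return H, y

def risk_and_grads(X, t, W1, b1, w2, b2, h):
    # t in {-1, +1}; estimated risk = mean smoothed per-sample error.
    H, y = forward(X, W1, b1, w2, b2)
    e = error_estimate(t * y, h)
    risk = e.mean()
    dy = -t * e * (1.0 - e) / (h * len(t))   # d(risk)/dy for this window
    gw2, gb2 = H.T @ dy, dy.sum()
    dH = np.outer(dy, w2) * (1.0 - H ** 2)   # backprop through tanh
    gW1, gb1 = X.T @ dH, dH.sum(axis=0)
    return risk, (gW1, gb1, gw2, gb2)

# Toy binary problem: two Gaussian clouds labelled -1 and +1.
N = 200
X = np.vstack([rng.normal(-1.0, 1.0, (N, 2)), rng.normal(1.0, 1.0, (N, 2))])
t = np.concatenate([-np.ones(N), np.ones(N)])

W1 = 0.1 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
w2 = 0.1 * rng.standard_normal(8); b2 = 0.0
lr, h = 0.5, 0.5

for _ in range(500):   # plain gradient descent on the estimated risk
    risk, (gW1, gb1, gw2, gb2) = risk_and_grads(X, t, W1, b1, w2, b2, h)
    W1 -= lr * gW1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

print(f"estimated error probability after training: {risk:.3f}")
```

  • With the logistic window, the per-sample cost is a smooth, bounded surrogate for the 0/1 loss; swapping in a different window changes the cost function, which is the degree of freedom the abstract highlights when it relates different windows to classical training methods.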

subjects

  • Telecommunications

keywords

  • Bayes risk; Parzen windows; binary classification