Towards a mathematical framework to inform neural network modelling via polynomial regression

publication date

  • October 2021

start page

  • 57

end page

  • 72

volume

  • 142

International Standard Serial Number (ISSN)

  • 0893-6080

Electronic International Standard Serial Number (EISSN)

  • 1879-2782

abstract

  • Although neural networks are widely used in a large number of applications, they are still regarded as black boxes and are difficult to dimension and to evaluate in terms of prediction error. This has led to increasing interest in the overlap between neural networks and more traditional statistical methods, which can help overcome those problems. In this article, a mathematical framework relating neural networks and polynomial regression is explored by building an explicit expression for the coefficients of a polynomial regression from the weights of a given neural network, using a Taylor expansion approach. This is achieved for single-hidden-layer neural networks in regression problems. The validity of the proposed method depends on factors such as the distribution of the synaptic potentials and the chosen activation function. The performance of the method is tested empirically by simulating synthetic data generated from polynomials, training neural networks with different structures and hyperparameters on these data, and showing that almost identical predictions can be obtained when certain conditions are met. Lastly, when learning from polynomial-generated data, the proposed method produces polynomials that correctly approximate the data locally.
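The core idea described in the abstract can be illustrated with a minimal sketch (this is an illustration of the general technique, not the authors' exact construction): for a single-hidden-layer network f(x) = c + Σₖ vₖ·tanh(wₖx + bₖ) with scalar input, each activation is replaced by a truncated Taylor series around 0 and the binomial theorem is used to collect an explicit polynomial coefficient for every power of x. The function name `poly_from_nn` and the toy weights are assumptions for the example; the approximation is only expected to hold when the synaptic potentials wₖx + bₖ stay small, matching the abstract's caveat about their distribution.

```python
import numpy as np
from math import comb

# Taylor coefficients of tanh around 0: tanh(z) = z - z^3/3 + 2z^5/15 - 17z^7/315 + ...
TANH_TAYLOR = [0.0, 1.0, 0.0, -1.0 / 3, 0.0, 2.0 / 15, 0.0, -17.0 / 315]

def poly_from_nn(w, b, v, c, taylor=TANH_TAYLOR):
    """Explicit polynomial coefficients (ascending powers of x) for
    f(x) = c + sum_k v_k * tanh(w_k * x + b_k), obtained by truncating
    the Taylor expansion of tanh and expanding (w_k x + b_k)^n binomially."""
    deg = len(taylor) - 1
    coeffs = np.zeros(deg + 1)
    coeffs[0] = c
    for wk, bk, vk in zip(w, b, v):
        for n, a_n in enumerate(taylor):
            if a_n == 0.0:
                continue
            # (w x + b)^n = sum_j C(n, j) * w^j * b^(n-j) * x^j
            for j in range(n + 1):
                coeffs[j] += vk * a_n * comb(n, j) * wk**j * bk ** (n - j)
    return coeffs

# Toy network with small weights, so the synaptic potentials stay
# well inside the region where the truncated series is accurate.
w = np.array([0.3, -0.2])
b = np.array([0.1, 0.05])
v = np.array([0.8, 1.1])
c = 0.2

p = poly_from_nn(w, b, v, c)
x = 0.4
nn_out = c + np.sum(v * np.tanh(w * x + b))          # network prediction
poly_out = np.polyval(p[::-1], x)                    # polynomial prediction
```

For these small synaptic potentials the two predictions agree to many decimal places, mirroring the "almost identical predictions" reported in the abstract; with larger weights or inputs the truncated expansion degrades and the polynomial is only a local approximation.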

subjects

  • Statistics

keywords

  • machine learning; neural networks; polynomial regression