Supervised data transformation and dimensionality reduction with a 3-layer multi-layer perceptron for classification problems

publication date

  • January 2021

International Standard Serial Number (ISSN)

  • 1868-5137

Electronic International Standard Serial Number (EISSN)

  • 1868-5145

abstract

  • The aim of data transformation is to map the original feature space of the data into another space with better properties. This is typically combined with dimensionality reduction, so that the transformed space has fewer dimensions. A widely used method for data transformation and dimensionality reduction is Principal Component Analysis (PCA), which finds a subspace that explains most of the data variance. While the PCA feature space has interesting properties, such as removing linear correlation, PCA is an unsupervised method, so there is no guarantee that its feature space will be the most appropriate for supervised tasks such as classification or regression. On the other hand, 3-layer Multi-Layer Perceptrons (MLPs), which are supervised methods, can also be understood as a data transformation carried out by the hidden layer, followed by a classification/regression operation performed by the output layer. Given that the hidden layer is obtained through a supervised training process, it can be regarded as performing a supervised data transformation; if, in addition, the number of hidden neurons is smaller than the number of inputs, it also performs dimensionality reduction. Although this kind of transformation is widely available (any neural network package that gives access to the hidden-layer weights can be used), no extensive experimentation on the quality of 3-layer MLP data transformations has been carried out. The aim of this article is to carry out this research for classification problems. Results show that, overall, this transformation offers better results than the unsupervised PCA transformation.
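The hidden-layer transformation described in the abstract can be sketched with scikit-learn. This is a minimal illustration of the idea, not the authors' experimental setup: the dataset (Iris), activation function, and hidden-layer size are assumptions chosen for demonstration.

```python
# Sketch: using a trained 3-layer MLP's hidden layer as a supervised data
# transformation with dimensionality reduction (illustrative assumptions:
# Iris data, tanh activation, 2 hidden neurons).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# One hidden layer with fewer neurons than input features (4 -> 2),
# so the learned transformation also reduces dimensionality.
mlp = MLPClassifier(hidden_layer_sizes=(2,), activation="tanh",
                    max_iter=2000, random_state=0)
mlp.fit(X, y)

# The hidden-layer mapping Z = tanh(X W + b), built from the weights the
# supervised training produced; Z is the transformed feature space.
W, b = mlp.coefs_[0], mlp.intercepts_[0]
Z = np.tanh(X @ W + b)

print(X.shape, "->", Z.shape)
```

Because the weights `W` and biases `b` come from supervised training, the resulting 2-dimensional space `Z` is shaped by the class labels, unlike a PCA projection of the same data.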