Generating ensembles of heterogeneous classifiers using Stacked Generalization
Overview
published in
- WIREs Data Mining and Knowledge Discovery
publication date
- February 2015
start page
- 21
end page
- 34
issue
- 1
volume
- 5
Digital Object Identifier (DOI)
- 10.1002/widm.1143
International Standard Serial Number (ISSN)
- 1942-4787
Electronic International Standard Serial Number (EISSN)
- 1942-4795
abstract
- Over the last two decades, the machine learning and related communities have conducted numerous studies on improving the performance of a single classifier by combining several classifiers generated from one or more learning algorithms. Bagging and Boosting are the most representative examples of algorithms for generating homogeneous ensembles of classifiers. However, since Wolpert presented his study entitled Stacked Generalization in 1992, Stacking has become a commonly used technique for generating ensembles of heterogeneous classifiers. Studies addressing Stacking have demonstrated that the selection of the base learning algorithms that generate the ensemble members, their learning parameters, and the learning algorithm that generates the meta-classifier are all critical issues. Most studies on this topic select the appropriate combination of base learning algorithms and their learning parameters manually; other methods instead determine good Stacking configurations automatically, rather than starting from these strong initial assumptions. In this paper, we describe Stacking and its variants and present several examples of application domains. WIREs Data Mining Knowl Discov 2015, 5:21-34. doi: 10.1002/widm.1143 Conflict of interest: The authors have declared no conflicts of interest for this article.
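A minimal sketch of the two-level Stacking scheme the abstract describes, using scikit-learn's StackingClassifier: heterogeneous base classifiers are combined by a meta-classifier trained on their cross-validated predictions. The particular base learners, their parameters, and the dataset below are illustrative assumptions, not configurations drawn from the article.

```python
# Minimal Stacking sketch (assumes scikit-learn >= 0.22; all choices below are illustrative).
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base level: heterogeneous classifiers generated by different learning algorithms.
base_learners = [
    ("tree", DecisionTreeClassifier(max_depth=3)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("nb", GaussianNB()),
]

# Meta level: a classifier trained on the base classifiers' cross-validated predictions.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # internal cross-validation builds the meta-level training data
)

stack.fit(X_train, y_train)
print("stacked accuracy:", stack.score(X_test, y_test))
```

Note that picking which base learners, which parameters, and which meta-classifier to use is exactly the configuration problem the abstract highlights; the automatic methods it mentions search over these choices instead of fixing them by hand.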
Classification
subjects
- Computer Science
- Statistics
keywords
- combining classifiers; feature-selection; decision trees; classification; prediction; algorithm; accuracy