Fast Error Estimation for Efficient Support Vector Machine Growing
Overview
published in
- NEUROCOMPUTING Journal
publication date
- January 2010
start page
- 1018
end page
- 1023
issue
- 4-6
volume
- 73
International Standard Serial Number (ISSN)
- 0925-2312
Electronic International Standard Serial Number (EISSN)
- 1872-8286
abstract
- Support vector machines (SVMs) have become an off-the-shelf solution for many machine learning tasks but, unfortunately, the size of the resulting machines is quite often exceedingly large, which hampers their use in practical applications that demand extremely fast responses. Methods exist to prune the models after training, but a full SVM model must be trained first, which usually represents a large computational cost. Furthermore, the reduction algorithms are prone to falling into local minima and also incur an additional, non-negligible computational cost. Alternative procedures based on incrementally growing a semiparametric model provide a good compromise between complexity, machine size and performance. We investigate here the potential benefits of a fast error estimation (FEE) mechanism to improve the semiparametric SVM growing process. Specifically, we propose to use the FEE method to identify the best node to add to the model at every growing step, by selecting the candidate with the lowest cross-validation error. We evaluate the performance of the proposed algorithm on benchmarks with real-world datasets from the UCI machine learning repository.
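
The growing step described in the abstract lends itself to a compact illustration. The sketch below is only a rough approximation under assumed details that the abstract does not specify: candidate nodes are drawn from the training set, the model is a plain RBF kernel expansion fitted by regularized least squares (a stand-in for the actual semiparametric SVM training), and a simple k-fold routine plays the role of the fast error estimate. Function and parameter names such as `cv_error`, `gamma` and `n_candidates` are hypothetical.

```python
# Illustrative sketch only: greedy growing of a kernel expansion where, at each
# step, candidate nodes are scored by a cross-validation error estimate and the
# best one is added to the model. Not the paper's actual FEE or SVM training.
import numpy as np

def rbf(X, centers, gamma=1.0):
    """Kernel matrix between samples X and the current set of centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_weights(X, y, centers, gamma=1.0, reg=1e-3):
    """Regularized least-squares weights for the kernel expansion (a simplification)."""
    K = rbf(X, centers, gamma)
    A = K.T @ K + reg * np.eye(len(centers))
    return np.linalg.solve(A, K.T @ y)

def cv_error(X, y, centers, gamma=1.0, folds=5):
    """k-fold estimate of classification error for a given set of centers."""
    idx = np.arange(len(X))
    np.random.default_rng(0).shuffle(idx)
    errs = []
    for part in np.array_split(idx, folds):
        train = np.setdiff1d(idx, part)
        w = fit_weights(X[train], y[train], centers, gamma)
        pred = np.sign(rbf(X[part], centers, gamma) @ w)
        errs.append(np.mean(pred != y[part]))
    return float(np.mean(errs))

def grow_model(X, y, max_nodes=10, n_candidates=20, gamma=1.0):
    """Greedy growing: add, one node at a time, the candidate with lowest CV error."""
    rng = np.random.default_rng(1)
    centers = np.empty((0, X.shape[1]))
    for _ in range(max_nodes):
        cand_idx = rng.choice(len(X), size=n_candidates, replace=False)
        scores = [cv_error(X, y, np.vstack([centers, X[[i]]]), gamma)
                  for i in cand_idx]
        best = cand_idx[int(np.argmin(scores))]
        centers = np.vstack([centers, X[[best]]])
    return centers, fit_weights(X, y, centers, gamma)
```

In the method the abstract describes, the per-candidate error would come from the fast error estimation mechanism rather than from refitting the model for every fold, which is exactly the cost this naive sketch does not avoid.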