Electronic International Standard Serial Number (EISSN)
1872-8286
Abstract
Hyperparameter optimization (HPO) is a vital step in machine learning (ML) for enhancing model performance. However, the vast and complex nature of the search space makes HPO both challenging and resource-intensive. Automatic HPO methods have demonstrated their ability to efficiently explore high-dimensional hyperparameter spaces and identify optimal solutions, but training and evaluating the model for each set of hyperparameters remains computationally expensive. To further reduce this computational cost, we propose a novel strategy that wraps the HPO process and terminates it based on the sequence of hyperparameters evaluated. The algorithm is inspired by the classic secretary problem, with two additional variations to better adapt it to the HPO process. We evaluated the algorithm using popular HPO samplers, including Random Search (RS), Tree-structured Parzen Estimator (TPE), Bayesian Optimization with Gaussian Processes (BOGP), Genetic Algorithms (GA), and Particle Swarm Optimization (PSO). Results indicate that the proposed algorithm accelerates the HPO process by an average of 34%, with only a minimal trade-off of 8% in solution quality. The algorithm is straightforward to implement, compatible with any HPO setup, and particularly effective in the early stages of optimization. This makes it a valuable tool for practitioners aiming to quickly identify promising hyperparameters or to reduce the search space, significantly cutting down the time and computational resources required.
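To make the stopping idea concrete, the sketch below wraps a generic HPO loop with the classic secretary-problem cutoff rule: observe the first n/e trials without stopping, then terminate at the first trial that beats every score seen so far. This is a minimal illustration only; the function names (objective, sample_config), the fixed trial budget, and the minimization setup are assumptions, and the paper's two additional variations are not reproduced here.

```python
import math
import random


def secretary_stop_hpo(objective, sample_config, budget, seed=0):
    """Run a simple HPO loop with a secretary-problem-style stopping rule.

    Classic rule: spend the first floor(budget / e) trials only observing,
    then stop at the first trial whose score improves on all previous ones.
    Assumes a minimization objective; all names are illustrative.
    """
    rng = random.Random(seed)
    observe = max(1, math.floor(budget / math.e))  # length of the observation phase
    best_score, best_config = float("inf"), None

    for trial in range(budget):
        config = sample_config(rng)   # e.g. a random-search sampler
        score = objective(config)     # train/evaluate the model (the expensive step)
        if score < best_score:
            best_score, best_config = score, config
            # After the observation phase, accept the first new best and stop early.
            if trial >= observe:
                break
    return best_config, best_score, trial + 1


# Toy usage: tune a single hyperparameter of a synthetic quadratic "loss".
best_cfg, best_val, trials_used = secretary_stop_hpo(
    objective=lambda cfg: (cfg["lr"] - 0.1) ** 2,
    sample_config=lambda rng: {"lr": rng.uniform(1e-4, 1.0)},
    budget=100,
)
```

Because the wrapper only inspects the sequence of evaluated scores, the same pattern could in principle sit on top of any sampler (RS, TPE, BOGP, GA, PSO) by replacing sample_config with the sampler's suggestion step.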