
Published in

The 2010 International Joint Conference on Neural Networks (IJCNN)

DOI: 10.1109/ijcnn.2010.5596915


Ensemble particle swarm model selection

Proceedings article published in 2010 by Hugo Jair Escalante, Manuel Montes, Enrique Sucar
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

This paper elaborates on the benefits of using particle swarm model selection (PSMS) for building effective ensemble classification models. PSMS searches a toolbox for the best combination of methods for preprocessing, feature selection, and classification for generic binary classification tasks. Throughout the search process, PSMS evaluates a wide variety of models, from which a single solution (i.e., the best classification model) is selected. Satisfactory results have been reported with this formulation in several domains. However, many models that are potentially useful for classification are discarded when the final model is selected. In this paper we propose to reuse such candidate models to build effective ensemble classifiers. We explore three simple formulations for building ensembles from intermediate PSMS solutions that require no computation beyond that of the traditional PSMS implementation. We report experimental results on benchmark data as well as on a data set from object recognition. Our results show that better models can be obtained with the ensemble version of PSMS, motivating further research on the combination of candidate PSMS models. Additionally, we analyze the diversity of the classification models, which is known to be an important factor in the construction of ensembles.
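To make the core idea concrete, the sketch below illustrates one generic way to reuse candidate models that were already evaluated during a model-selection search: keep each fitted candidate together with its validation score, then combine the top-scoring ones by a weighted vote. This is a minimal illustration only, not a reproduction of the paper's three formulations; the scikit-learn estimators, the choice of k, and the accuracy-weighted voting rule are all assumptions made for the example.

```python
# Minimal sketch: build an ensemble from candidate models that a swarm-based
# model-selection run would have evaluated anyway (here simulated by a small
# pool of classifiers fitted on the same training split).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# Stand-in for the intermediate solutions explored during the search:
# each entry pairs a fitted model with the validation score recorded for it.
candidates = []
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(max_depth=3, random_state=0),
              KNeighborsClassifier(n_neighbors=5)):
    model.fit(X_tr, y_tr)
    candidates.append((model.score(X_val, y_val), model))

# One simple combination rule (an assumption, not the paper's method):
# weighted majority vote over the k best candidates, weighted by accuracy.
k = 3
top_k = sorted(candidates, key=lambda c: c[0], reverse=True)[:k]

def ensemble_predict(X_new):
    votes = np.zeros((len(X_new), 2))  # binary task: one vote bin per class
    for weight, model in top_k:
        preds = model.predict(X_new)
        votes[np.arange(len(X_new)), preds] += weight
    return votes.argmax(axis=1)

print("ensemble validation accuracy:", np.mean(ensemble_predict(X_val) == y_val))
```

Because the candidates are by-products of the search, building the ensemble in this way adds only the cost of storing the fitted models and tallying their votes, which mirrors the abstract's point that no computation beyond the original search is required.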