Published in

2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence)

DOI: 10.1109/ijcnn.2008.4634114

Researching on Combining Boosting Ensembles

This paper is available in a repository.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

As shown in the bibliography, training an ensemble of networks is an effective way to improve performance with respect to a single network. The two key factors in designing an ensemble are how to train the individual networks and how to combine their outputs into a single output. Boosting is a well-known methodology for building an ensemble. Some boosting methods use a specific combiner (the Boosting Combiner) based on the accuracy of each network. Although the Boosting Combiner provides good results on boosting ensembles, the simple Output Average combiner worked better in three new boosting methods we proposed in previous papers. In this paper, we study the performance of sixteen different combination methods on ensembles previously trained with Adaptive Boosting and Average Boosting, in order to determine which combiner best fits these ensembles. The results show that the accuracy of ensembles trained with these original boosting methods can be improved by using an appropriate alternative combiner. In fact, the Output Average and the Weighted Average provide the best results in most cases on low/medium-sized ensembles.
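To make the contrast in the abstract concrete, the sketch below illustrates the three combiners it names: an accuracy-based Boosting Combiner, the Output Average, and the Weighted Average. This is a minimal illustration under assumptions, not the paper's implementation: the AdaBoost-style log(1/beta) vote weighting, the array shapes, and all function and variable names are hypothetical.

```python
import numpy as np

def boosting_combiner(outputs, betas):
    """Boosting Combiner sketch: each network votes for its predicted
    class with weight log(1/beta_t), where beta_t is assumed to encode
    that network's training error (AdaBoost convention)."""
    n_nets, n_samples, n_classes = outputs.shape
    weights = np.log(1.0 / np.asarray(betas, dtype=float))  # more accurate -> larger weight
    votes = np.zeros((n_samples, n_classes))
    for t in range(n_nets):
        preds = outputs[t].argmax(axis=1)                   # class chosen by network t
        votes[np.arange(n_samples), preds] += weights[t]
    return votes.argmax(axis=1)

def output_average(outputs):
    """Output Average: mean of the raw network outputs, then argmax."""
    return outputs.mean(axis=0).argmax(axis=1)

def weighted_average(outputs, weights):
    """Weighted Average: normalized per-network weights applied to outputs."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, outputs, axes=(0, 0)).argmax(axis=1)

# Toy usage: 3 networks, 4 samples, 2 classes.
rng = np.random.default_rng(0)
outputs = rng.random((3, 4, 2))       # shape: (n_nets, n_samples, n_classes)
betas = [0.2, 0.3, 0.4]
print(boosting_combiner(outputs, betas))
print(output_average(outputs))
print(weighted_average(outputs, 1.0 - np.asarray(betas)))
```

The difference the paper exploits is visible in the signatures: the Boosting Combiner discards each network's soft outputs and keeps only its weighted hard vote, while the averaging combiners retain the full output vectors, which is one plausible reason the simpler averages can outperform it on these ensembles.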