Published in

2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)

DOI: 10.1109/ijcnn.2004.1379967

A sparse least squares support vector machine classifier

Journal article published in 2004 by X.-M. Liu, J. Zhang, B. Kong, József Valyon, J.-B. Gao, G. Horvath
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

In the last decade, Support Vector Machines (SVMs), introduced by Vapnik, have been successfully applied to a large number of problems. More recently a new technique, the Least Squares SVM (LS-SVM), has been introduced, which addresses classification and regression problems by formulating a set of linear equations. Compared to the original SVM, which requires solving a quadratic programming task, the LS-SVM simplifies the required computation, but unfortunately the sparseness of the standard SVM is lost. The linear equation set of the LS-SVM embodies all available information about the learning process. By applying modifications to this equation set, we present a Least Squares version of the Least Squares Support Vector Machine (LS2-SVM). The modifications simplify the formulation, speed up the calculation and provide better results, but most importantly, they yield a sparse solution.
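The key point of the abstract is that LS-SVM training reduces to solving one linear system rather than a quadratic program. The following is a minimal sketch of that training step, assuming Suykens' standard LS-SVM classifier formulation with an RBF kernel; the function names and parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the row vectors of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM dual as one linear system (Suykens' formulation):
    #   [ 0        y^T          ] [  b  ]   [ 0 ]
    #   [ y   Omega + I/gamma   ] [alpha] = [ 1 ]
    # with Omega_kl = y_k * y_l * K(x_k, x_l).
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, y_train, b, alpha, X_new, sigma=1.0):
    # Decision function: sign( sum_k alpha_k y_k K(x, x_k) + b )
    K = rbf_kernel(X_new, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)
```

Note that every training point receives a (generally nonzero) alpha in this system, which is exactly the loss of sparseness the abstract refers to; the proposed LS2-SVM modifies this equation set to recover a sparse solution.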