Training SVM with indefinite kernels

Jianhui Chen, Jieping Ye

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

43 Citations (Scopus)

Abstract

Similarity matrices generated in many applications may not be positive semidefinite and therefore cannot be used directly within the kernel machine framework. In this paper, we study the problem of training support vector machines with an indefinite kernel. We consider a regularized SVM formulation in which the indefinite kernel matrix is treated as a noisy observation of an unknown positive semidefinite kernel (the proxy kernel), and the support vectors and the proxy kernel are computed simultaneously. We propose a semi-infinite quadratically constrained linear program formulation for the optimization, which can be solved iteratively to find a globally optimal solution. We further propose an additional pruning strategy that significantly improves the efficiency of the algorithm while retaining its convergence property. In addition, we show the close relationship between the proposed formulation and multiple kernel learning. Experiments on a collection of benchmark data sets demonstrate the efficiency and effectiveness of the proposed algorithm.
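The regularized formulation described in the abstract can be sketched as a joint optimization over the SVM dual variables and a positive semidefinite proxy kernel that is penalized for deviating from the observed indefinite similarity matrix. A minimal sketch of such an objective follows; the notation (K_0 for the indefinite similarity matrix, Y = diag(y), rho > 0 for the regularization weight, and the dual feasible set A) is assumed here for illustration and is not taken verbatim from the paper.

% Sketch of a regularized proxy-kernel SVM objective (requires amsmath, amssymb).
% Notation assumed for illustration; not the paper's exact formulation.
% The standard SVM dual in \alpha is combined with a minimization over PSD
% proxy kernels K, penalized for deviating from the observed indefinite K_0.
\begin{equation*}
  \max_{\alpha \in \mathcal{A}} \; \min_{K \succeq 0} \;
    \alpha^{\top}\mathbf{1}
    - \tfrac{1}{2}\,\alpha^{\top} Y K Y \alpha
    + \rho\,\lVert K - K_{0} \rVert_{F}^{2},
  \qquad
  \mathcal{A} = \{\alpha : 0 \le \alpha \le C,\; y^{\top}\alpha = 0\}.
\end{equation*}

Eliminating the inner minimization over K is, roughly, what leads to a semi-infinite quadratically constrained linear program of the kind the abstract refers to, which is then solved iteratively.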

Original language: English (US)
Title of host publication: Proceedings of the 25th International Conference on Machine Learning
Pages: 136-143
Number of pages: 8
State: Published - 2008
Event: 25th International Conference on Machine Learning - Helsinki, Finland
Duration: Jul 5, 2008 - Jul 9, 2008

Other

Other: 25th International Conference on Machine Learning
Country: Finland
City: Helsinki
Period: 7/5/08 - 7/9/08

Fingerprint

  • Support vector machines
  • Experiments

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Software

Cite this

Chen, J., & Ye, J. (2008). Training SVM with indefinite kernels. In Proceedings of the 25th International Conference on Machine Learning (pp. 136-143)

@inproceedings{5a99ad0e490e4124964a7455232930c2,
title = "Training SVM with indefinite kernels",
abstract = "Similarity matrices generated from many applications may not be positive semidefinite, and hence can't fit into the kernel machine framework. In this paper, we study the problem of training support vector machines with an indefinite kernel. We consider a regularized SVM formulation, in which the indefinite kernel matrix is treated as a noisy observation of some unknown positive semidefinite one (proxy kernel) and the support vectors and the proxy kernel can be computed simultaneously. We propose a semi-infinite quadratically constrained linear program formulation for the optimization, which can be solved iteratively to find a global optimum solution. We further propose to employ an additional pruning strategy, which significantly improves the efficiency of the algorithm, while retaining the convergence property of the algorithm. In addition, we show the close relationship between the proposed formulation and multiple kernel learning. Experiments on a collection of benchmark data sets demonstrate the efficiency and effectiveness of the proposed algorithm.",
author = "Jianhui Chen and Jieping Ye",
year = "2008",
language = "English (US)",
isbn = "9781605582054",
pages = "136--143",
booktitle = "Proceedings of the 25th International Conference on Machine Learning",

}
