TY - GEN
T1 - A least squares formulation for a class of generalized eigenvalue problems in machine learning
AU - Sun, Liang
AU - Ji, Shuiwang
AU - Ye, Jieping
PY - 2009
Y1 - 2009
N2 - Many machine learning algorithms can be formulated as a generalized eigenvalue problem. One major limitation of such a formulation is that the generalized eigenvalue problem is computationally expensive to solve, especially for large-scale problems. In this paper, we show that under a mild condition, a class of generalized eigenvalue problems in machine learning can be formulated as a least squares problem. This class of problems includes classical techniques such as Canonical Correlation Analysis (CCA), Partial Least Squares (PLS), and Linear Discriminant Analysis (LDA), as well as Hypergraph Spectral Learning (HSL). As a result, various regularization techniques can be readily incorporated into the formulation to improve model sparsity and generalization ability. In addition, the least squares formulation leads to efficient and scalable implementations based on iterative conjugate-gradient-type algorithms. We report experimental results that confirm the established equivalence relationship. The results also demonstrate the efficiency and effectiveness of the equivalent least squares formulations on large-scale problems.
UR - http://www.scopus.com/inward/record.url?scp=70049097964&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=70049097964&partnerID=8YFLogxK
U2 - 10.1145/1553374.1553499
DO - 10.1145/1553374.1553499
M3 - Conference contribution
AN - SCOPUS:70049097964
SN - 9781605585161
T3 - ACM International Conference Proceeding Series
BT - Proceedings of the 26th Annual International Conference on Machine Learning, ICML'09
T2 - 26th Annual International Conference on Machine Learning, ICML'09
Y2 - 14 June 2009 through 18 June 2009
ER -