Fast and accurate matrix completion via truncated nuclear norm regularization

Yao Hu, Debing Zhang, Jieping Ye, Xuelong Li, Xiaofei He

Research output: Contribution to journal › Article

235 Citations (Scopus)

Abstract

Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of existing approaches based on nuclear norm minimization is that all the singular values are minimized simultaneously, so the rank may not be well approximated in practice. In this paper, we propose to better approximate the rank of a matrix with the truncated nuclear norm, defined as the nuclear norm minus the sum of the largest few singular values. Building on this, we develop a novel matrix completion algorithm that minimizes the truncated nuclear norm, together with three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, for solving the resulting optimization problem. TNNR-ADMM uses the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL). TNNR-ADMMAP adopts an adaptive penalty with a novel update rule for ADMM to achieve faster convergence. Our empirical study shows encouraging results for the proposed algorithms compared with state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.
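The truncated nuclear norm the abstract defines is straightforward to state in code. Below is a minimal NumPy sketch (ours, for illustration; not the authors' implementation) of ||X||_r, the nuclear norm minus the sum of the r largest singular values, together with a small demonstration of why minimizing it promotes rank at most r: a rank-r matrix incurs zero penalty, whereas the plain nuclear norm also shrinks the r dominant singular values that carry the signal.

    import numpy as np

    def truncated_nuclear_norm(X, r):
        # ||X||_r = ||X||_* - (sum of the r largest singular values),
        # i.e. the sum of all singular values beyond the r-th.
        s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
        return float(s[r:].sum())

    rng = np.random.default_rng(0)
    L = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))  # rank 3
    print(truncated_nuclear_norm(L, 3))              # ~0: rank-3 goes unpenalized
    print(np.linalg.svd(L, compute_uv=False).sum())  # full nuclear norm: > 0

The abstract does not spell out the TNNR-ADMM iterations, but the objective admits the standard variational form ||X||_r = ||X||_* - max_{AA^T = I, BB^T = I} Tr(A X B^T), with the maximum attained at the top-r singular vectors of X. A schematic solver in that spirit (the update rules, penalty beta, and iteration counts below are our assumptions, not the paper's exact algorithm) alternates between fixing (A, B) from the SVD of the current estimate and running ADMM on the resulting convex subproblem:

    def svt(Z, tau):
        # Singular value thresholding: the proximal operator of tau * ||.||_*.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def tnnr_admm(M, mask, r, beta=1.0, outer=10, inner=100):
        # M: data matrix; mask: boolean array of observed entries; r: truncation rank.
        X = np.where(mask, M, 0.0)                   # start from the observed entries
        for _ in range(outer):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            A, B = U[:, :r].T, Vt[:r, :]             # top-r singular vectors of X
            # ADMM on: min ||X||_* - Tr(A W B^T)  s.t.  X = W, W matches M on mask.
            W, Y = X.copy(), np.zeros_like(X)
            for _ in range(inner):
                X = svt(W - Y / beta, 1.0 / beta)    # X-update: SVT step
                W = X + (A.T @ B + Y) / beta         # W-update, entrywise quadratic...
                W[mask] = M[mask]                    # ...with observed entries clamped
                Y = Y + beta * (X - W)               # dual ascent on X = W
        return X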

Original language: English (US)
Article number: 6389682
Pages (from-to): 2117-2130
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 35
Issue number: 9
DOI: 10.1109/TPAMI.2012.271
ISSN: 0162-8828
State: Published - 2013

Keywords

  • accelerated proximal gradient method
  • alternating direction method of multipliers
  • matrix completion
  • nuclear norm minimization

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Software
  • Computational Theory and Mathematics
  • Applied Mathematics

Cite this

Hu, Y., Zhang, D., Ye, J., Li, X., & He, X. (2013). Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9), 2117-2130. Article 6389682. https://doi.org/10.1109/TPAMI.2012.271