Learning Stable Multilevel Dictionaries for Sparse Representations

Jayaraman J. Thiagarajan, Karthikeyan Natesan Ramamurthy, Andreas Spanias

Research output: Contribution to journal › Article

21 Citations (Scopus)

Abstract

Sparse representations using learned dictionaries are being increasingly used with success in several data processing and machine learning applications. The increasing need for learning sparse models in large-scale applications motivates the development of efficient, robust, and provably good dictionary learning algorithms. Algorithmic stability and generalizability are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries, which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large-scale data, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-hyperline clustering, to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed in each level of learning and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed using a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations. We also evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.
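The abstract sketches a concrete pipeline: K-hyperline clustering fits one level of unit-norm atoms (1-D subspaces) at a time, each subsequent level is trained on the residuals of the previous one, and coding a test vector reduces to selecting one atom per level. The following minimal NumPy sketch illustrates that structure only; the function names (k_hyperline, mld_learn, mld_pursuit), the fixed iteration counts, and the per-level dictionary sizes are illustrative assumptions, not the authors' implementation, and the paper's information-theoretic estimate of atoms per level and its ensemble robustification are not reproduced here.

import numpy as np

def k_hyperline(X, K, n_iters=50, seed=0):
    """Cluster the columns of X (d x N) onto K lines through the origin.
    Assignment: largest absolute correlation with a unit-norm centroid.
    Update: dominant left singular vector of each cluster's samples."""
    rng = np.random.default_rng(seed)
    d, N = X.shape
    D = X[:, rng.choice(N, size=K, replace=False)].astype(float).copy()
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    labels = np.zeros(N, dtype=int)
    for _ in range(n_iters):
        labels = np.abs(D.T @ X).argmax(axis=0)       # assignment step
        for k in range(K):
            Xk = X[:, labels == k]
            if Xk.shape[1] == 0:
                continue                              # keep empty-cluster atom
            U, _, _ = np.linalg.svd(Xk, full_matrices=False)
            D[:, k] = U[:, 0]                         # best-fit line (rank-1 SVD)
    return D, labels

def mld_learn(X, atoms_per_level):
    """Multilevel dictionary: each level is K-hyperline clustering run on
    the residuals left by the (1-sparse) approximation at the level above."""
    residual = X.astype(float).copy()
    levels = []
    for K in atoms_per_level:
        D, labels = k_hyperline(residual, K)
        coeff = np.sum(D[:, labels] * residual, axis=0)  # per-sample projection
        residual = residual - D[:, labels] * coeff       # pass residual down
        levels.append(D)
    return levels

def mld_pursuit(levels, y):
    """Low-complexity pursuit: at each level, pick the single
    best-correlated atom and peel off its projection."""
    r = y.astype(float).copy()
    code = []
    for D in levels:
        c = D.T @ r
        k = int(np.abs(c).argmax())
        code.append((k, c[k]))
        r = r - c[k] * D[:, k]
    return code, r

# Toy usage: learn a 3-level dictionary on random data, code one sample.
X = np.random.default_rng(1).standard_normal((20, 500))
levels = mld_learn(X, atoms_per_level=[8, 8, 8])
code, residual = mld_pursuit(levels, X[:, 0])

Because every level contributes exactly one coefficient, coding costs one correlation and one argmax per level, which is what makes the pursuit low-complexity compared with a greedy search over the full dictionary.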

Original language: English (US)
Article number: 6926841
Pages (from-to): 1913-1926
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 26
Issue number: 9
DOI: 10.1109/TNNLS.2014.2361052
State: Published - Sep 1 2015

Keywords

  • Compressed sensing
  • dictionary learning
  • generalization
  • sparse representations
  • stability

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Science Applications
  • Software

Cite this

Learning Stable Multilevel Dictionaries for Sparse Representations. / Thiagarajan, Jayaraman J.; Natesan Ramamurthy, Karthikeyan; Spanias, Andreas. In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 26, No. 9, 6926841, 01.09.2015, p. 1913-1926.

@article{598afb85fc054ec6b5140deb4cbc6281,
  title = "Learning Stable Multilevel Dictionaries for Sparse Representations",
  abstract = "Sparse representations using learned dictionaries are being increasingly used with success in several data processing and machine learning applications. The increasing need for learning sparse models in large-scale applications motivates the development of efficient, robust, and provably good dictionary learning algorithms. Algorithmic stability and generalizability are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries, which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large-scale data, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-hyperline clustering, to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed in each level of learning and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed using a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations. We also evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.",
  keywords = "Compressed sensing, dictionary learning, generalization, sparse representations, stability",
  author = "Thiagarajan, {Jayaraman J.} and {Natesan Ramamurthy}, Karthikeyan and Andreas Spanias",
  year = "2015",
  month = "9",
  day = "1",
  doi = "10.1109/TNNLS.2014.2361052",
  language = "English (US)",
  volume = "26",
  pages = "1913--1926",
  journal = "IEEE Transactions on Neural Networks and Learning Systems",
  issn = "2162-237X",
  publisher = "IEEE Computational Intelligence Society",
  number = "9",
}

TY - JOUR
T1 - Learning Stable Multilevel Dictionaries for Sparse Representations
AU - Thiagarajan, Jayaraman J.
AU - Natesan Ramamurthy, Karthikeyan
AU - Spanias, Andreas
PY - 2015/9/1
Y1 - 2015/9/1
AB - Sparse representations using learned dictionaries are being increasingly used with success in several data processing and machine learning applications. The increasing need for learning sparse models in large-scale applications motivates the development of efficient, robust, and provably good dictionary learning algorithms. Algorithmic stability and generalizability are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries, which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large-scale data, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-hyperline clustering, to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed in each level of learning and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed using a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations. We also evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.
KW - Compressed sensing
KW - dictionary learning
KW - generalization
KW - sparse representations
KW - stability
UR - http://www.scopus.com/inward/record.url?scp=84940183596&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84940183596&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2014.2361052
DO - 10.1109/TNNLS.2014.2361052
M3 - Article
C2 - 25343771
AN - SCOPUS:84940183596
VL - 26
SP - 1913
EP - 1926
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
SN - 2162-237X
IS - 9
M1 - 6926841
ER -