Abstract

Autoassociative neural networks (ANNs) have been proposed as a nonlinear extension of principal component analysis (PCA), which is commonly used to identify linear variation patterns in high-dimensional data. Whereas principal component scores represent uncorrelated features, standard backpropagation training of ANNs provides no guarantee of producing distinct features, a property that matters for interpretability and for discovering the nature of the variation patterns in the data. Here, we present an alternating nonlinear PCA method that encourages ANNs to learn distinct features. We also propose a new measure, motivated by the orthogonal-loadings condition in linear PCA, that quantifies the extent to which the nonlinear principal components represent distinct variation patterns. We demonstrate the effectiveness of the method on a simulated point cloud data set and on a subset of the MNIST handwritten digits data. The results show that standard ANNs consistently mix the true variation sources in the learned low-dimensional representation, whereas the alternating method produces solutions in which the patterns are better separated in the low-dimensional space.
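The abstract's two core ingredients, an autoassociative (autoencoder) network whose bottleneck activations act as nonlinear principal component scores, and a criterion that pushes those scores toward being uncorrelated, can be illustrated with a short sketch. The abstract does not specify the authors' architecture, orthogonality measure, or alternating training scheme, so everything below (layer sizes, tanh activations, the off-diagonal-correlation penalty, the weight 0.1, the random placeholder data) is an assumption for illustration; in particular, the jointly trained penalty here is a stand-in, not the paper's alternating procedure.

import torch
import torch.nn as nn

class Autoassociative(nn.Module):
    # Symmetric autoencoder; the bottleneck activations play the role of
    # nonlinear principal component scores (sizes are hypothetical).
    def __init__(self, d_in=10, d_hidden=32, d_code=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.Tanh(),
            nn.Linear(d_hidden, d_code))   # bottleneck codes
        self.decoder = nn.Sequential(
            nn.Linear(d_code, d_hidden), nn.Tanh(),
            nn.Linear(d_hidden, d_in))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def decorrelation_penalty(z):
    # Sum of squared off-diagonal entries of the correlation matrix of the
    # bottleneck codes: zero when the learned features are uncorrelated,
    # mirroring the uncorrelated-scores property of linear PCA.
    zc = z - z.mean(dim=0, keepdim=True)
    cov = zc.T @ zc / (z.shape[0] - 1)
    std = cov.diagonal().clamp_min(1e-8).sqrt()
    corr = cov / (std[:, None] * std[None, :])
    off_diag = corr - torch.diag(corr.diagonal())
    return (off_diag ** 2).sum()

model = Autoassociative()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 10)   # placeholder data; substitute real samples
for step in range(1000):
    opt.zero_grad()
    x_hat, z = model(x)
    # Reconstruction error plus the (assumed) distinctness penalty.
    loss = ((x_hat - x) ** 2).mean() + 0.1 * decorrelation_penalty(z)
    loss.backward()
    opt.step()

Tracking decorrelation_penalty(z) over training gives a rough proxy for how distinct the learned components are, in the spirit of the paper's orthogonality-motivated measure.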

Original language: English (US)
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOIs:
State: Accepted/In press - Oct 26, 2016

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence
