Learning Invariant Riemannian Geometric Representations Using Deep Nets

Suhas Lohit, Pavan Turaga

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Non-Euclidean constraints are inherent in many kinds of data in computer vision and machine learning, typically as a result of specific invariance requirements that need to be respected during high-level inference. Often, these geometric constraints can be expressed in the language of Riemannian geometry, where conventional vector space machine learning does not apply directly. The central question this paper deals with is: How does one train deep neural nets whose final outputs are elements on a Riemannian manifold? To answer this, we propose a general framework for manifold-aware training of deep neural networks - we utilize tangent spaces and exponential maps in order to convert the proposed problem into a form that allows us to bring current advances in deep learning to bear upon this problem. We describe two specific applications to demonstrate this approach: prediction of probability distributions for multi-class image classification, and prediction of illumination-invariant subspaces from a single face-image via regression on the Grassmannian. These applications show the generality of the proposed framework, and result in improved performance over baselines that ignore the geometry of the output space. In addition to solving this specific problem, we believe this paper opens new lines of enquiry centered on the implications of Riemannian geometry on deep architectures.
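
The mechanism the abstract describes is to regress in a tangent space (an ordinary vector space) and map the result back onto the manifold with the exponential map. Below is a minimal sketch for the first application, predicting probability distributions, assuming the standard square-root embedding of the simplex into the unit hypersphere; the closed-form exp/log maps are textbook sphere geometry, and the base-point choice, function names, and toy target are illustrative, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): regress in the tangent
# space at a fixed base point p on the unit sphere, then map back with the
# exponential map. The probability simplex sits on the sphere via q = sqrt(pi).
import numpy as np

def sphere_log(p, q, eps=1e-12):
    """Log map on the unit sphere: tangent vector at p pointing toward q."""
    cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    u = q - cos_theta * p             # component of q orthogonal to p
    norm_u = np.linalg.norm(u)
    if norm_u < eps:                  # q coincides with p: zero tangent vector
        return np.zeros_like(p)
    return theta * u / norm_u

def sphere_exp(p, v, eps=1e-12):
    """Exp map on the unit sphere: follow the geodesic from p along tangent v."""
    norm_v = np.linalg.norm(v)
    if norm_v < eps:
        return p.copy()
    return np.cos(norm_v) * p + np.sin(norm_v) * v / norm_v

# Toy example for a K-class distribution. A network would output the K-dim
# tangent vector v at the base point p (here, sqrt of the uniform
# distribution); training targets are log_p(sqrt(pi)), so the loss lives in
# a vector space where standard deep learning machinery applies.
K = 4
p = np.full(K, 1.0 / np.sqrt(K))             # sqrt of the uniform distribution
pi_target = np.array([0.7, 0.1, 0.1, 0.1])   # ground-truth distribution
q = np.sqrt(pi_target)                       # corresponding point on the sphere

v = sphere_log(p, q)                         # vector-space regression target
pi_recovered = sphere_exp(p, v) ** 2         # map prediction back to the simplex
print(np.allclose(pi_recovered, pi_target))  # True
```

Because the network's raw output v is unconstrained, ordinary Euclidean losses apply during training, while the exp map guarantees the final prediction lands exactly on the manifold.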

Original language: English (US)
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1329-1338
Number of pages: 10
Volume: 2018-January
ISBN (Electronic): 9781538610343
DOIs: 10.1109/ICCVW.2017.158
State: Published - Jan 19 2018
Event: 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017 - Venice, Italy
Duration: Oct 22 2017 - Oct 29 2017

Other

Other: 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017
Country: Italy
City: Venice
Period: 10/22/17 - 10/29/17

Fingerprint

  • Geometry
  • Learning systems
  • Image classification
  • Vector spaces
  • Invariance
  • Probability distributions
  • Computer vision
  • Lighting
  • Neural networks
  • Deep neural networks
  • Deep learning

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Vision and Pattern Recognition

Cite this

Lohit, S., & Turaga, P. (2018). Learning Invariant Riemannian Geometric Representations Using Deep Nets. In Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017 (Vol. 2018-January, pp. 1329-1338). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/ICCVW.2017.158

@inproceedings{37272a4ebe0c4d96908b89b481385a1d,
title = "Learning Invariant Riemannian Geometric Representations Using Deep Nets",
abstract = "Non-Euclidean constraints are inherent in many kinds of data in computer vision and machine learning, typically as a result of specific invariance requirements that need to be respected during high-level inference. Often, these geometric constraints can be expressed in the language of Riemannian geometry, where conventional vector space machine learning does not apply directly. The central question this paper deals with is: How does one train deep neural nets whose final outputs are elements on a Riemannian manifold? To answer this, we propose a general framework for manifold-aware training of deep neural networks - we utilize tangent spaces and exponential maps in order to convert the proposed problem into a form that allows us to bring current advances in deep learning to bear upon this problem. We describe two specific applications to demonstrate this approach: prediction of probability distributions for multi-class image classification, and prediction of illumination-invariant subspaces from a single face-image via regression on the Grassmannian. These applications show the generality of the proposed framework, and result in improved performance over baselines that ignore the geometry of the output space. In addition to solving this specific problem, we believe this paper opens new lines of enquiry centered on the implications of Riemannian geometry on deep architectures.",
author = "Suhas Lohit and Pavan Turaga",
year = "2018",
month = "1",
day = "19",
doi = "10.1109/ICCVW.2017.158",
language = "English (US)",
volume = "2018-January",
pages = "1329--1338",
booktitle = "Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
