From videos to verbs

Mining videos for activities using a cascade of dynamical systems

Pavan Turaga, Ashok Veeraraghavan, Rama Chellappa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

29 Citations (Scopus)

Abstract

Clustering video sequences in order to infer and extract activities from a single video stream is an extremely important problem with significant potential in video indexing, surveillance, activity discovery, and event recognition. Clustering a video sequence into activities requires one to simultaneously detect activity boundaries (activity-consistent subsequences) and cluster the resulting subsequences. To do this, we build a generative model for activities in video using a cascade of dynamical systems and show that this model can capture and represent a diverse class of activities. We then derive algorithms to learn the model parameters from a video stream, and show how a single video sequence may be partitioned into clusters, each of which represents an activity. We also propose a novel technique to build affine, view, and rate invariance of the activity into the distance metric used for clustering. Experiments show that the clusters found by the algorithm correspond to semantically meaningful activities.
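The abstract outlines a pipeline of window-level dynamical-system models, a distance between models, and clustering. The following is a minimal Python sketch of that general idea, not the paper's exact algorithm: it fits a linear dynamical system to fixed-length windows of a per-frame feature time series, compares windows through principal angles between their observability subspaces, and groups the windows with agglomerative clustering. The feature representation, window length, model order, and clustering method are illustrative assumptions; the paper's cascade structure, boundary detection, and affine/view/rate invariances are not reproduced here.

# A minimal sketch (not the paper's exact algorithm) of the kind of pipeline the
# abstract describes: fit a linear dynamical system (LDS) to short windows of a
# feature time series, compare windows with a subspace-angle distance on their
# observability matrices, and cluster the windows into putative activities.
# Features, window length, model order and clustering method are assumptions.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform


def fit_lds(Y, n_states=5):
    """Fit an LDS x_{t+1} = A x_t, y_t = C x_t to a (dim x T) feature matrix
    using the SVD/PCA-based estimate popularised for dynamic textures."""
    U, S, Vt = np.linalg.svd(Y - Y.mean(axis=1, keepdims=True), full_matrices=False)
    C = U[:, :n_states]                        # observation matrix
    X = np.diag(S[:n_states]) @ Vt[:n_states]  # estimated hidden state sequence
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])   # least-squares state transition
    return A, C


def observability(A, C, depth=10):
    """Stack C, CA, CA^2, ...; its column space summarises the LDS."""
    blocks, M = [], np.eye(A.shape[0])
    for _ in range(depth):
        blocks.append(C @ M)
        M = M @ A
    Q, _ = np.linalg.qr(np.vstack(blocks))     # orthonormal basis of the subspace
    return Q


def subspace_distance(O1, O2):
    """Distance from the principal angles between two observability subspaces."""
    cosines = np.clip(np.linalg.svd(O1.T @ O2, compute_uv=False), -1.0, 1.0)
    return np.sqrt(np.sum(np.arccos(cosines) ** 2))


def cluster_video(features, window=30, n_clusters=4):
    """features: (dim x T) array of per-frame features (an assumption; the
    paper's own features and segmentation differ). Returns one label per
    window, interpreted as a discovered activity."""
    windows = [features[:, t:t + window]
               for t in range(0, features.shape[1] - window + 1, window)]
    models = [observability(*fit_lds(W)) for W in windows]
    n = len(models)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = subspace_distance(models[i], models[j])
    labels = fcluster(linkage(squareform(D), method="average"),
                      t=n_clusters, criterion="maxclust")
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.standard_normal((20, 300))      # stand-in for real video features
    print(cluster_video(demo))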

Original language: English (US)
Title of host publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
DOIs: https://doi.org/10.1109/CVPR.2007.383170
ISBN (Print): 1424411807, 9781424411801
State: Published - 2007
Externally published: Yes
Event: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07 - Minneapolis, MN, United States
Duration: Jun 17, 2007 - Jun 22, 2007


ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Vision and Pattern Recognition
  • Software
  • Control and Systems Engineering

Cite this

Turaga, P., Veeraraghavan, A., & Chellappa, R. (2007). From videos to verbs: Mining videos for activities using a cascade of dynamical systems. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Article 4270195). https://doi.org/10.1109/CVPR.2007.383170
