Modeling and visualization of human activities for multicamera networks

Aswin C. Sankaranarayanan, Robert Patro, Pavan Turaga, Amitabh Varshney, Rama Chellappa

Research output: Contribution to journal › Article

9 Citations (Scopus)

Abstract

Multicamera networks are becoming increasingly complex, covering larger sensing areas in order to capture activities and behavior that evolve over long spatial and temporal windows. This necessitates novel methods to process the information sensed by the network and to visualize it for an end user. In this paper, we describe a system for modeling and on-demand visualization of the activities of groups of humans. Using prior knowledge of the 3D structure of the scene as well as camera calibration, the system localizes humans as they navigate the scene. Activities of interest are detected by matching models of these activities, learned a priori, against the multiview observations. The trajectories and the activity index for each individual summarize the dynamic content of the scene. These are used to render the scene with virtual 3D human models that mimic the observed activities of real humans. In particular, the rendering framework is designed to drive large displays with a cluster of GPUs and to reduce cognitive dissonance by rendering realistic weather effects and illumination. We envision the use of this system for immersive visualization as well as for summarization of videos that capture group behavior.
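The localization step described above — mapping per-camera detections of people onto the scene using known scene structure and camera calibration — can be sketched as a ground-plane homography projection. This is a minimal illustration, not the paper's actual pipeline: the homography values and the foot-point track below are made up, and in practice `H` would be derived from the calibrated camera and the 3D scene model.

```python
import numpy as np

# Hypothetical 3x3 homography mapping image pixels to ground-plane
# coordinates (metres). In a real system this comes from camera
# calibration against the known 3D scene structure.
H = np.array([
    [0.02, 0.0,   -6.4],
    [0.0,  0.03,  -4.8],
    [0.0,  0.001,  1.0],
])

def image_to_ground(H, foot_px):
    """Project the image foot-point of a detected person onto the
    ground plane via homogeneous coordinates."""
    u, v = foot_px
    p = H @ np.array([u, v, 1.0])  # homogeneous projection
    return p[:2] / p[2]            # dehomogenise to (x, y) on the ground

# A short track of per-frame foot-point detections (pixels) becomes a
# ground-plane trajectory (metres) — the kind of trajectory that, together
# with an activity index, summarizes the dynamic content of the scene.
track_px = [(320, 240), (330, 250), (340, 262)]
trajectory = [image_to_ground(H, p) for p in track_px]
```

Trajectories recovered this way from multiple calibrated views can then be fused and matched against learned activity models, as the abstract describes.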

Original language: English (US)
Article number: 259860
Journal: EURASIP Journal on Image and Video Processing
Volume: 2009
DOI: 10.1155/2009/259860
State: Published - 2009
Externally published: Yes

Fingerprint

Visualization
Lighting
Cameras
Display devices
Trajectories
Calibration
Graphics processing unit

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Signal Processing
  • Information Systems

Cite this

Modeling and visualization of human activities for multicamera networks. / Sankaranarayanan, Aswin C.; Patro, Robert; Turaga, Pavan; Varshney, Amitabh; Chellappa, Rama.

In: EURASIP Journal on Image and Video Processing, Vol. 2009, 259860, 2009.

@article{7e553e8d2b0849e79a7a99ed515e8e09,
title = "Modeling and visualization of human activities for multicamera networks",
author = "Sankaranarayanan, {Aswin C.} and Robert Patro and Pavan Turaga and Amitabh Varshney and Rama Chellappa",
year = "2009",
doi = "10.1155/2009/259860",
language = "English (US)",
volume = "2009",
journal = "EURASIP Journal on Image and Video Processing",
issn = "1687-5176",
publisher = "Springer Publishing Company",

}

TY - JOUR

T1 - Modeling and visualization of human activities for multicamera networks

AU - Sankaranarayanan, Aswin C.

AU - Patro, Robert

AU - Turaga, Pavan

AU - Varshney, Amitabh

AU - Chellappa, Rama

PY - 2009

Y1 - 2009

UR - http://www.scopus.com/inward/record.url?scp=76649133794&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=76649133794&partnerID=8YFLogxK

U2 - 10.1155/2009/259860

DO - 10.1155/2009/259860

M3 - Article

AN - SCOPUS:76649133794

VL - 2009

JO - EURASIP Journal on Image and Video Processing

JF - EURASIP Journal on Image and Video Processing

SN - 1687-5176

M1 - 259860

ER -