CIF: Small: Collaborative Research: Geometry-aware and data-adaptive signal processing for resource constrained activity analysis

Project Summary
Human activities give rise to complex, high-dimensional spatio-temporal signatures, which are often recorded using traditional sensors such as video cameras as well as novel emerging sensors such as depth and orientation sensors. Classical signal representations and processing tools have significant limitations when applied to human activities, since human activity signals possess several unique structures, such as non-Euclidean feature spaces arising from physical constraints on the features, and invariance to factors such as sensor placement and execution rate. Traditional approaches for representing and modeling human activity signals in computer vision and signal processing either do not fully exploit such underlying geometric constraints or address these issues in isolation, thereby lacking a coherent framework and generalizability to new features and sensors. We propose to study foundational representations for human activities that apply to a broad swath of traditional and emerging sensors, spanning differential geometry and signal approximation theory, which together provide new affordances in a theoretically well-grounded framework. The key insights of the proposal are: 1) geometry-awareness, encompassing both classical Euclidean and non-Euclidean feature spaces; 2) invariances to sensor placement and execution rate, tightly integrated into the representations; and 3) data adaptivity, leading to low bit-rate representations of human activities for reduced-communication and low-computation scenarios.
These mathematical techniques will be integrated into real-time sensing, analysis, and feedback systems using devices such as the Kinect for home-based management and promotion of health and well-being.

Intellectual Merit
Future advances in human activity analysis must take into account the multi-modality and resource-constrained nature of environments such as homes. This proposal furthers the understanding of human activities by proposing geometry-aware and data-adaptive signal representations that provide a strong theoretical framework leading to several applications involving resource-constrained, multi-modal human activity sensing and low-complexity analysis. This proposal is a collaborative research effort between Pavan Turaga and Anuj Srivastava, who together bring expertise in activity analysis, computer vision, and Riemannian geometry. The proposed research bridges the areas of computer vision, Riemannian geometry, and signal approximation theory toward creating a strong theoretical framework in which to study low-complexity human activity analysis.

Broader Impacts
The proposal advances fundamental understanding of human activities via a joint signal processing and Riemannian geometric framework. The proposed research agenda will result in graduate education that includes human activity analysis, advanced differential geometric and statistical methods, and experiential system design. It will also provide hands-on research experience in system development and testing for undergraduate students of the Digital Culture program at Arizona State University. The proposal further aims to create a flexible platform for home-based monitoring of daily activities that will allow engagement with the broader community for a wider variety of end applications, such as the promotion of healthier lifestyles and well-being at home, and will increase awareness of and interest in technology and mathematics.
Key Words: Multi-modal activity analysis, Shape manifolds, Joint inference
Effective start/end date: 8/1/13 → 7/31/17
- National Science Foundation (NSF): $274,812.00