Abstract

Despite outstanding performance in image recognition, convolutional neural networks (CNNs) do not yet achieve the same impressive results on action recognition in videos. This is partially due to the inability of CNNs to model long-range temporal structure, especially the individual action stages that are critical to human action recognition. In this paper, we propose a novel action-stage (ActionS) emphasized spatiotemporal vector of locally aggregated descriptors (ActionS-ST-VLAD) method to aggregate informative deep features across the entire video according to adaptive video feature segmentation and adaptive segment feature sampling (AVFS-ASFS). In our ActionS-ST-VLAD encoding approach, AVFS-ASFS selects keyframe features and automatically splits the corresponding deep features into segments, with the features in each segment belonging to a temporally coherent ActionS. Then, based on the extracted keyframe feature in each segment, a flow-guided warping technique is introduced to detect and discard redundant feature maps, while the informative ones are aggregated using our proposed similarity weighting. Furthermore, we exploit an RGBF modality to capture motion-salient regions in the RGB images corresponding to action activity. Extensive experiments are conducted on four public benchmarks, HMDB51, UCF101, Kinetics, and ActivityNet. The results show that our method effectively pools useful deep features spatiotemporally, leading to state-of-the-art performance for video-based action recognition.
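
The encoding described above builds on VLAD pooling. As a rough illustration of the general idea (not the paper's exact AVFS-ASFS or flow-guided warping procedure), the following minimal NumPy sketch weights per-frame features by cosine similarity to a segment's keyframe feature, discards near-duplicate frames as redundant, and aggregates the rest as weighted residuals to codebook centers; the helper names and the redundancy threshold are illustrative assumptions.

import numpy as np

def similarity_weights(segment_feats, keyframe_feat, redundancy_thresh=0.98):
    """Cosine similarity of each frame feature to the segment's keyframe.
    Frames nearly identical to the keyframe are treated as redundant and
    dropped; the remaining frames are weighted by their similarity.
    (Illustrative stand-in for the paper's similarity weighting.)"""
    sims = segment_feats @ keyframe_feat / (
        np.linalg.norm(segment_feats, axis=1) * np.linalg.norm(keyframe_feat) + 1e-8)
    weights = np.where(sims < redundancy_thresh, np.maximum(sims, 0.0), 0.0)
    total = weights.sum()
    return weights / total if total > 0 else weights

def vlad_encode(features, weights, centers):
    """Weighted VLAD: accumulate residuals of each feature to its nearest
    codebook center (centers would come from, e.g., k-means)."""
    dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)                  # hard codeword assignment
    K, D = centers.shape
    vlad = np.zeros((K, D))
    for i, k in enumerate(assign):
        vlad[k] += weights[i] * (features[i] - centers[k])
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))   # power normalization
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad       # L2 normalization

# Toy usage: 10 frame features of dimension 4, keyframe = frame 0, 3 codewords.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))
centers = rng.normal(size=(3, 4))
w = similarity_weights(feats, feats[0])            # frame 0 itself is dropped as redundant
video_descriptor = vlad_encode(feats, w, centers)  # shape (3 * 4,) = (12,)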

Original language: English (US)
Article number: 8600333
Pages (from-to): 2799-2812
Number of pages: 14
Journal: IEEE Transactions on Image Processing
Volume: 28
Issue number: 6
DOI: 10.1109/TIP.2018.2890749
State: Published - Jun 1 2019

Keywords

  • Action recognition
  • ActionS-ST-VLAD
  • adaptive feature sampling
  • adaptive video feature segmentation
  • feature encoding

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

Tu, Zhigang; Li, Hongyan; Zhang, Dejun; Dauwels, Justin; Li, Baoxin; Yuan, Junsong. Action-Stage Emphasized Spatiotemporal VLAD for Video Action Recognition. In: IEEE Transactions on Image Processing, Vol. 28, No. 6, Article 8600333, 01.06.2019, pp. 2799-2812. DOI: 10.1109/TIP.2018.2890749

Research output: Contribution to journal › Article

@article{bfed440fb7e14b6c82cda4c245c75eec,
  title     = "Action-Stage Emphasized Spatiotemporal VLAD for Video Action Recognition",
  author    = "Zhigang Tu and Hongyan Li and Dejun Zhang and Justin Dauwels and Baoxin Li and Junsong Yuan",
  journal   = "IEEE Transactions on Image Processing",
  year      = "2019",
  month     = jun,
  volume    = "28",
  number    = "6",
  pages     = "2799--2812",
  doi       = "10.1109/TIP.2018.2890749",
  issn      = "1057-7149",
  publisher = "Institute of Electrical and Electronics Engineers Inc.",
  language  = "English (US)",
  keywords  = "Action recognition, ActionS-ST-VLAD, adaptive feature sampling, adaptive video feature segmentation, feature encoding",
}
