18 Citations (Scopus)

Abstract

The most successful video-based human action recognition methods rely on feature representations extracted using Convolutional Neural Networks (CNNs). Inspired by the two-stream network (TS-Net), we propose a multi-stream CNN architecture to recognize human actions. We additionally consider human-related regions that contain the most informative features. First, by improving foreground detection, the region of interest corresponding to the appearance and the motion of an actor can be detected robustly under realistic circumstances. Based on the entire detected human body, we construct one appearance and one motion stream. In addition, we select a secondary region that contains the major moving part of an actor based on motion saliency. By combining the traditional streams with the novel human-related streams, we introduce a human-related multi-stream CNN (HR-MSCNN) architecture that encodes appearance, motion, and the captured tubes of the human-related regions. Comparative evaluation on the JHMDB, HMDB51, UCF Sports, and UCF101 datasets demonstrates that the streams contain features that complement each other. The proposed multi-stream architecture achieves state-of-the-art results on these four datasets.
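The multi-stream idea in the abstract (full-body appearance and motion streams plus streams over a motion-salient sub-region, combined into one prediction) can be illustrated as a late fusion of per-stream class scores. The sketch below is a minimal illustration, not the paper's actual fusion scheme: the stream names, uniform weighting, and score-averaging rule are all illustrative assumptions.

```python
import numpy as np


def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def fuse_streams(stream_logits, weights=None):
    """Weighted average of per-stream softmax scores.

    stream_logits: dict mapping a stream name to a (num_classes,)
                   logit vector produced by that stream's CNN.
    weights:       optional per-stream weights; defaults to uniform.
    Returns the index of the predicted action class.
    """
    names = sorted(stream_logits)
    if weights is None:
        weights = {n: 1.0 for n in names}
    total = sum(weights[n] for n in names)
    fused = sum(weights[n] * softmax(stream_logits[n]) for n in names) / total
    return int(np.argmax(fused))


# Four hypothetical streams, mirroring the abstract: appearance and
# motion of the whole body, plus appearance and motion of the
# motion-salient region. Logits here are random stand-ins.
rng = np.random.default_rng(0)
logits = {
    "appearance_body": rng.normal(size=5),
    "motion_body": rng.normal(size=5),
    "appearance_salient": rng.normal(size=5),
    "motion_salient": rng.normal(size=5),
}
pred = fuse_streams(logits)
```

Averaging softmax scores (rather than, say, concatenating features) is the simplest way streams can "complement each other": a stream that is confident about the salient sub-region can outvote a noisy full-body stream.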

Original language: English (US)
Pages (from-to): 32-43
Number of pages: 12
Journal: Pattern Recognition
Volume: 79
DOI: 10.1016/j.patcog.2018.01.020
State: Published - Jul 1 2018


Keywords

  • Action recognition
  • Convolutional Neural Network
  • Motion salient region
  • Multi-Stream

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

Cite this

Multi-stream CNN: Learning representations based on human-related regions for action recognition. / Tu, Zhigang; Xie, Wei; Qin, Qianqing; Poppe, Ronald; Veltkamp, Remco C.; Li, Baoxin; Yuan, Junsong.

In: Pattern Recognition, Vol. 79, 01.07.2018, p. 32-43.

Research output: Contribution to journal › Article

@article{93dd70048cc349768aa047209dca75e9,
  title = "Multi-stream CNN: Learning representations based on human-related regions for action recognition",
  keywords = "Action recognition, Convolutional Neural Network, Motion salient region, Multi-Stream",
  author = "Zhigang Tu and Wei Xie and Qianqing Qin and Ronald Poppe and Veltkamp, {Remco C.} and Baoxin Li and Junsong Yuan",
  year = "2018",
  month = "7",
  day = "1",
  doi = "10.1016/j.patcog.2018.01.020",
  language = "English (US)",
  volume = "79",
  pages = "32--43",
  journal = "Pattern Recognition",
  issn = "0031-3203",
  publisher = "Elsevier Limited",
}
