Learning Bag of Spatio-Temporal Features for Human Interaction Recognition

Title: Learning Bag of Spatio-Temporal Features for Human Interaction Recognition
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Slimani K Nour El Houda, Benezeth Y, Souami F
Editors: Osten W, Nikolaev D, Zhou J
Conference Name: Twelfth International Conference on Machine Vision (ICMV 2019)
Conference Sponsors: Univ Elect Sci & Technol China; Halmstad Univ; Univ Barcelona; Amer Sci & Engn Inst
Publisher Address: 1000 20th St, PO Box 10, Bellingham, WA 98227-0010, USA
ISBN: 978-1-5106-3644-6
Keywords: 3D-SIFT, Bag of Visual Words, Edge-based region, Human interaction, MSER, Sum of Histograms, SVM
Abstract

The Bag of Visual Words (BoVW) model has achieved impressive performance on human activity recognition. However, it is extremely difficult to capture the high-level semantic meaning behind video features with this method, as the spatiotemporal distribution of visual words is ignored, preventing localization of the interactions within a video. In this paper, we propose a supervised learning framework that automatically recognizes high-level human interactions based on a bag of spatiotemporal visual features. First, a representative baseline keyframe that captures the major body parts of the interacting persons is selected, and the bounding boxes containing the persons are extracted to parse the pose of each person in the interaction. Based on this keyframe, features are detected for each interacting person by combining edge features and Maximally Stable Extremal Regions (MSER) features, and are tracked backward and forward over the entire video sequence. From these feature tracks, 3D XYT spatiotemporal volumes are generated for each interacting target. The K-means algorithm is then used to build a codebook of visual features to represent a given interaction. The interaction is then represented by the sum of the visual-word frequency histograms of the interacting persons. Extensive experimental evaluations on the UT-Interaction dataset demonstrate the ability of our method to recognize ongoing interactions in videos with a simple implementation.

DOI: 10.1117/12.2559268
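
For illustration only, the following is a minimal sketch of the bag-of-visual-words stage described in the abstract, assuming OpenCV for MSER and SIFT and scikit-learn for K-means and the SVM. It omits the keyframe selection, person tracking, and 3D XYT volumes described in the paper; all function names and parameter values are illustrative assumptions, not the authors' implementation.

# Sketch of a BoVW pipeline: detect MSER regions per frame, describe them,
# quantize descriptors into a K-means codebook, represent a video as a
# histogram of visual-word frequencies, and classify with an SVM.
# Parameter values (codebook size, SVM kernel) are assumptions for illustration.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def mser_sift_descriptors(frames):
    """Detect MSER regions in each frame and describe them with SIFT."""
    mser = cv2.MSER_create()
    sift = cv2.SIFT_create()
    descriptors = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        regions, _ = mser.detectRegions(gray)
        # One keypoint per region, centered on the region's mean position.
        keypoints = [cv2.KeyPoint(float(x), float(y), 8.0)
                     for region in regions for x, y in [region.mean(axis=0)]]
        _, desc = sift.compute(gray, keypoints)
        if desc is not None:
            descriptors.append(desc)
    return np.vstack(descriptors) if descriptors else np.empty((0, 128))

def build_codebook(all_descriptors, n_words=200):
    """Cluster descriptors with K-means to form the visual vocabulary."""
    return KMeans(n_clusters=n_words, n_init=10).fit(all_descriptors)

def video_histogram(descriptors, codebook):
    """Normalized histogram of visual-word frequencies for one video."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Usage sketch: train_videos is a hypothetical list of (frames, label) pairs.
# descs = [mser_sift_descriptors(frames) for frames, _ in train_videos]
# codebook = build_codebook(np.vstack(descs))
# X = np.array([video_histogram(d, codebook) for d in descs])
# y = np.array([label for _, label in train_videos])
# clf = SVC(kernel="linear").fit(X, y)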