Paper
28 April 2010
Human action recognition using extreme learning machine via multiple types of features
Rashid Minhas, Aryaz Baradarani, Sepideh Seifzadeh, Q. M. Jonathan Wu
Abstract
This paper introduces a human action recognition framework based on multiple types of features. Taking advantage of the motion-selectivity property of the 3D dual-tree complex wavelet transform (3D DT-CWT) and the affine-SIFT local image detector, spatio-temporal and local static features are first extracted. No assumptions are made about scene background, location, objects of interest, or point-of-view information. Bidirectional two-dimensional PCA (2D-PCA) is employed for dimensionality reduction, offering enhanced capabilities to preserve the structure and correlation among neighboring pixels of a video frame. The proposed technique is significantly faster than traditional methods owing to volumetric processing of the input video, and offers a rich representation of human actions with reduced artifacts. Experimental examples are given to illustrate the effectiveness of the approach.
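The abstract names two classifier-side components, bidirectional 2D-PCA and an extreme learning machine (ELM). The minimal sketch below is written for illustration only and is not drawn from the paper itself: it shows one common way these two pieces can be implemented. The function names, matrix shapes, sigmoid activation, and hidden-layer size are assumptions of this sketch, not details taken from the authors' method.

# Illustrative sketch (not the authors' code): bidirectional 2D-PCA to reduce a
# stack of m-by-n feature maps, and an ELM classifier trained by solving a
# least-squares problem for the output weights.
import numpy as np

def bidirectional_2dpca(frames, p, q):
    """Project each m-by-n feature map onto p column- and q row-eigenvectors."""
    A = np.asarray(frames, dtype=float)           # shape (M, m, n)
    D = A - A.mean(axis=0)                        # centered maps
    # Row-direction covariance (n x n) and column-direction covariance (m x m).
    G_row = np.einsum('kij,kil->jl', D, D) / len(A)
    G_col = np.einsum('kij,klj->il', D, D) / len(A)
    X = np.linalg.eigh(G_row)[1][:, ::-1][:, :q]  # top-q row eigenvectors
    Z = np.linalg.eigh(G_col)[1][:, ::-1][:, :p]  # top-p column eigenvectors
    return np.array([Z.T @ a @ X for a in A])     # each map reduced to p x q

class ELM:
    """Single-hidden-layer network: random hidden weights, closed-form output weights."""
    def __init__(self, n_hidden=200, seed=None):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((self.n_hidden, X.shape[1]))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W.T + self.b)))  # sigmoid hidden layer
        T = np.eye(n_classes)[y]                             # one-hot targets
        self.beta = np.linalg.pinv(H) @ T                    # least-squares output weights
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W.T + self.b)))
        return np.argmax(H @ self.beta, axis=1)

In such a pipeline, each reduced p-by-q map would typically be flattened into a vector before being passed to ELM.fit; the 3D DT-CWT and affine-SIFT feature extraction stages described in the abstract are outside the scope of this sketch.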
© (2010) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Rashid Minhas, Aryaz Baradarani, Sepideh Seifzadeh, and Q. M. Jonathan Wu "Human action recognition using extreme learning machine via multiple types of features", Proc. SPIE 7708, Mobile Multimedia/Image Processing, Security, and Applications 2010, 770808 (28 April 2010); https://doi.org/10.1117/12.853031
KEYWORDS: Video, Wavelets, Principal component analysis, Wavelet transforms, Feature extraction, Sensors
