Action recognition via sequence embedding
Abstract
[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] A comb structural exemplar embedding-based approach is introduced for action recognition. We propose a new framework that represents an action as a pool of weak classifiers. During training, we first construct a set of static comb structural exemplars from the training data; we then convolve each exemplar over the training action videos and build a weak classifier pool from the minimum distances between the templates and the action sequences. To capture both shape and motion features, we employ three image representation methods: edge detection, Histogram of Oriented Gradients (HOG), and Histogram of Optical Flow (HOF). Salient weak classifiers are then selected by the AdaBoost algorithm. Our approach enables robust action recognition in very challenging situations, and the framework is validated on four public benchmark datasets: the Weizmann dataset, the KTH dataset, the IXMAS multi-view dataset, and the Rochester dataset. Our extensive experimental results on these four datasets are state-of-the-art in terms of accuracy, tolerance to noise and viewpoint changes, and robustness across different subjects and datasets.
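The pipeline described above can be illustrated with a minimal sketch. This is not the thesis implementation: the feature extraction (edges/HOG/HOF over video frames) is abstracted away into 1-D feature sequences, the "convolution" of an exemplar over a video is simplified to a sliding-window minimum Euclidean distance, and the weak learners are plain threshold stumps trained with discrete AdaBoost. All function names (`min_template_distance`, `adaboost`, `predict`) are hypothetical.

```python
import numpy as np

def min_template_distance(template, sequence):
    # Hypothetical simplification: slide a 1-D exemplar over a 1-D feature
    # sequence and return the minimum Euclidean distance over all offsets.
    t = len(template)
    return min(np.linalg.norm(sequence[i:i + t] - template)
               for i in range(len(sequence) - t + 1))

def adaboost(features, labels, n_rounds=10):
    # Discrete AdaBoost over decision stumps. Each column of `features`
    # plays the role of one template's min-distance response; AdaBoost
    # picks the salient columns, as the abstract describes.
    n, d = features.shape
    w = np.ones(n) / n          # uniform sample weights
    selected = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(features[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (features[:, j] - thr) > 0, 1, -1)
                    err = w[pred != labels].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = max(err, 1e-10)   # avoid division by zero for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * labels * pred)   # reweight toward mistakes
        w /= w.sum()
        selected.append((j, thr, pol, alpha))
    return selected

def predict(selected, features):
    # Weighted vote of the selected weak classifiers.
    score = np.zeros(len(features))
    for j, thr, pol, alpha in selected:
        score += alpha * np.where(pol * (features[:, j] - thr) > 0, 1, -1)
    return np.sign(score)
```

In this toy form, each training video would contribute one row of `features`, with column `j` holding the minimum distance of exemplar `j` to that video; the real system computes those distances from edge, HOG, and HOF representations of the frames.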
Degree
M.S.
Thesis Department
Rights
Access to files is limited to the University of Missouri--Columbia.