Activity segmentation with special emphasis on sit-to-stand analysis
In this study, we present algorithms to segment the activities of sitting and standing and to identify the regions of sit-to-stand transitions in a given image sequence. As a means of fall risk assessment, we propose methods to measure sit-to-stand time using three-dimensional modeling of the human body in voxel space, together with ellipse fitting algorithms and image features that capture the orientation of the body. Fuzzy clustering methods such as the Gustafson-Kessel algorithm are also investigated. The proposed algorithms were tested on 9 subjects with ages ranging from 18 to 88. The classification results were best for voxel height combined with the ellipse fit algorithm at 96.6%; voxel height alone gave a classification rate of 86.7%. The comparison used the marker-based Vicon motion capture system as ground truth, as well as a manually operated stopwatch. The average error in sit-to-stand time measurement was lowest for voxel height with the ellipse fit technique at 270 ms and highest for voxel height alone at 380 ms. This application can be used as part of a continuous video monitoring system in the homes of older adults and can provide valuable information that could help detect fall risk and enable them to lead an independent lifestyle for longer.
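The core ideas described above — fitting an ellipse to the body's point cloud to estimate its orientation, and thresholding voxel height to separate sitting from standing — can be illustrated with a minimal sketch. This is not the study's implementation: the function names, the moment-based ellipse fit, the height threshold, and the synthetic data are all illustrative assumptions.

```python
# Hypothetical sketch: estimate body orientation by fitting an ellipse
# (via second-order moments / PCA) to a 2-D projection of voxel
# coordinates, then classify sit vs. stand from voxel height.
# All thresholds and data here are illustrative, not from the study.
import numpy as np

def ellipse_orientation(points):
    """Angle (degrees, in [0, 180)) of the major axis of the
    moment-based best-fit ellipse, measured from horizontal."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    major = eigvecs[:, np.argmax(eigvals)]          # major-axis direction
    return np.degrees(np.arctan2(major[1], major[0])) % 180

def classify_posture(voxel_heights, stand_thresh=1.4):
    """Label 'stand' when the top of the voxel model exceeds a
    height threshold (metres, assumed value), else 'sit'."""
    return "stand" if voxel_heights.max() > stand_thresh else "sit"

# Synthetic "upright body": points spread mostly along the vertical axis
rng = np.random.default_rng(0)
upright = np.column_stack([rng.normal(0.0, 0.1, 500),   # narrow in x
                           rng.normal(0.0, 0.5, 500)])  # tall in y
angle = ellipse_orientation(upright)
print(abs(angle - 90) < 10)                        # major axis near vertical
print(classify_posture(np.array([0.0, 0.9, 1.6]))) # prints "stand"
```

During a sit-to-stand transition, the ellipse orientation would swing toward vertical while the maximum voxel height rises; timing that swing between the two posture labels is one way such a transition interval could be measured.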
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.