Fall detection using acoustic features and one-class classifiers
As the population ages, health problems arising from everyday activities are becoming increasingly common among the elderly. Investigations show that many elderly people sustain injuries, or trigger more serious health problems, by falling at home or in a hospital when no one is present to monitor them. Many techniques exist to detect a fall remotely so that assistance can be provided as soon as possible. Video cameras deployed in an elderly person's home can serve this purpose, but they may create an uncomfortable feeling of being spied on; we therefore use only sound (mainly frequency-domain features) to detect a fall remotely. Sound signals of a falling person are collected along with normal everyday sounds, a classifier is trained on these recordings, and the trained classifier is used to label an unknown sound as fall or non-fall. A practical difficulty, however, is how to collect genuine sound samples of a falling person, since a fall is precisely the event we want to avoid. In this work we therefore train our classifiers on data from only one of the two classes: samples of normal everyday sounds. Such classifiers are called one-class classifiers. We compare their performance with that of conventional two-class classifiers, which use examples from both classes, by testing both on the same dataset. The primary acoustic feature we use for classification is the Mel-frequency cepstral coefficients (MFCC). We also evaluate other spectrum-based features: sub-band energy ratio, bandwidth, and centroid frequency.
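The one-class setup described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `OneClassSVM` as the one-class classifier and synthetic 13-dimensional vectors as stand-ins for MFCC feature vectors of normal everyday sounds; the specific classifier, feature dimensionality, and parameters here are assumptions for the sketch.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Synthetic stand-ins for 13-dimensional MFCC vectors extracted
# from recordings of normal everyday sounds (the only class seen
# at training time -- no fall sounds are needed).
normal_features = rng.normal(loc=0.0, scale=1.0, size=(200, 13))

# Train on the "normal" class only; nu bounds the fraction of
# training points treated as outliers (0.1 chosen arbitrarily here).
clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
clf.fit(normal_features)

# At detection time, a new sound's feature vector is scored:
# +1 means it resembles normal sounds, -1 flags it as a
# potential fall (an outlier relative to the training class).
typical = np.zeros((1, 13))        # lies in the dense training region
anomalous = np.full((1, 13), 8.0)  # far from anything seen in training

print(clf.predict(typical)[0])     # +1: classified as normal
print(clf.predict(anomalous)[0])   # -1: flagged as a possible fall
```

In a real system the feature vectors would come from an MFCC extractor applied to short audio frames; the appeal of the one-class approach is exactly that `fit` never requires recordings of actual falls.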