Published in

MDPI, Proceedings, 2(19), 1236, 2018

DOI: 10.3390/proceedings2191236

Detection of Falls from Non-Invasive Thermal Vision Sensors Using Convolutional Neural Networks

This paper is made freely available by the publisher.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

In this work, we detail a methodology based on Convolutional Neural Networks (CNNs) to detect falls from non-invasive thermal vision sensors. First, we describe an agile data collection and labelling process to create a dataset covering several cases of single and multiple occupancy, including standing inhabitants as well as the target situations with a fallen inhabitant. Second, we apply data augmentation techniques to increase the learning capabilities of the classifier and reduce the configuration time. Third, we define three types of CNN to evaluate the impact that the number of layers and the kernel size have on the performance of the methodology. The results show encouraging performance in single-occupancy contexts, with up to 92% accuracy, but a 10% reduction in accuracy in multiple-occupancy contexts. The learning capabilities of CNNs stand out given the complexity of the images obtained from the low-cost device, which exhibit strong noise as well as uncertain and blurred areas. The results also highlight that the 3-layer CNN maintains stable performance and learns quickly.
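
To illustrate the kind of model the abstract describes, the sketch below shows a minimal 3-layer CNN, written in PyTorch, that classifies a single low-resolution thermal frame as "fall" or "no fall". The input resolution (32x32), channel counts, kernel sizes, and the augmentation transforms are assumptions made for this example; the paper's exact architecture and augmentation pipeline are not given in the abstract.

# Minimal sketch (assumed, not the authors' exact architecture) of a
# 3-layer CNN for classifying low-resolution thermal frames.
import torch
import torch.nn as nn
from torchvision import transforms

class ThermalFallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Layer 1: one-channel thermal frame -> 16 feature maps
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            # Layer 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
            # Layer 3
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 8x8 -> 4x4
        )
        # Binary output: "fall" vs "no fall"
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A plausible augmentation pipeline (horizontal flips, small rotations);
# the transformations actually used in the paper are not listed here.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
])

# Quick shape check with a dummy 32x32 thermal frame.
model = ThermalFallCNN()
frame = torch.randn(1, 1, 32, 32)
print(model(augment(frame)).shape)   # torch.Size([1, 2])

Varying the depth of the `features` stack (e.g. one, two, or three convolutional layers) and the `kernel_size` argument would reproduce, under these assumptions, the kind of architectural comparison the abstract reports.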