
Abstract

The need for effective automatic fall detection in older adults is driven by a growing elderly population at substantial health risk from falls, particularly when living alone. Although numerous fall detection systems (FDSs) based on machine learning and predictive modeling exist, accurately distinguishing everyday activities from genuine falls remains challenging, a difficulty compounded by the varied nature of residential settings. Adaptable solutions are needed to cover the diverse conditions under which falls occur. In this context, sensor fusion emerges as a promising approach, exploiting the distinct physical signatures of falls. Developing effective detection algorithms depends on the availability of comprehensive datasets that integrate data from multiple synchronized sensors. Our research introduces a novel multisensor dataset designed to support the creation and evaluation of advanced multisensor fall detection algorithms. The dataset was compiled from simulations of ten fall types performed by ten participants, ensuring a wide range of scenarios. Data were collected using four sensor types: a mobile phone equipped with a single-channel, three-dimensional accelerometer; a far infrared (FIR) thermal camera; an 8×8 LiDAR; and a 60–64 GHz radar. These sensors were selected for their combined effectiveness in capturing detailed aspects of fall events while mitigating the privacy concerns associated with visual recordings. The dataset was characterized using two key metrics: the instantaneous norm of the signal and the temporal difference between consecutive frames. This analysis highlights the distinct variations between fall and non-fall events across sensors and signal characteristics. By providing this dataset, we aim to facilitate the development of sensor fusion algorithms that surpass the accuracy and reliability of traditional single-sensor FDSs.
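As a minimal illustration of the two characterization metrics named in the abstract, the Python sketch below computes an instantaneous norm over three-axis accelerometer samples and a mean absolute difference between consecutive frames of a gridded sensor stream. The array shapes, variable names, and synthetic data are assumptions for illustration only and do not reflect the dataset's actual file layout or schema.

```python
import numpy as np

def instantaneous_norm(accel: np.ndarray) -> np.ndarray:
    """Euclidean norm per sample for a (T, 3) accelerometer stream."""
    return np.linalg.norm(accel, axis=1)

def temporal_difference(frames: np.ndarray) -> np.ndarray:
    """Mean absolute change between consecutive frames.

    `frames` is assumed to have shape (T, H, W), e.g. (T, 8, 8) for
    the 8×8 LiDAR or a downsampled thermal stream; returns a
    length T-1 series, one value per consecutive frame pair.
    """
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Hypothetical usage with synthetic data (shapes are assumptions):
accel = np.random.randn(500, 3)    # 500 three-axis accelerometer samples
lidar = np.random.rand(100, 8, 8)  # 100 frames from an 8×8 depth grid
norm_series = instantaneous_norm(accel)   # shape (500,)
diff_series = temporal_difference(lidar)  # shape (99,)
```

In a characterization like the one the abstract describes, both series would typically show a sharp transient around the moment of impact for fall events and remain comparatively flat for everyday activities, which is what makes them useful for contrasting fall and non-fall segments across sensors.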

Details

Title
Multimodal dataset for sensor fusion in fall detection
Author
Taramasco, Carla; Pineiro, Miguel; Ormeño-Arriagada, Pablo; Robles, Diego; Araya, David
Publication year
2025
Publication date
Apr 1, 2025
Publisher
PeerJ, Inc.
e-ISSN
2376-5992
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3239333889
Copyright
© 2025 Taramasco et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: https://creativecommons.org/licenses/by-nc/4.0 (the “License”), which permits using, remixing, and building upon the work non-commercially, as long as it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ) and either DOI or URL of the article must be cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.