Introduction
It is challenging to routinely assess gait unless dedicated measuring devices are available. Inspired by a recent study that reported high classification performance of activity recognition tasks using smartwatches [1], we hypothesized that the recognition of gait-related activities in older adults can be formulated as a supervised learning problem. To quantify the complex gait motion, we focused on hand motion because disturbed hand motions are frequently reported as typical symptoms of neurodegenerative diseases [2].
Methods
Data Acquisition
We recruited 39 older adult participants (age: mean 80.4, SD 6.5 years; n=38, 73.7% women) from a local community. The number of participants in each class was as follows: cane-assisted gait (C0) (n=7), walker-assisted gait (C1) (n=5), gait with disturbances (C2) (n=21), gait without disturbances (C3) (n=6), and gait without disturbances in young controls (C4) (n=12). During the experiment, participants wore a smartwatch (DW9F1; Fossil Group, Inc) on each wrist and were asked to walk at their usual, comfortable speed. Figure 1 shows example photographs taken during the experiment.
Figure 1. Five different gait styles: cane-assisted gait (C0), walker-assisted gait (C1), gait with disturbances (C2), gait without disturbances (C3), and gait without disturbances in young controls (C4).
Classification
The multivariate time-series (MTS) signals, captured at a sampling rate of 50 Hz, were segmented into fixed-length windows x = (x_1, x_2, …, x_T) ∈ ℝ^(T×D). Here, x_t ∈ ℝ^D represents the inertial motion at a specific moment t. In this study, D was 12 (=6×2), since each of the two smartwatches separately measures 6-DOF (6 degrees of freedom) motion, and T was 100 (approximately 2 s) so that each window x contained at least one full gait cycle. The task in our study was to infer the type of gait activity, y ∈ {C0, C1, …, C4}, where the number of classes C was 5. Our neural network systems, tailored to learn gait features from MTS data, were trained in an end-to-end fashion using state-of-the-art deep learning architectures, including Conv1D [3], long short-term memory (LSTM) [4], and an LSTM with an attention mechanism [5].
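As a minimal sketch of how this pipeline might be implemented, the following PyTorch code segments a recording into (T × D) windows and defines an attention-pooled LSTM classifier. Only T=100, D=12, and C=5 come from the text above; the hidden size, the additive-attention form, and all function names are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch only: hyperparameters and layer choices are assumptions;
# only T=100, D=12, and C=5 are taken from the text.
import numpy as np
import torch
import torch.nn as nn

T, D, C = 100, 12, 5  # window length (~2 s at 50 Hz), channels (6-DOF x 2 wrists), classes

def segment(signal: np.ndarray, window: int = T, step: int = T) -> np.ndarray:
    """Split an (N, D) inertial recording into non-overlapping (window, D) segments."""
    n_windows = (len(signal) - window) // step + 1
    return np.stack([signal[i * step : i * step + window] for i in range(n_windows)])

class AttentiveLSTM(nn.Module):
    """LSTM encoder followed by simple additive attention pooling and a linear classifier."""
    def __init__(self, d_in: int = D, hidden: int = 64, n_classes: int = C):
        super().__init__()
        self.lstm = nn.LSTM(d_in, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores each time step
        self.head = nn.Linear(hidden, n_classes)  # class logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, T, D)
        h, _ = self.lstm(x)                       # (batch, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)    # (batch, T, 1) attention weights
        ctx = (w * h).sum(dim=1)                  # weighted sum over time
        return self.head(ctx)                     # (batch, n_classes)

# Example: one synthetic 60 s recording at 50 Hz -> 30 windows of 2 s each
recording = np.random.randn(3000, D).astype(np.float32)
windows = torch.from_numpy(segment(recording))    # (30, 100, 12)
logits = AttentiveLSTM()(windows)                 # (30, 5)
```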
Ethics Approval
All participants were enrolled after institutional review board (IRB) approval (Sungkyunkwan University IRB approval number: SKKU 2021-12-014).
Results
We employed the accuracy and the macro-averaged F1-score, Fm, as measures of performance. For the both-hands condition, the accuracy (Fm) was 0.9757 (0.9728), 0.9736 (0.9699), and 0.9771 (0.9738) when Conv1D, LSTM, and the attention-based LSTM were employed, respectively. In the left-hand condition, the accuracies (Fm) for the same models were 0.9652 (0.9623), 0.9611 (0.9583), and 0.9630 (0.9592), respectively; in the right-hand condition, they were 0.9724 (0.9706), 0.9673 (0.9643), and 0.9673 (0.9635). We also examined the learned representations, as shown in Figure 2, using t-distributed stochastic neighbor embedding (t-SNE) [6], which visualizes high-dimensional vectors by projecting them into a 2D space such that similar points cluster together.
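For reference, the metrics and the projection shown in Figure 2 could be reproduced with scikit-learn along the following lines; the prediction arrays and the 64-dimensional feature vectors below are placeholders standing in for the models' actual outputs, not the study's data.

```python
# Illustrative evaluation sketch; y_true, y_pred, and `features` are placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=500)        # ground-truth classes C0-C4
y_pred = y_true.copy()                       # stand-in model predictions
features = rng.normal(size=(500, 64))        # stand-in penultimate-layer features

accuracy = accuracy_score(y_true, y_pred)
f_macro = f1_score(y_true, y_pred, average="macro")   # Fm in the text

# Project the high-dimensional features to 2D for visualization (as in Figure 2)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(f"accuracy={accuracy:.4f}, Fm={f_macro:.4f}, embedding shape={embedding.shape}")
```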
Figure 2. Feature visualization using t-distributed stochastic neighbor embedding. Each point is colored according to the predicted class. LSTM: long short-term memory.
Discussion
The experimental results demonstrated an acceptable classification performance (ie, both the accuracy and the Fm score were higher than 0.95). However, there was systematic confusion, such as C3 being recognized as C2 (0.03-0.04 for the left hand, 0.05-0.07 for the right hand, and 0.05-0.06 for both hands), as shown in Figure 2 (see the region highlighted in black). It is noteworthy that the classification performance in the single-hand conditions was similar to that in the both-hands condition, suggesting that wearing a single smartwatch is sufficient for the proposed gait assessment task. From the t-SNE plot, we found that points from the LSTM and attention-based LSTM exhibited a more clustered distribution than those from the Conv1D model. We expect that the proposed approach can be applied to various health care applications for older adults (eg, wearable detection of gait disturbances).
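The confusion rates quoted above are read from a row-normalized confusion matrix (eg, the fraction of true C3 windows predicted as C2). A sketch of that computation, using placeholder predictions rather than the study's results, might look as follows:

```python
# Sketch of deriving row-normalized confusion rates; the arrays are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 5, size=500)
y_pred = y_true.copy()
# Inject a small amount of C3 -> C2 confusion purely for illustration
c3 = np.where(y_true == 3)[0]
y_pred[c3[: len(c3) // 20]] = 2

cm = confusion_matrix(y_true, y_pred, labels=np.arange(5), normalize="true")
print(f"fraction of C3 windows predicted as C2: {cm[3, 2]:.3f}")
```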
Acknowledgments
This work was supported by a grant from the National Research Foundation of Korea (#NRF-2020R1C1C1010666). This work was also supported by Sungkyunkwan University and the BK21 FOUR (Graduate School Innovation) funded by the Ministry of Education (Korea) and the National Research Foundation of Korea.
Authors' Contributions
SCK and BO were responsible for the study concept and design; SCK and HK were involved in development; SCK, HJK, and JP conducted the analysis and interpreted the data; HK provided the visualizations; and all authors helped write the manuscript.
Conflicts of Interest
None declared.
References
1. Kim H, Kim HJ, Park J, Ryu JK, Kim SC. Recognition of fine-grained walking patterns using a smartwatch with deep attentive neural networks. Sensors (Basel) 2021 Sep 24;21(19):6393.
2. Snijders AH, van de Warrenburg BP, Giladi N, Bloem BR. Neurological gait disorders in elderly people: clinical approach and classification. Lancet Neurol 2007 Jan;6(1):63-74.
3. Kim Y. Convolutional neural networks for sentence classification. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2014 Presented at: EMNLP; October 25-29, 2014; Doha, Qatar. p. 1746-1751. URL: https://aclanthology.org/D14-1181.pdf
4. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput 1997 Nov 15;9(8):1735-1780.
5. Luong MT, Pham H, Manning CD. Effective approaches to attention-based neural machine translation. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2015 Presented at: EMNLP; Lisbon, Portugal. p. 1412-1421.
6. Van der Maaten L, Hinton G. Visualizing data using t-SNE. J Mach Learn Res 2008;9:2579-2605.