1. Introduction
In recent years, the use of WiFi channel state information (CSI) for human action recognition has garnered significant attention due to its non-intrusive nature and its potential for application in fields such as healthcare monitoring [1], security surveillance [2], and human–computer interaction [3]. Leveraging advanced machine learning techniques, particularly convolutional neural networks (CNNs), researchers have reported remarkable accuracy rates in identifying human actions solely from WiFi CSI data.
Among the plethora of studies in this domain, a notable IEEE Sensors Journal paper [4] claimed an exceptional accuracy rate of 99% for WiFi CSI-based human action recognition using CNNs. Such a high accuracy rate holds immense promise for practical applications, potentially revolutionizing how human actions are monitored and analyzed in various contexts. However, beneath the surface of these seemingly groundbreaking results lies a critical concern: data leakage. Data leakage, often overlooked or underestimated, poses a significant threat to the integrity and reliability of machine learning models. In the context of WiFi CSI-based human action recognition, data leakage can occur when information from the test set inadvertently leaks into the training process, leading to inflated performance metrics and misleading conclusions.
This study makes several significant contributions to the field of WiFi CSI-based human action recognition using machine/deep learning, particularly in identifying and addressing data leakage.
- Detection of data leakage: Our study detects and analyzes instances of data leakage within the experimental methodology of a prominent IEEE Sensors Journal publication [4]. By meticulously examining the data partitioning methods, performance metrics, and model behavior reported in the original study, we identify inconsistencies and anomalies indicative of data leakage.
- Empirical validation: Through empirical validation and meticulous scrutiny of the dataset and experimental procedures, we provide concrete evidence to support our assertion of data leakage in the original study.
- Recommendations for mitigation: Building upon our findings, we propose practical recommendations for mitigating data leakage and enhancing the integrity of WiFi CSI-based human action recognition using machine/deep learning.
This paper is structured as follows: In Section 2, we provide a brief overview of the background literature related to WiFi CSI-based human action recognition and the significance of addressing data leakage. Section 3 outlines the methodology employed in the original study, followed by a detailed examination of the experimental setup and results in Section 4. In Section 5, we present our critical analysis of the findings, highlighting instances of data leakage and their impact on the reported accuracy rates. Finally, we conclude the paper in Section 6 by summarizing our key findings, discussing the implications of data leakage in WiFi CSI-based human action recognition, and suggesting avenues for future research.
2. Related Work
2.1. Human Action Recognition Based on WiFi Channel State Information
Human action recognition (HAR) algorithms may utilize different data modalities [5], such as RGB image [6], skeleton [7], depth [8], infrared [9], point cloud [10], event stream [11], audio [12], acceleration [13], radar [14], and WiFi [15]. Further, a significant number of studies focus on the fusion of different modalities for HAR [16,17,18,19]. In this section, we review methods utilizing WiFi CSI for HAR. WiFi CSI refers to the data obtained by monitoring the changes in the wireless channel characteristics between a transmitter (such as a WiFi access point) and a receiver (such as a WiFi-enabled device). This information includes various parameters such as signal strength, signal phase, signal-to-noise ratio (SNR), and other channel properties. WiFi CSI is collected by specialized hardware such as software-defined radios (SDRs) or WiFi chipsets that support CSI reporting. It provides detailed insights into the wireless channel’s behavior, allowing for advanced signal processing techniques and analysis. Researchers and engineers use WiFi CSI for various purposes, such as channel estimation [20], localization [21], gesture recognition [22], activity recognition [23], or wireless networking research [24]. WiFi CSI can be used for human action recognition for the following reasons:
Effect of human actions on wireless signals: Human actions, such as gestures or movements, can cause changes in the wireless channel characteristics due to blockage, reflection, or absorption of the WiFi signals. These changes are reflected in the WiFi CSI measurements.
Distinctive patterns in CSI: Different human actions result in characteristic patterns in the WiFi CSI data. For example, a specific gesture may cause a sudden drop or fluctuation in signal strength or phase, which can be detected and recognized through signal processing techniques.
Machine learning algorithms: Advanced machine learning algorithms can be trained to recognize specific human actions based on patterns observed in WiFi CSI data. By collecting labeled WiFi CSI data corresponding to different human actions, classifiers can be trained to recognize and classify these actions accurately in real time.
Due to the broad availability of WiFi signals, many WiFi CSI-based systems for HAR have been proposed in the literature recently. Recent studies on WiFi CSI-based human action recognition have introduced various methods to improve accuracy and address different challenges. For instance, Wang et al. [25] discuss a device-free fall detection system called WiFall, which leverages wireless signal propagation models and CSI to detect falls without the need for wearable devices. Specifically, it employs the time variability and spatial diversity of CSI. The system consists of a two-phase detection architecture: a local outlier factor-based algorithm to identify abnormal CSI series and activity classification using a one-class support vector machine (SVM) [26] to distinguish falls from other human activities. In contrast, the detection of large-scale human movements was the goal in the WiSee [27], WiTrack [28], Wi-Vi [29], and E-eyes [15] projects. Specifically, Pu et al. [27] extracted human gesture and motion information from wireless signals using the Doppler shift property, which results in a pattern of frequency shifts at the wireless receiver when a user performs a gesture or moves. WiSee also maps these Doppler shifts to gestures by leveraging the continuous nature of human gestures and classifying them using a binary pattern-matching algorithm. Additionally, the system works effectively in the presence of multiple users by utilizing multiple-input multiple-output capabilities to focus on gestures and motion from a specific user. In the WiTrack [28] project, 3D human motion tracking was carried out based on radio frequency reflections from a human body. Further, WiTrack can also provide coarse tracking of larger body parts, such as legs or arms. In the Wi-Vi [29] project, Adib et al. demonstrated that the detection of human movements is also possible behind walls and doors in a closed room. In the E-eyes [15] project, researchers introduced a low-cost system for identifying activities in home environments using WiFi access points and devices. The system uses the cumulative moving variance of CSI samples to determine the presence of walking or in-place activities. For activity identification, it employs dynamic time warping [30] for walking activities and the earth mover’s distance [31] for in-place activities, comparing the testing CSI measurements to known activity profiles. In [32], Halperin et al. released a publicly available tool for WiFi CSI collection and processing according to the 802.11n standard [33] for a specific Intel chipset. Alternatives for Atheros chipsets were provided by Xie et al. [34] and Tsakalaki and Schäfer [35].
Recent studies in the field of WiFi CSI-based human action recognition have heavily utilized different deep-learning architectures and techniques. For instance, Chen et al. [36] applied a Bi-LSTM with an attention mechanism to learn from CSI amplitude and phase characteristics. Similarly, Guo et al. [37] applied an LSTM network, but combined it with a CNN. In contrast, Zhang et al. [38] proposed adversarial auto-encoder networks for CSI signal security. Jiang et al.’s framework [39] consisted of three main components: the feature extractor, the activity recognizer, and the domain discriminator. The feature extractor, a CNN, collaborates with the activity recognizer to recognize human activities and simultaneously aims to fool the domain discriminator in order to learn environment/subject-independent representations. The domain discriminator is designed to identify the environment where activities are recorded, forcing the feature extractor to produce environment-independent activity features. Zhu et al. [40] combined causal and dilated convolutions to implement a temporal convolutional network.
2.2. Data Leakage in Machine Learning Models
Research on data leakage in machine learning models spans a variety of contexts and methodologies [41]. Data leakage occurs when information from the test set unintentionally influences the training process, leading to inflated performance metrics and misleading conclusions. This can happen due to various reasons, including improper data partitioning, feature engineering, or preprocessing techniques. Data leakage undermines the generalizability of machine learning models, as they may learn spurious correlations rather than genuine patterns in the data. The consequences of data leakage may extend beyond the realm of machine learning algorithms. In fields such as healthcare, finance, and security, relying on models affected by data leakage can have dire consequences [42,43,44]. Misguided decisions based on inaccurate predictions can result in financial losses, compromised patient care, or breaches in security protocols [45,46,47]. In [48], Poldrack et al. pointed out that a number of papers in the field of neuroimaging may have suffered from data leakage by performing dimensionality reduction across the whole dataset before the train/test split. Kapoor and Narayanan [49] identified eight types of leakage, i.e., not having a separate test set, preprocessing on the training and test sets, feature selection jointly on the training and test sets, duplicate data points, illegitimate features, temporal leakage, non-independence between the training and test sets, and sampling bias. Further, the authors identified 329 studies across 17 fields containing data leakage.
3. Methodology
The HAR framework ImgFi [4] addresses the challenges of recognizing human activities using WiFi CSI data by converting the information into images and applying a CNN as an image classifier for improved performance, as illustrated in Figure 1. By introducing five CSI imaging approaches, i.e., recurrence plot (RP) transformation [50], Gramian angular summation field (GASF) transformation [51], Gramian angular difference field (GADF) transformation [51], Markov transition field (MTF) transformation [52], and short-time Fourier transformation (STFT) [53], the framework demonstrates the advantages and limitations of each method for CSI imaging. Since the authors of [4] found that RP slightly outperforms GASF, GADF, and STFT and significantly outperforms MTF, they adopted RP for CSI imaging in ImgFi. Accordingly, RP was also applied in our analysis and reimplementation of ImgFi. RP transformation is a method used in time series analysis and nonlinear dynamics to visualize the recurrence behavior of a dynamical system [54]. It is particularly useful for detecting hidden patterns, periodicities, and other nonlinear structures within time series data.
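To make the RP transformation concrete, the following minimal sketch computes a binary recurrence plot from a single CSI amplitude stream. The fixed threshold (eps) and the use of unembedded scalar states are simplifying assumptions of ours; practical RP pipelines typically apply time-delay embedding and tune the threshold per dataset.

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot of a 1D series x: R[i, j] = 1 iff |x[i] - x[j]| < eps."""
    x = np.asarray(x, dtype=float)
    # Pairwise distance matrix of the (scalar) states via broadcasting.
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < eps).astype(np.uint8)

# Toy usage: a noisy sine as a stand-in for one CSI amplitude stream.
t = np.linspace(0, 4 * np.pi, 128)
csi_amplitude = np.sin(t) + 0.05 * np.random.randn(t.size)
rp_image = recurrence_plot(csi_amplitude, eps=0.2)
print(rp_image.shape)  # (128, 128) image that can be fed to a CNN
```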
3.1. Structure of ImgFi
The structure of the proposed ImgFi CNN model is depicted in Figure 2. This model was reimplemented in PyTorch [55] strictly following the authors’ description. As can be seen in Figure 2, ImgFi consists of four convolutional layers and a fully connected layer, which serves as the final classifier. Furthermore, the authors applied batch normalization [56], rectified linear unit (ReLU) activation, and a max pooling layer between every two convolutional layers.
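For illustration, a minimal PyTorch sketch of this structure is given below. The layer ordering follows the description above, while the channel widths, kernel sizes, and the global average pooling before the classifier are our assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ImgFiSketch(nn.Module):
    """Four conv layers + one fully connected classifier; widths are assumed."""
    def __init__(self, num_classes=16, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1),
            nn.Conv2d(16, 32, 3, padding=1),
            # BN, ReLU, and max pooling between every two convolutional layers.
            nn.BatchNorm2d(32), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1),
            nn.Conv2d(64, 128, 3, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.mean(dim=(2, 3))  # global average pooling before the classifier
        return self.classifier(x)

model = ImgFiSketch(num_classes=16)
print(model(torch.randn(2, 1, 128, 128)).shape)  # torch.Size([2, 16])
```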
Besides the implementation of ImgFi, the authors of [4] reported the performance of several CNNs pretrained on the ImageNet database [60], such as ShuffleNet [57], VGG19 [58], ResNet18 [59], and ResNet50 [59]. ShuffleNet [57] is a CNN architecture designed for efficient computation and memory usage, particularly suited for mobile and embedded devices. It employs group convolutions and channel shuffling to significantly reduce computational complexity while maintaining high accuracy in image classification tasks. VGG19 [58] is composed of 19 layers, including convolutional layers followed by max-pooling layers, topped with fully connected layers. The architecture alternates convolutional layers with small 3×3 filters and 2×2 max-pooling layers, which allows VGG19 to capture complex patterns at different scales in the input images. The last few layers of VGG19 are fully connected layers responsible for high-level reasoning and classification. ResNet18 and ResNet50 [59] are both CNN architectures developed by Microsoft Research as part of the ResNet (residual network) family. They are designed to address the vanishing gradient problem encountered in very deep neural networks by introducing skip connections or shortcuts. The main characteristics of the examined CNNs are summarized in Table 1.
Since a detailed description of the finetuning process of these architectures is not given in the original publication [4], we briefly summarize our procedure here. First, the fully connected layers were removed from the pretrained CNN models, because these are specific to the original task (ImageNet classification). Second, we added new fully connected layers on top of the convolutional base. These layers are specific to our new task; in particular, the number of nodes in the final fully connected layer matches the number of classes in our dataset.
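As a sketch of this procedure, the snippet below replaces the ImageNet-specific head of a pretrained ResNet18 with a new fully connected layer sized to the number of action classes (16 for WiAR). Freezing the convolutional base is an optional addition of ours, not a step prescribed in [4], and the weights= argument assumes a recent torchvision version.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace its classifier head
# so the output size matches our number of action classes.
num_classes = 16
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# Optionally freeze the convolutional base and train only the new head first.
for name, param in backbone.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False
```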
3.2. Detected Data Leakage
The authors of [4] did not dedicate a separate section or paragraph to dataset partitioning in their paper, yet they claimed an exceptional accuracy rate of 99%. In the experimental results, we empirically corroborate that the authors divided the dataset without respect to individual subjects, a crucial oversight that undermines the integrity of the study. Without subject-based partitioning of the training, validation, and test sets, CSI images originating from the same individual can overlap across these sets; samples from a given individual may then influence the training process and inflate the reported accuracy metrics. Without a systematic approach ensuring the exclusivity of subjects across the dataset partitions, the study’s findings are susceptible to biases and inaccuracies, undermining the credibility of the proposed methodology.
For clarity, Figure 3 and Figure 4 illustrate dataset partitioning with respect to and without respect to humans, respectively. As depicted in Figure 3, partitioning with respect to humans ensures the exclusivity of subjects across training, validation, and test sets, avoiding the risk of data leakage. In contrast, Figure 4 demonstrates dataset division without respect to humans. In this incorrect strategy, CSI images are generated first (the exact number of CSI channels available in a WiFi system may vary depending on the specific implementation and hardware capabilities [61]) and are subsequently divided at random into training, validation, and test sets. As a consequence, samples from one human can easily occur in both the training and test sets, leading to data leakage.
Why does dataset partitioning without respect to humans cause data leakage? Consider a scenario where we develop a product for WiFi-based human action recognition and intend to sell it in another country. In this new market, the demographic composition, cultural norms, and individual identities will undoubtedly differ from those in our local setting. If the model is trained without proper subject-based partitioning, it may inadvertently learn patterns specific to the individuals in our local dataset. Consequently, when deployed in a country where the population characteristics differ, the model’s performance may degrade significantly. By partitioning the data with respect to humans, we ensure that the model learns generalizable patterns of human actions rather than idiosyncratic features of specific individuals. This approach not only enhances the model’s adaptability to diverse populations but also promotes its robustness and reliability across different cultural contexts.
Empirical analysis confirms that the authors generated CSI images first, and subsequently partitioned the dataset without respect to human identities. Through meticulous examination of the dataset, it became evident that CSI images originating from the same individual were distributed across the training, validation, and test sets without proper isolation. This critical oversight in the data partitioning process introduces a significant risk of data leakage, as features specific to individual subjects may inadvertently influence model training and evaluation, leading to inflated performance metrics. By empirically corroborating the lack of subject-based data partitioning, we underscore the necessity of adhering to rigorous data management protocols to ensure the integrity and reliability of machine learning studies.
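A minimal sketch of subject-based partitioning, assuming per-sample subject IDs are available, is given below; it uses scikit-learn's GroupShuffleSplit so that no subject can appear in more than one partition. The 0.6/0.2/0.2 ratios match Table 3, while the array names and the two-stage split are illustrative choices of ours.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def split_by_subject(X, y, subjects, seed=0):
    """Split into roughly 0.6/0.2/0.2 train/val/test with no subject overlap."""
    outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    trainval_idx, test_idx = next(outer.split(X, y, subjects))
    inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=seed)
    tr, va = next(inner.split(X[trainval_idx], y[trainval_idx], subjects[trainval_idx]))
    return trainval_idx[tr], trainval_idx[va], test_idx

# Toy usage: ten CSI images from five subjects.
X = np.arange(20).reshape(10, 2)
y = np.zeros(10, dtype=int)
subjects = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
train_idx, val_idx, test_idx = split_by_subject(X, y, subjects)
assert not set(subjects[train_idx]) & set(subjects[test_idx])  # no shared subjects
```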
4. Results
4.1. Data Details and Training
In the ImgFi study [4], the authors used three publicly available datasets, i.e., WiAR [62], SAR [63], and Widar3.0 [64], as well as a dataset of their own, to test the proposed CNN-based solutions. Since WiAR [62] and Widar3.0 [64] are the largest among the used publicly available datasets, we opted to use WiAR and Widar3.0 to demonstrate the detected data leakage issue. Table 2 gives information on the action labels and the dataset sizes.
Table 3 shows the parameter setting used in the training of ImgFi and in the finetuning of the pretrained CNN models. Unlike [4], we divided the WiFi CSI-based HAR datasets with respect to the human subjects into training, validation, and test sets. As a consequence, CSI data originating from the same individual cannot be distributed across the training, validation, and test sets. As already mentioned, we empirically corroborate that the data split in [4] was carried out without respect to the human subjects, which results in the exceptionally high reported classification accuracy.
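The following minimal sketch shows a training loop consistent with Table 3. Interpreting the decay rate of 0.8 as a per-epoch exponential learning-rate decay is our assumption, and the stand-in model and random tensors serve only to make the snippet self-contained.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and data; in the real experiments the model is ImgFi
# (or a finetuned CNN) and the inputs are RP images.
model = nn.Linear(64, 16)
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 64), torch.randint(0, 16, (256,))),
    batch_size=128,  # batch size from Table 3
)

criterion = nn.CrossEntropyLoss()  # loss function from Table 3
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.8)

for epoch in range(20):  # 20 epochs (Table 3)
    model.train()
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # decay the learning rate by 0.8 after each epoch
```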
4.2. Evaluation Metrics
To ensure a fair comparison, our analysis uses exactly the same evaluation metrics as proposed in [4]. Since the applied datasets are balanced, accuracy, precision, recall, and F1 were determined for each human action label, and subsequently their arithmetic mean was taken. In the examined classification problem, accuracy for each category can be expressed using the terms true positive (TP), true negative (TN), false positive (FP), and false negative (FN) as
$$\mathrm{Accuracy}_i = \frac{TP_i + TN_i}{TP_i + TN_i + FP_i + FN_i}, \quad (1)$$
where the subscript $i$ denotes the $i$th action category.
Precision and recall for each category can be given as
$$\mathrm{Precision}_i = \frac{TP_i}{TP_i + FP_i} \quad (2)$$
$$\mathrm{Recall}_i = \frac{TP_i}{TP_i + FN_i} \quad (3)$$
Similarly, F1 for each category can be expressed as
$$F1_i = \frac{2 \cdot \mathrm{Precision}_i \cdot \mathrm{Recall}_i}{\mathrm{Precision}_i + \mathrm{Recall}_i} \quad (4)$$
If the number of categories is denoted by N, the overall accuracy, precision, recall, and F1 can be determined as follows:
$$\mathrm{Accuracy} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{Accuracy}_i \quad (5)$$
$$\mathrm{Precision} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{Precision}_i \quad (6)$$
$$\mathrm{Recall} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{Recall}_i \quad (7)$$
$$F1 = \frac{1}{N} \sum_{i=1}^{N} F1_i \quad (8)$$
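These macro-averaged quantities can be computed directly, as the toy example below illustrates using scikit-learn. Note that accuracy_score returns the overall accuracy, which is not identical to the per-category average of Equation (5), since the latter also counts true negatives.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = np.array([0, 0, 1, 1, 2, 2])  # toy ground-truth action labels
y_pred = np.array([0, 1, 1, 1, 2, 0])  # toy predictions

# Macro averaging computes per-category precision, recall, and F1 first
# and then takes their arithmetic mean, as in Equations (6)-(8).
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
acc = accuracy_score(y_true, y_pred)  # overall accuracy, for illustration only
print(f"acc={acc:.3f} prec={prec:.3f} rec={rec:.3f} f1={f1:.3f}")
```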
4.3. Numerical Results
The numerical results are presented in Table 4 and Table 5. Specifically, the results reported in [4] are compared with the results of the two dataset partitioning protocols, without respect to and with respect to humans. Our findings reveal that the results of ImgFi’s retraining without respect to humans are slightly lower than the reported 99.9% precision; several factors may contribute to this discrepancy. One possible explanation could be the application of a data augmentation technique in [4] that was not reported in the paper. Additionally, our retraining process aimed to replicate the methodology of the original study to the best of our ability, but minor variations in the implementation may have influenced the final performance metrics. Despite the slight disparity in results, our analysis underscores the significance of rigorous data management practices, such as subject-based partitioning, for the integrity and reliability of machine learning studies in the domain of WiFi CSI-based human action recognition.
Upon retraining the model with data partitioning carried out with respect to humans, we observed a notable decrease in the performance metrics compared to the results reported in the IEEE Sensors Journal paper [4]. Specifically, our retraining yielded a precision of 23.4%, recall of 22.8%, and F1 score of 22.0% on WiAR and a precision of 47.4%, recall of 45.6%, and F1 score of 43.9% on Widar3.0. These findings highlight the critical role of proper data partitioning in ensuring the integrity and reliability of model evaluation. While the decrease in performance may be discouraging, it underscores the necessity of adhering to rigorous data management practices to obtain more accurate and generalizable results.
The training curves of ResNet18’s retraining without respect to humans and with respect to humans are depicted in Figure 5 and Figure 6, respectively. These figures support several interesting conclusions. When the data split is conducted without respect to humans, we observe a strong correlation between training and validation accuracy, with validation accuracy closely tracking training accuracy albeit with a small difference. This consistency suggests that the model is effectively learning from the training data and apparently generalizing well to the validation data; however, since the validation set shares subjects with the training set under this protocol, this apparent generalization is itself inflated by the leakage. Conversely, when the data split is performed with respect to humans, a notable disparity emerges between training and validation accuracy. Despite a consistent increase in training accuracy, validation accuracy saturates, indicating that the model fails to generalize to unseen subjects. This discrepancy suggests that the model overfits to the individuals in the training set, a problem that subject-based partitioning exposes rather than causes. The stark contrast in the behavior of training and validation accuracy highlights the critical role of data partitioning methodology in evaluating model performance accurately, and underscores the necessity of subject-based data partitioning to obtain reliable estimates of model generalization and mitigate the risks of overfitting.
5. Discussion
The empirical corroboration of data leakage in the ImgFi [4] WiFi CSI-based human action recognition study underscores the importance of rigorous data management practices in machine learning research [66,67]. The presence of data leakage compromises the validity and generalizability of the study’s findings. By allowing CSI images from the same individual to influence both the training and evaluation processes, the reported accuracy metrics are likely inflated, leading to an overestimation of the model’s performance. Consequently, the proposed approach may not accurately generalize to unseen data or real-world scenarios, undermining its practical utility.
Our recommendations for avoiding data leakage in WiFi CSI-based HAR are the following:
- Subject-based data partitioning: Future studies should prioritize subject-based data partitioning to ensure the exclusivity of individuals across training, validation, and test sets. By maintaining strict isolation of subjects, researchers can mitigate the risk of data leakage and obtain more reliable performance estimates.
- Transparent reporting: Researchers should provide detailed documentation of data partitioning procedures to facilitate reproducibility and scrutiny of the study’s methodology. Transparent reporting enables reviewers and readers to identify potential methodological flaws, such as data leakage, and assess the reliability of the reported results.
- Publishing training curves: Publishing training curves enables other researchers to replicate and validate the presented results more effectively. By providing detailed insights into the model’s training process, a researcher can facilitate transparency and reproducibility in the field, contributing to the advancement of knowledge and best practices (a minimal logging sketch follows this list).
- Careful reviewing: Reviewers play a crucial role in ensuring the integrity and reliability of published research, including identifying and addressing potential data leakage issues. Reviewers should carefully scrutinize the methodology section to ascertain how the data were partitioned for training, validation, and testing. Specifically, reviewers should look for explicit descriptions of how subjects or samples were allocated to each partition and assess whether the partitioning strategy adequately prevents information leakage between sets.
- Guidance from dataset publishers: Publishers of publicly available databases for machine learning should consider providing clear and comprehensive guidance on appropriate data partitioning methodologies to assist researchers in conducting robust experiments and accurately evaluating model performance. By offering recommendations for correct train/validation/test split procedures, publishers can empower researchers to adopt best practices in data management and mitigate the risk of common pitfalls, such as data leakage. This guidance should include detailed instructions on subject-based partitioning, cross-validation techniques, and transparent reporting of data preprocessing steps to foster transparency and reproducibility in machine learning research.
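As referenced in the list above, the following minimal sketch records per-epoch metrics and saves the resulting training curves; the metric values are placeholders and the plotting layout is our own choice.

```python
import matplotlib.pyplot as plt

# Record per-epoch metrics during training so the curves can be published
# alongside the final numbers; the values here are placeholders.
history = {"train_acc": [], "val_acc": []}
for epoch in range(20):
    # ... one training epoch and one validation pass would run here ...
    history["train_acc"].append(0.0)
    history["val_acc"].append(0.0)

plt.plot(history["train_acc"], label="training accuracy")
plt.plot(history["val_acc"], label="validation accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.savefig("training_curves.png", dpi=150)
```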
Addressing data leakage is crucial for ensuring the integrity and reliability of machine learning studies. By implementing rigorous data management practices, such as subject-based partitioning and transparent reporting, researchers can enhance the validity and generalizability of their findings.
6. Conclusions
In this study, we conducted a critical analysis of WiFi channel state information (CSI)-based human action recognition using convolutional neural networks (CNNs), with a specific focus on the issue of data leakage. Through empirical investigation and meticulous scrutiny of the methodology, experimental setup, and results presented in a notable IEEE Sensors Journal paper [4], we identified instances of data leakage that undermine the integrity and reliability of the reported findings. Our analysis revealed that the authors did not implement subject-based data partitioning, leading to the inadvertent inclusion of CSI images from the same individual across the training, validation, and test sets. This critical oversight introduced a significant risk of data leakage, whereby information from the test set leaked into the training process, resulting in inflated accuracy metrics and misleading conclusions.
In conclusion, addressing data leakage is essential for advancing the field of machine learning and ensuring the reliability and generalizability of research findings. By identifying and rectifying methodological pitfalls, we can strengthen the foundations of machine learning research and pave the way for more robust and impactful applications in diverse domains. The overly optimistic performance metrics reported in studies affected by data leakage may inadvertently create a false sense of accomplishment, discouraging other researchers from critically examining the underlying methodologies and contributing to a stagnation in the advancement of the field.
The data used in this study is available for download at
We would like to express our sincere gratitude to our colleagues Gábor Sörös, Ferenc Kovács, and Chung Shue Chen for their invaluable feedback and constructive comments on the manuscript. Their insights and suggestions have greatly contributed to the clarity and rigor of this work. We are also deeply grateful to our manager, Lóránt Farkas, for his unwavering support and encouragement throughout the research project. We extend our heartfelt appreciation to Krisztián Varga for his invaluable assistance and expertise in GPU computing. His guidance and support have been instrumental in optimizing our computational workflows and accelerating the progress of this research project. We would like to express our heartfelt gratitude to the entire team of Nokia Bell Labs, Budapest for fostering an environment of collaboration, support, and positivity throughout the duration of this project. Finally, we thank the anonymous reviewers and the academic editor for their careful reading of our manuscript and their many insightful comments and suggestions.
The author declares no conflicts of interest.
The following abbreviations are used in this manuscript:
CNN | convolutional neural network |
CSI | channel state information |
GADF | Gramian angular difference field |
GASF | Gramian angular summation field |
GPU | graphics processing unit |
HAR | human action recognition |
IEEE | Institute of Electrical and Electronics Engineers |
MTF | Markov transition field |
ReLU | rectified linear unit |
ResNet | residual network |
RP | recurrence plot |
SDR | software-defined radio |
SNR | signal-to-noise ratio |
SVM | support vector machine |
VGG | visual geometry group |
Figure 2. Structure of ImgFi model [4] which consists of four convolutional layers and a fully-connected layer which is the final classifier in this structure. Further, batch normalization and max pooling were applied between every two convolutional layers.
Figure 3. Illustration of dataset split with respect to humans. Humans are exclusively allocated to either training, validation, or test sets, ensuring independence and preventing data leakage between partitions.
Figure 4. Illustration of dataset split without respect to humans. Humans are not allocated exclusively to training, validation, or test sets, but CSI images are generated first and then these CSI images are randomly divided into training, validation, and test sets. As a consequence, samples from one human can very easily occur both in the training and test sets leading to data leakage.
Figure 5. Retraining of ResNet18 without respect to humans. In the upper figure, the training accuracy is depicted in blue, while the validation accuracy is shown in black. In the bottom figure, training loss is shown in red and validation loss is illustrated in black.
Figure 6. Retraining of ResNet18 with respect to humans. In the upper figure, the training accuracy is depicted in blue, while the validation accuracy is shown in black. In the bottom figure, training loss is shown in red and validation loss is illustrated in black.
Table 1. Main characteristics of the examined CNNs pretrained on ImageNet [60].
CNN | Depth | Size | Parameters (Millions)
---|---|---|---
ShuffleNet [57] | 50 | 5.4 MB | 1.4
VGG19 [58] | 19 | 535 MB | 144
ResNet18 [59] | 18 | 44 MB | 11.7
ResNet50 [59] | 50 | 96 MB | 25.6
Table 2. Dataset details.
Dataset Name | Action Labels | Dataset Size
---|---|---
WiAR [62] | two hands wave, high throw, horizontal arm wave, draw tick, toss paper, walk, side kick, bend, forward kick, drink water, sit down, draw X, phone call, hand clap, high arm wave, squat | 62,415 images
Widar3.0 [64] | push, sweep, clap, slide, draw-Z, draw-N | 80,000 images
Table 3. Parameter setting.
Parameter | Value |
---|---|
Dataset partitioning | Training/validation/test (0.6/0.2/0.2). Split is carried out w.r.t. humans. |
Loss function | Cross-entropy |
Optimizer | Adam [65] |
Learning rate | 0.001 |
Decay rate | 0.8 |
Batch size | 128 |
Epochs | 20 |
Table 4. Comparison of percentages on WiAR.
| Reported in [4] | | | Retrained without respect to humans | | | Retrained with respect to humans | | |
---|---|---|---|---|---|---|---|---|---
Architecture | Prec. | Rec. | F1 | Prec. | Rec. | F1 | Prec. | Rec. | F1
ShuffleNet | 99.4 | 99.4 | 99.4 | 94.5 | 94.5 | 94.3 | 20.4 | 20.0 | 19.9 |
VGG19 | 99.8 | 99.7 | 99.7 | 94.6 | 94.4 | 94.4 | 20.5 | 19.9 | 19.9 |
ResNet18 | 99.8 | 99.8 | 99.7 | 88.1 | 88.0 | 88.0 | 15.3 | 14.7 | 14.6 |
ResNet50 | 99.8 | 99.8 | 99.8 | 94.5 | 94.5 | 94.0 | 20.7 | 19.8 | 19.8 |
ImgFi | 99.9 | 99.8 | 99.8 | 99.0 | 99.0 | 98.9 | 23.4 | 22.8 | 22.0 |
Table 5. Comparison of percentages on Widar3.0.
| Reported in [4] | | | Retrained without respect to humans | | | Retrained with respect to humans | | |
---|---|---|---|---|---|---|---|---|---
Architecture | Prec. | Rec. | F1 | Prec. | Rec. | F1 | Prec. | Rec. | F1
ShuffleNet | 99.3 | 99.3 | 99.3 | 99.1 | 99.1 | 99.1 | 40.7 | 39.6 | 39.5 |
VGG19 | 99.8 | 99.7 | 99.6 | 99.7 | 99.7 | 99.6 | 41.0 | 39.5 | 39.8 |
ResNet18 | 99.8 | 99.8 | 99.7 | 97.9 | 97.9 | 97.9 | 30.3 | 29.3 | 29.2 |
ResNet50 | 99.8 | 99.8 | 99.8 | 99.3 | 99.2 | 99.2 | 41.4 | 39.6 | 39.4 |
ImgFi | 99.8 | 99.8 | 99.8 | 99.5 | 99.5 | 99.5 | 47.4 | 45.6 | 43.9 |
References
1. Khan, U.M.; Kabir, Z.; Hassan, S.A. Wireless health monitoring using passive WiFi sensing. Proceedings of the 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC); Valencia, Spain, 26–30 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1771-1776.
2. Sruthy, S.; George, S.N. WiFi enabled home security surveillance system using Raspberry Pi and IoT module. Proceedings of the 2017 IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES); Kollam, India, 8–10 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1-6.
3. Zhang, R.; Jiang, C.; Wu, S.; Zhou, Q.; Jing, X.; Mu, J. Wi-Fi sensing for joint gesture recognition and human identification from few samples in human–computer interaction. IEEE J. Sel. Areas Commun.; 2022; 40, pp. 2193-2205. [DOI: https://dx.doi.org/10.1109/JSAC.2022.3155526]
4. Zhang, C.; Jiao, W. Imgfi: A high accuracy and lightweight human activity recognition framework using csi image. IEEE Sens. J.; 2023; 23, pp. 21966-21977. [DOI: https://dx.doi.org/10.1109/JSEN.2023.3296445]
5. Sun, Z.; Ke, Q.; Rahmani, H.; Bennamoun, M.; Wang, G.; Liu, J. Human action recognition from various data modalities: A review. IEEE Trans. Pattern Anal. Mach. Intell.; 2022; 45, pp. 3200-3225. [DOI: https://dx.doi.org/10.1109/TPAMI.2022.3183112]
6. Hao, Z.; Zhang, Q.; Ezquierdo, E.; Sang, N. Human action recognition by fast dense trajectories. Proceedings of the 21st ACM International Conference on Multimedia; Barcelona, Spain, 21–25 October 2013; pp. 377-380.
7. Du, Y.; Wang, W.; Wang, L. Hierarchical recurrent neural network for skeleton based action recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Boston, MA, USA, 7–12 June 2015; pp. 1110-1118.
8. Sanchez-Caballero, A.; de López-Diz, S.; Fuentes-Jimenez, D.; Losada-Gutiérrez, C.; Marrón-Romera, M.; Casillas-Perez, D.; Sarker, M.I. 3dfcnn: Real-time action recognition using 3d deep neural networks with raw depth information. Multimed. Tools Appl.; 2022; 81, pp. 24119-24143. [DOI: https://dx.doi.org/10.1007/s11042-022-12091-z]
9. Akula, A.; Shah, A.K.; Ghosh, R. Deep learning approach for human action recognition in infrared images. Cogn. Syst. Res.; 2018; 50, pp. 146-154. [DOI: https://dx.doi.org/10.1016/j.cogsys.2018.04.002]
10. Munaro, M.; Ballin, G.; Michieletto, S.; Menegatti, E. 3D flow estimation for human action recognition from colored point clouds. Biol. Inspired Cogn. Archit.; 2013; 5, pp. 42-51. [DOI: https://dx.doi.org/10.1016/j.bica.2013.05.008]
11. Huang, C. Event-based action recognition using timestamp image encoding network. arXiv; 2020; arXiv: 2009.13049
12. Gao, R.; Oh, T.H.; Grauman, K.; Torresani, L. Listen to look: Action recognition by previewing audio. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Seattle, WA, USA, 14–19 June 2020; pp. 10457-10467.
13. Micucci, D.; Mobilio, M.; Napoletano, P. Unimib shar: A dataset for human activity recognition using acceleration data from smartphones. Appl. Sci.; 2017; 7, 1101. [DOI: https://dx.doi.org/10.3390/app7101101]
14. Hernangómez, R.; Santra, A.; Stańczak, S. Human activity classification with frequency modulated continuous wave radar using deep convolutional neural networks. Proceedings of the 2019 International Radar Conference (RADAR); Toulon, France, 23–27 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1-6.
15. Wang, Y.; Liu, J.; Chen, Y.; Gruteser, M.; Yang, J.; Liu, H. E-eyes: Device-free location-oriented activity identification using fine-grained wifi signatures. Proceedings of the 20th Annual International Conference on Mobile Computing and Networking; Maui, HI, USA, 7–11 September 2014; pp. 617-628.
16. Dawar, N.; Kehtarnavaz, N. A convolutional neural network-based sensor fusion system for monitoring transition movements in healthcare applications. Proceedings of the 2018 IEEE 14th International Conference on Control and Automation (ICCA); Anchorage, AK, USA, 12–15 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 482-485.
17. Khaire, P.; Imran, J.; Kumar, P. Human activity recognition by fusion of rgb, depth, and skeletal data. Proceedings of the 2nd International Conference on Computer Vision & Image Processing: CVIP 2017; Roorkee, India, 9–12 September 2017; Springer: Berlin/Heidelberg, Germany, 2018; Volume 1, pp. 409-421.
18. Ardianto, S.; Hang, H.M. Multi-view and multi-modal action recognition with learned fusion. Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC); Honolulu, HI, USA, 12–15 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1601-1604.
19. Yu, J.; Cheng, Y.; Zhao, R.W.; Feng, R.; Zhang, Y. Mm-pyramid: Multimodal pyramid attentional network for audio-visual event localization and video parsing. Proceedings of the 30th ACM International Conference on Multimedia; Lisboa, Portugal, 10–14 October 2022; pp. 6241-6249.
20. Xie, H.; Gao, F.; Jin, S. An overview of low-rank channel estimation for massive MIMO systems. IEEE Access; 2016; 4, pp. 7313-7321. [DOI: https://dx.doi.org/10.1109/ACCESS.2016.2623772]
21. Wu, K.; Xiao, J.; Yi, Y.; Chen, D.; Luo, X.; Ni, L.M. CSI-based indoor localization. IEEE Trans. Parallel Distrib. Syst.; 2012; 24, pp. 1300-1309. [DOI: https://dx.doi.org/10.1109/TPDS.2012.214]
22. Ahmed, H.F.T.; Ahmad, H.; Aravind, C. Device free human gesture recognition using Wi-Fi CSI: A survey. Eng. Appl. Artif. Intell.; 2020; 87, 103281. [DOI: https://dx.doi.org/10.1016/j.engappai.2019.103281]
23. Gao, Q.; Wang, J.; Ma, X.; Feng, X.; Wang, H. CSI-based device-free wireless localization and activity recognition using radio image features. IEEE Trans. Veh. Technol.; 2017; 66, pp. 10346-10356. [DOI: https://dx.doi.org/10.1109/TVT.2017.2737553]
24. De Kerret, P.; Gesbert, D. CSI sharing strategies for transmitter cooperation in wireless networks. IEEE Wirel. Commun.; 2013; 20, pp. 43-49. [DOI: https://dx.doi.org/10.1109/MWC.2013.6472198]
25. Wang, Y.; Wu, K.; Ni, L.M. Wifall: Device-free fall detection by wireless networks. IEEE Trans. Mob. Comput.; 2016; 16, pp. 581-594. [DOI: https://dx.doi.org/10.1109/TMC.2016.2557792]
26. Kecman, V. Support vector machines—An introduction. Support Vector Machines: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1-47.
27. Pu, Q.; Gupta, S.; Gollakota, S.; Patel, S. Whole-home gesture recognition using wireless signals. Proceedings of the 19th Annual International Conference on Mobile Computing & Networking; Miami, FL, USA, 30 September–4 October 2013; pp. 27-38.
28. Adib, F.; Kabelac, Z.; Katabi, D.; Miller, R.C. 3D tracking via body radio reflections. Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14); Seattle, WA, USA, 2–4 April 2014; pp. 317-329.
29. Adib, F.; Katabi, D. See through walls with WiFi!. Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM; Hong Kong, China, 12–16 August 2013; pp. 75-86.
30. Müller, M. Dynamic time warping. Information Retrieval for Music and Motion; Springer: Heidelberg, Germany, 2007; pp. 69-84.
31. Ling, H.; Okada, K. An efficient earth mover’s distance algorithm for robust histogram comparison. IEEE Trans. Pattern Anal. Mach. Intell.; 2007; 29, pp. 840-853. [DOI: https://dx.doi.org/10.1109/TPAMI.2007.1058] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17356203]
32. Halperin, D.; Hu, W.; Sheth, A.; Wetherall, D. Tool release: Gathering 802.11n traces with channel state information. ACM SIGCOMM Comput. Commun. Rev.; 2011; 41, 53. [DOI: https://dx.doi.org/10.1145/1925861.1925870]
33. Van Nee, R.; Jones, V.; Awater, G.; Van Zelst, A.; Gardner, J.; Steele, G. The 802.11n MIMO-OFDM standard for wireless LAN and beyond. Wirel. Pers. Commun.; 2006; 37, pp. 445-453. [DOI: https://dx.doi.org/10.1007/s11277-006-9073-2]
34. Xie, Y.; Li, Z.; Li, M. Precise power delay profiling with commodity WiFi. Proceedings of the 21st Annual International Conference on Mobile Computing and Networking; Paris, France, 7–11 September 2015; pp. 53-64.
35. Tsakalaki, E.; Schäfer, J. On application of the correlation vectors subspace method for 2-dimensional angle-delay estimation in multipath ofdm channels. Proceedings of the 2018 14th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob); Limassol, Cyprus, 15–17 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1-8.
36. Chen, Z.; Zhang, L.; Jiang, C.; Cao, Z.; Cui, W. WiFi CSI based passive human activity recognition using attention based BLSTM. IEEE Trans. Mob. Comput.; 2018; 18, pp. 2714-2724. [DOI: https://dx.doi.org/10.1109/TMC.2018.2878233]
37. Guo, L.; Zhang, H.; Wang, C.; Guo, W.; Diao, G.; Lu, B.; Lin, C.; Wang, L. Towards CSI-based diversity activity recognition via LSTM-CNN encoder-decoder neural network. Neurocomputing; 2021; 444, pp. 260-273. [DOI: https://dx.doi.org/10.1016/j.neucom.2020.02.137]
38. Zhang, W.; Zhou, S.; Peng, D.; Yang, L.; Li, F.; Yin, H. Understanding and modeling of WiFi signal-based indoor privacy protection. IEEE Internet Things J.; 2020; 8, pp. 2000-2010. [DOI: https://dx.doi.org/10.1109/JIOT.2020.3015994]
39. Jiang, W.; Miao, C.; Ma, F.; Yao, S.; Wang, Y.; Yuan, Y.; Xue, H.; Song, C.; Ma, X.; Koutsonikolas, D. et al. Towards environment independent device free human activity recognition. Proceedings of the 24th Annual International Conference on Mobile Computing and Networking; New Delhi, India, 29 October–2 November 2018; pp. 289-304.
40. Zhu, A.; Tang, Z.; Wang, Z.; Zhou, Y.; Chen, S.; Hu, F.; Li, Y. Wi-ATCN: Attentional temporal convolutional network for human action prediction using WiFi channel state information. IEEE J. Sel. Top. Signal Process.; 2022; 16, pp. 804-816. [DOI: https://dx.doi.org/10.1109/JSTSP.2022.3163858]
41. Domnik, J.; Holland, A. On data leakage prevention and machine learning. Proceedings of the 35th Bled eConference Digital Restructuring and Human (Re) Action; Bled, Slovenia, 26–29 June 2022; 695.
42. Samala, R.K.; Chan, H.P.; Hadjiiski, L.; Koneru, S. Hazards of data leakage in machine learning: A study on classification of breast cancer using deep neural networks. Medical Imaging 2020: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2020; Volume 11314, pp. 279-284.
43. Chiavegatto Filho, A.; Batista, A.F.D.M.; Dos Santos, H.G. Data leakage in health outcomes prediction with machine learning. comment on “prediction of incident hypertension within the next year: Prospective study using statewide electronic health records and machine learning”. J. Med Internet Res.; 2021; 23, e10969. [DOI: https://dx.doi.org/10.2196/10969]
44. Rosenblatt, M.; Tejavibulya, L.; Jiang, R.; Noble, S.; Scheinost, D. Data leakage inflates prediction performance in connectome-based machine learning models. Nat. Commun.; 2024; 15, 1829. [DOI: https://dx.doi.org/10.1038/s41467-024-46150-w] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38418819]
45. Hannun, A.; Guo, C.; van der Maaten, L. Measuring data leakage in machine-learning models with fisher information. Uncertainty in Artificial Intelligence; PMLR: Cambridge MA, USA, 2021; pp. 760-770.
46. Stock, A.; Gregr, E.J.; Chan, K.M. Data leakage jeopardizes ecological applications of machine learning. Nat. Ecol. Evol.; 2023; 7, pp. 1743-1745. [DOI: https://dx.doi.org/10.1038/s41559-023-02162-1]
47. Yang, M.; Zhu, J.J.; McGaughey, A.; Zheng, S.; Priestley, R.D.; Ren, Z.J. Predicting extraction selectivity of acetic acid in pervaporation by machine learning models with data leakage management. Environ. Sci. Technol.; 2023; 57, pp. 5934-5946. [DOI: https://dx.doi.org/10.1021/acs.est.2c06382]
48. Poldrack, R.A.; Huckins, G.; Varoquaux, G. Establishment of best practices for evidence for prediction: A review. JAMA Psychiatry; 2020; 77, pp. 534-540. [DOI: https://dx.doi.org/10.1001/jamapsychiatry.2019.3671] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31774490]
49. Kapoor, S.; Narayanan, A. Leakage and the reproducibility crisis in machine-learning-based science. Patterns; 2023; 4, 100804. [DOI: https://dx.doi.org/10.1016/j.patter.2023.100804] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37720327]
50. Eckmann, J.P.; Kamphorst, S.O.; Ruelle, D. Recurrence plots of dynamical systems. World Sci. Ser. Nonlinear Sci. Ser. A; 1995; 16, pp. 441-446.
51. Wang, Z.; Oates, T. Imaging time-series to improve classification and imputation. arXiv; 2015; arXiv: 1506.00327
52. Jiang, J.R.; Yen, C.T. Markov transition field and convolutional long short-term memory neural network for manufacturing quality prediction. Proceedings of the 2020 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan); Taoyuan, Taiwan, 28–30 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1-2.
53. Sejdić, E.; Djurović, I.; Jiang, J. Time–frequency feature representation using energy concentration: An overview of recent advances. Digit. Signal Process.; 2009; 19, pp. 153-183. [DOI: https://dx.doi.org/10.1016/j.dsp.2007.12.004]
54. Marwan, N.; Romano, M.C.; Thiel, M.; Kurths, J. Recurrence plots for the analysis of complex systems. Phys. Rep.; 2007; 438, pp. 237-329. [DOI: https://dx.doi.org/10.1016/j.physrep.2006.11.001]
55. Ketkar, N.; Moolayil, J.; Ketkar, N.; Moolayil, J. Introduction to pytorch. Deep Learning with Python: Learn Best Practices of Deep Learning Models with PyTorch; Springer: Berlin/Heidelberg, Germany, 2021; pp. 27-91.
56. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of the International Conference on Machine Learning; Lille, France, 6–11 July 2015; PMLR: Cambridge MA, USA, 2015; pp. 448-456.
57. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Salt Lake City, UT, USA, 18–22 June 2018; pp. 6848-6856.
58. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv; 2014; arXiv: 1409.1556
59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA, 27–30 June 2016; pp. 770-778.
60. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. Imagenet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition; Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 248-255.
61. Li, M.; Meng, Y.; Liu, J.; Zhu, H.; Liang, X.; Liu, Y.; Ruan, N. When CSI meets public WiFi: Inferring your mobile phone password via WiFi signals. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security; Vienna, Austria, 24–28 October 2016; pp. 1068-1079.
62. Guo, L.; Wang, L.; Lin, C.; Liu, J.; Lu, B.; Fang, J.; Liu, Z.; Shan, Z.; Yang, J.; Guo, S. Wiar: A public dataset for wifi-based activity recognition. IEEE Access; 2019; 7, pp. 154935-154945. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2947024]
63. Brinke, J.K.; Meratnia, N. Scaling activity recognition using channel state information through convolutional neural networks and transfer learning. Proceedings of the First International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things; New York, NY, USA, 10–13 November 2019; pp. 56-62.
64. Zhang, Y.; Zheng, Y.; Qian, K.; Zhang, G.; Liu, Y.; Wu, C.; Yang, Z. Widar3.0: Zero-effort cross-domain gesture recognition with wi-fi. IEEE Trans. Pattern Anal. Mach. Intell.; 2021; 44, pp. 8671-8688. [DOI: https://dx.doi.org/10.1109/TPAMI.2021.3105387] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34406937]
65. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv; 2014; arXiv: 1412.6980
66. Götz-Hahn, F.; Hosu, V.; Lin, H.; Saupe, D. KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild. IEEE Access; 2021; 9, pp. 72139-72160. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3077642]
67. Saupe, D.; Hahn, F.; Hosu, V.; Zingman, I.; Rana, M.; Li, S. Crowd workers proven useful: A comparative study of subjective video quality assessment. Proceedings of the QoMEX 2016: 8th International Conference on Quality of Multimedia Experience; Lisbon, Portugal, 6–8 June 2016.
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
WiFi Channel State Information (CSI)-based human action recognition using convolutional neural networks (CNNs) has emerged as a promising approach for non-intrusive activity monitoring. However, the integrity and reliability of the reported performance metrics are susceptible to data leakage, wherein information from the test set inadvertently influences the training process, leading to inflated accuracy rates. In this paper, we conduct a critical analysis of a notable IEEE Sensors Journal study on WiFi CSI-based human action recognition, uncovering instances of data leakage resulting from the absence of subject-based data partitioning. Empirical investigation corroborates the lack of exclusivity of individuals across dataset partitions, underscoring the importance of rigorous data management practices. Furthermore, we demonstrate that employing data partitioning with respect to humans results in significantly lower precision rates than the reported 99.9% precision, highlighting the exaggerated nature of the original findings. Such inflated results could potentially discourage other researchers and impede progress in the field by fostering a sense of complacency.