Introduction
Cough is consistently ranked among the most common reasons for seeking medical attention. 1,2 Acute cough frequently indicates a new-onset and potentially contagious respiratory infection, 3 while chronic cough can be an important cause of discomfort and disability affecting quality of life. 4,5 In current medical practice, objective cough assessment can only occur during face-to-face interaction with the patient in the context of in- or outpatient visits, effectively making the symptom invisible to the health care provider outside medical settings. To assess cough in ambulatory settings, health care providers rely on questionnaires and patient-reported outcomes, which are subject to patients' self-perception, cough tolerance and recall bias. 6,7 While different systems for automated cough detection have been developed in the last decade, 8,9 they depend on wearable microphones or spirometers, 10 and their adoption is limited by cost, portability and privacy concerns, given the need for continuous sound recording. Recent advances in artificial intelligence (AI) allow cough to be monitored non-obtrusively using smartphones or other wearable digital devices. 6,11-14 Unobtrusive, privacy-preserving passive cough monitors could revolutionize clinical practice and research in the field of respiratory diseases.
Longitudinal monitoring of cough is particularly attractive for evaluating disease progression or treatment response, as well as in clinical trials where trends in cough rates are an outcome of interest. Longitudinal cough monitoring also opens the door to population-wide capture of cough signals as a surrogate marker of respiratory disease epidemiology. 14
Evaluating cough and its patterns over limited recording periods (e.g., 24 h) can be misleading, particularly when only small changes in cough frequency are captured within the limited 24 h window or when cough counts have high variance. 12 However, the nature and volume of data generated by protracted monitoring raise new challenges for technology validation. A central challenge in this work is establishing a gold standard against which the performance of automated devices can be evaluated.
In this study, we present the accuracy of the Hyfe Cough Tracker app (henceforth referred to as Hyfe), a smartphone-based automated cough monitor that uses a convolutional neural network (CNN) to differentiate coughs from other explosive sounds. 13 In this “in-vitro” performance evaluation, we use solicited sounds in a controlled acoustic environment as a first step towards clinical validation. We also propose a standard operating procedure (SOP) for appropriately labeling cough sounds in continuous audio recordings.
Methods
Automated cough detection system
Hyfe is a software application for patient use, freely available on Android and iOS smartphones. It continuously monitors ambient sound and employs a two-step process: (1) it detects explosive cough-like sounds and records a 0.5-second sound snippet, which is sent to a cloud server, where (2) a CNN assigns a cough prediction score (0 to 1) to each sound. At the time of this analysis, Hyfe’s CNN model had been trained on more than 200M real-world cough and cough-like samples, collected from multiple countries and multiple microphone-enabled devices. For this study, a minimum score threshold of 0.85 was used to classify a peak sound as a cough. Within this study, Hyfe (version acl 1.24.4) was installed on smartphones (Motorola G30, Motorola Inc., Chicago, IL, USA) running Android operating system version 11 (Google LLC, Mountain View, CA, USA).
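As a minimal illustration of the second step, the R sketch below (R being the language used for the study's analysis) applies the 0.85 score threshold to CNN prediction scores. This is not Hyfe's production code, and the column names and values are illustrative:

```r
# Illustrative sketch: classify detected explosive sounds as coughs when the
# CNN prediction score meets the study's minimum threshold of 0.85.
detections <- data.frame(
  time_s = c(3.2, 17.5, 42.1),   # seconds from session start (made up)
  score  = c(0.97, 0.41, 0.88)   # CNN cough prediction scores in [0, 1]
)
threshold <- 0.85
detections$is_cough <- detections$score >= threshold
detections  # the first and third sounds would be classified as coughs
```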
To assess the accuracy of Hyfe, continuous recording with an MP3 recorder (Sony ICD-PX470, Sony, Tokyo, Japan) and manual labeling of coughs by medically trained listeners were used as the gold standard.
Study design
This performance evaluation study was conducted at the University of Navarra, Spain, between September and November 2021 and was nested in a larger cohort study ( ClinicalTrials.gov NCT04762693). 12 Both the main and the nested study received approval from the Medical Research Ethics Committee of the chartered community of Navarra (PI_2020/107). Students and staff from the University of Navarra were invited to participate via email. All participants were aged 18 or older and signed informed consent. Baseline respiratory symptoms were not considered for inclusion. Participants were asked to produce a series of solicited sounds by reading a provided script while being recorded with the MP3 recorder and monitored by Hyfe on two identical smartphones. The phones and recorder were placed on a table approximately 50 cm from the participants, with microphones oriented towards them.
A pre-generated computer script instructed participants to produce a series of 46 sounds within the same 10-minute session; 18 of these were coughs, and the rest consisted of solicited sneezes, throat clearings, and spoken letters or words. Participants were instructed to cough once every time the script prompted them to do so. In total, the script instructed each participant to cough 20 times (18 as isolated coughs and 2 within a literary text), sneeze 10 times, clear their throat 5 times and produce 15 other sounds (plosive words, for example, “paella”, and numbers such as “93”). Some sounds were requested while reading a literary text (in Spanish) out loud. Outside the reading, solicited sounds were separated from one another by at least five seconds of ambient silence. There were five different versions of the script, each presenting a different sequence of instructions, and the version shown to each participant was randomly selected using a computer-generated sequence at the beginning of each session. Recording sessions took place in a quiet room and lasted approximately 10 minutes. Audio was recorded at a 44.1 kHz sampling rate with 16-bit depth. The time at which individual sounds were produced was automatically recorded in every session. Sound intensity levels in the room were also monitored using a UNI-T mini sound level meter. The room was not acoustically insulated.
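For illustration, the R sketch below shows how one randomized script version could be assembled from the isolated solicited sounds. This is an assumption about the generation procedure, not the study's actual script generator, and it omits the literary-text portion:

```r
# Hypothetical sketch of assembling one script version: shuffle the isolated
# solicited sounds (18 coughs, 10 sneezes, 5 throat clears, 15 words/numbers).
set.seed(1)  # each of the five script versions used a different sequence
script <- sample(c(rep("cough", 18), rep("sneeze", 10),
                   rep("throat_clear", 5), rep("word_or_number", 15)))
head(script, 8)  # order in which a participant would be prompted
```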
Three medically trained researchers listened to the individual recording sessions using Audacity (Audacity Team (2021). Audacity(R): Free Audio Editor and Recorder [Computer application], version 3.1.3). 15 Coughs were manually annotated using the digital audio recordings and visual waveform representations. It has previously been shown that ambulatory cough counts from audio recordings agree closely with patient video recordings, and digital audio recordings can therefore be considered the gold standard for validating novel cough monitoring tools. 16,17 Each sound was labeled using a 4-tier system defined in the SOP, which was developed for cough annotation in continuous audio recordings. In brief, sounds were classified as 0 = definitely not a cough, 1 = disputable cough (i.e., someone could consider the sound a cough), 2 = definite cough but distant/muffled/obstructed, 3 = definite cough. Labels were made in Audacity and exported as text files for analysis. Labelers were blinded to the classifications made by Hyfe and by the other listeners, but knew a participant’s age and gender. Sounds unanimously labeled as category 3 (“definite cough”) by all human listeners were considered true coughs.
Sample size
We estimated that at least 385 sounds would be required to detect a sensitivity of 90% and a specificity of 85%, assuming a cough prevalence of 40% (39% of solicited sounds in the script were coughs), a precision of 5%, and a dropout rate of 10%. 18,19
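The reported figure is consistent with Buderer's formulas for sample size in diagnostic studies (see ref. 18); the R sketch below reconstructs the calculation under that assumption:

```r
# Assumed reconstruction of the sample size calculation (Buderer's formulas).
z  <- qnorm(0.975)        # 1.96 for a 95% confidence level
se <- 0.90; sp <- 0.85    # anticipated sensitivity and specificity
p  <- 0.40                # cough prevalence among solicited sounds
d  <- 0.05                # desired precision (half-width of the CI)

n_se <- z^2 * se * (1 - se) / (d^2 * p)        # sounds needed for sensitivity
n_sp <- z^2 * sp * (1 - sp) / (d^2 * (1 - p))  # sounds needed for specificity
ceiling(max(n_se, n_sp) / 0.90)                # inflate for 10% dropout: 385
```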
Data processing and analysis
Labels created by listeners (in Audacity) and coughs detected by Hyfe were first manually synchronized to within two seconds of each other (within the five seconds of ambient silence separating the solicited sounds in the automated script). Synchronization was then carried out for each phone and each session separately by identifying the time offset that would align Hyfe detections with the labels and adjusting the Hyfe detection timestamps accordingly. Offsets were estimated first with a subroutine in R that iteratively tests the offset error produced by a wide range of candidate values, and then by manually reviewing and adjusting those automatic offsets as needed.
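A simplified sketch of such an offset search in R follows; this is an assumption about the subroutine's logic, and the function and argument names are illustrative. For each candidate offset it counts how many shifted detections fall within the two-second tolerance of a human label, and keeps the best-scoring candidate:

```r
# Hypothetical offset search: times are numeric seconds from session start.
best_offset <- function(label_times, detection_times,
                        candidates = seq(-60, 60, by = 0.5), tol = 2) {
  matches <- sapply(candidates, function(off) {
    shifted <- detection_times + off
    # count detections landing within `tol` seconds of any human label
    sum(sapply(shifted, function(t) any(abs(label_times - t) <= tol)))
  })
  candidates[which.max(matches)]   # offset that aligns the most detections
}
```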
For the performance analysis, each recording session was divided into seconds. Seconds in which at least one explosive cough-like sound was labeled by a human listener (categories 1, 2, or 3) were pooled and defined as “cough-like seconds”. Individual labels placed by the listeners within one second of each other were treated as a single label and counted as a single “cough-second”. Similarly, seconds in which only non-cough sounds (category 0) occurred were identified as “non-cough seconds”.
Hyfe detections on each phone were pooled into cough-seconds using a similar method: all detected explosive sounds occurring within a one-second period were treated as a single detection, and if multiple explosive sounds occurred within a cough-second, the highest cough prediction score among them was used as the prediction score for that cough-second.
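The pooling rule can be illustrated with the short R sketch below (assumed data layout, not the study's code): events falling in the same second collapse into a single record that carries the highest prediction score.

```r
# Illustrative pooling of events into one-second analysis units.
pool_to_seconds <- function(times, scores) {
  sec <- floor(times)                  # numeric seconds from session start
  data.frame(second = sort(unique(sec)),
             score  = as.numeric(tapply(scores, sec, max)))  # max score/second
}
pool_to_seconds(times = c(12.1, 12.6, 30.4), scores = c(0.91, 0.97, 0.88))
# -> second 12 keeps score 0.97; second 30 keeps score 0.88
```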
All recorded seconds were treated as distinct analysis units. Seconds for which the three human listeners disagreed were excluded from the final analysis. Similarly, 10-minute sessions in which fewer than 10 sounds were unanimously labeled as coughs by the human listeners were considered of inadequate quality and excluded ( Figure 1). True positives (TP) were defined as cough-seconds detected by Hyfe and unanimously classified as category 3 by all human listeners. False positives (FP) were defined as seconds in which no cough actually occurred but one was incorrectly detected by Hyfe. Pooled sensitivity and specificity values for each phone were obtained by aggregating the cough- and non-cough seconds labeled and detected throughout all sessions included in the analysis. Sensitivity was calculated as the fraction of TP among cough-seconds, and specificity was calculated from the fraction of FP among all non-cough seconds using the formula: Specificity = 1 - (FP / non-cough seconds).
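Under these definitions the pooled metrics reduce to simple counts over seconds, as in the following R sketch (illustrative variable names):

```r
# gold:     logical, TRUE if the second was a unanimous category-3 cough-second
# detected: logical, TRUE if Hyfe classified the second as a cough-second
performance <- function(gold, detected) {
  tp <- sum(gold & detected)       # cough-seconds correctly detected
  fp <- sum(!gold & detected)      # non-cough seconds incorrectly flagged
  c(sensitivity = tp / sum(gold),
    specificity = 1 - fp / sum(!gold))  # Specificity = 1 - (FP / non-coughs)
}
```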
Figure 1.
Study flow chart.
Given inter-participant variation in the ability to generate coughs and other sounds, the performance characteristics of Hyfe for each combination of phone and session were also individually assessed and used to calculate average sensitivity and specificity in an exploratory sub-analysis.
All data processing and analysis was performed in R version 4.0.2 (R Core Team, 2020); the code used is available from GitHub and archived with Zenodo. 23
This analysis further informed the SOP used by Hyfe to annotate coughs and cough-like sounds (sneezes and throat clears), leading to the most recent version, a 6-tier SOP for cough labeling in continuous audio recordings, which now also instructs annotators to label the complete duration of target sounds.
Because the utility of a cough monitor lies not in noting individual coughs but in tracking cough rates, we further analyzed these results to compare the overall performance of Hyfe against the human-annotated gold standard. We cut the entire observation period for all participants into one-minute segments, then compared the gold standard (the number of coughs during that minute according to the human annotators) against the tool (the number of cough detections by Hyfe).
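A sketch of this rate-level comparison in R (assuming label and detection times are stored as seconds from the start of observation; the function name is illustrative) bins events into minutes and reports the Pearson correlation together with the linear fit presented later in Table 3:

```r
# Illustrative minute-level comparison of gold-standard and Hyfe cough counts.
rate_agreement <- function(label_secs, detect_secs) {
  minutes <- 0:(max(floor(c(label_secs, detect_secs))) %/% 60)
  gold <- tabulate(findInterval(label_secs,  minutes * 60), length(minutes))
  hyfe <- tabulate(findInterval(detect_secs, minutes * 60), length(minutes))
  list(pearson = cor(gold, hyfe),   # correlation of per-minute cough rates
       fit     = lm(hyfe ~ gold))   # intercept and slope of the linear model
}
```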
Results
In total, 49 recording sessions of approximately 10 minutes each were carried out with individual participants. Two sessions did not have enough labels or detections to allow adequate timestamp synchronization and were excluded. Ten sessions did not have at least 10 sounds unanimously labeled as coughs and were also excluded, leaving 37 sessions, with 672 unanimously labeled cough-seconds and 1,007 non-cough seconds for the final performance evaluation ( Figure 1).
The performance of Hyfe on both phones was similar in the pooled analysis and is presented in Figure 2. Summary statistics from the separate per-session analyses of sensitivity and specificity are presented in Table 1, showing a median sensitivity of 0.944 and a median specificity of 1.000 for both phones. In the pooled analysis, phone 1 yielded a sensitivity of 91.5% (95% CI: 89.2%-93.5%) and a specificity of 99.3% (95% CI: 98.6%-99.7%, Table 2), while phone 2 yielded a sensitivity of 92.55% (95% CI: 90.3%-94.4%) and a specificity of 98.7% (95% CI: 97.8%-99.3%, Table 2). The performance of both phones in individual sessions was also evaluated: the average sensitivity across both phones and the 37 sessions was 90.8% (SD = 11.6%). Specificity was high on both phones (range 93%-100% for phone 1 and 89%-100% for phone 2), with a mean specificity of 99.1% (SD = 1.9%). Sound levels in the room never exceeded 110 dB during the study.
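The reported intervals are consistent with exact binomial (Clopper-Pearson) confidence intervals computed from the counts in Table 2, although whether this exact method was used is an assumption; for phone 1:

```r
# Exact binomial 95% CIs from the phone 1 counts in Table 2 (assumed method).
binom.test(615, 672)$conf.int    # sensitivity ~91.5% (89.2%-93.5%)
binom.test(1000, 1007)$conf.int  # specificity ~99.3% (98.6%-99.7%)
```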
Figure 2.
Performance of both phones through the 37 studied sessions.
Sensitivity and specificity of Hyfe Cough Tracker assessed using solicited coughs.
Table 1.
Summary statistics on sensitivity and specificity for both phones used in individual sessions.
| | Minimum | First quartile | Median | Third quartile | Maximum |
|---|---|---|---|---|---|
| Sensitivity | | | | | |
| Phone 1 | 0.556 | 0.867 | 0.944 | 1.000 | 1.000 |
| Phone 2 | 0.545 | 0.889 | 0.944 | 1.000 | 1.000 |
| Specificity | | | | | |
| Phone 1 | 0.929 | 1.000 | 1.000 | 1.000 | 1.000 |
| Phone 2 | 0.893 | 0.991 | 1.000 | 1.000 | 1.000 |
Table 2.
Comparative performance of both phones used.
Rows show Hyfe’s classification; columns show the human labels for each phone.

| Hyfe’s classification | Phone 1: cough seconds | Phone 1: non-cough seconds | Phone 1: total | Phone 2: cough seconds | Phone 2: non-cough seconds | Phone 2: total |
|---|---|---|---|---|---|---|
| Cough seconds | 615 | 7 | 622 | 622 | 13 | 635 |
| Non-cough seconds | 57 | 1000 | 1057 | 50 | 994 | 1044 |
| Total | 672 | 1007 | 1679 | 672 | 1007 | 1679 |
In three recording sessions (sessions 2, 17 and 38), Hyfe had a sensitivity of around 55% ( Figure 3), even though these sessions met the quality criterion of at least 10 sounds unanimously classified as coughs. Potential explanations for this performance include the acoustic characteristics of the solicited coughs from these particular participants and the level of background noise. Coughs in sessions 2 and 17 had uncommon acoustic characteristics, such as biphasic decibel peaks and different spectrographic features, while session 38 had considerably more background noise than the others. Sensitivity for session 20 was not evaluated because this participant, who had refractory chronic cough, generated hundreds of out-of-script coughs, making timestamping impossible. We found the Pearson correlation of Hyfe with the gold standard to be 0.968 ( Figure 4), with an intercept of -3.535 and a slope of 1.214 for phone 1, and an intercept of -3.248 and a slope of 1.213 for phone 2 ( Table 3). The linear analysis ( Figure 5) and the Bland-Altman plot based on percentage error ( Figure 6) for the agreement between human-annotated coughs and Hyfe cough detections are also presented.
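The percentage-error Bland-Altman computation can be sketched in R as follows; the exact definition of percentage error is an assumption (here, the per-minute difference expressed as a percentage of the gold-standard count):

```r
# Illustrative Bland-Altman summary on percentage error.
bland_altman_pct <- function(gold, hyfe) {
  keep    <- gold > 0                               # avoid division by zero
  pct_err <- 100 * (hyfe[keep] - gold[keep]) / gold[keep]
  c(bias  = mean(pct_err),                          # mean percentage error
    lower = mean(pct_err) - 1.96 * sd(pct_err),     # 95% limits of agreement
    upper = mean(pct_err) + 1.96 * sd(pct_err))
}
```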
Figure 3.
Performance of the Hyfe Cough Tracker in individual recording sessions.
Figure 4.
Correlation between the gold standard (human annotator) on the x-axis and the monitor (Hyfe) on the y-axis.
Points are intentionally jittered by up to 0.3 units to improve visibility in high-density areas. The diagonal line (slope = 1, intercept = 0) represents where each point would fall in the hypothetical case of a perfect monitor.
Table 3.
Linear model parameter estimates for both phones.

| | Pearson correlation | Intercept | Slope |
|---|---|---|---|
| Phone 1 | 0.986 | -3.535 | 1.214 |
| Phone 2 | 0.989 | -3.248 | 1.213 |
Figure 5.
Linear analysis plot of human-annotated coughs and Hyfe-detected coughs.
Figure 6.
Bland-Altman plot of human-annotated coughs versus percentage error in Hyfe cough detections.
Limitations
The major limitation is that this performance evaluation was conducted in a laboratory (“in-vitro”) environment rather than in a community or clinical setting. During the study, phones were placed 50 cm from the participant with microphones oriented towards them; in real-life clinical scenarios with coughing patients these conditions would vary, with potentially longer distances and obstructing objects between patient and device.
Discussion
The ability to unobtrusively monitor cough has the potential to greatly improve patient care, public health and drug development. The uptake of cough monitoring technologies will be determined by their usability, their clinical performance and growing evidence that they can provide actionable information for clinical decision making. Hyfe has advantages over existing cough monitors in that it can run in the background of a smartphone and passively monitor coughs for periods longer than 24 hours. Rather than relying on special equipment and limited time windows for continuous cough monitoring, this novel system improves the efficiency of monitoring and reduces its costs.
There are many ways to assess the accuracy of cough detectors. The intrinsic, or analytical, performance of AI-based cough monitors derives directly from their algorithm’s sensitivity and specificity in labeling recorded sounds. However, the same monitoring technologies may perform differently when deployed in various clinical settings, where the acquisition of such sounds may itself be challenging, leading to coughs going unrecorded or to recorded non-cough sounds being misclassified. We previously reported on the analytical performance of Hyfe. 13 Here we report on its pre-clinical performance using scripted, solicited coughs in a controlled environment.
Defining a gold standard for the performance evaluation of passive cough monitors is a challenge, which we addressed with standardized procedures ensuring inter-observer consensus among human listeners. This process and our results highlight three important issues in evaluating cough monitors. First, it is critical to have a precise method for aligning different data streams; lacking one resulted in the exclusion of two sessions. Going forward, we propose the use of a distinct auditory signal, or “coda”, played at the beginning of each session so that both the continuous audio recorder and the smartphone running the app capture a series of characteristic peak sounds that can be used for timestamping and alignment. The coda currently used in Hyfe-related studies is available on YouTube. Second, although solicited coughs have been used to validate cough-counting devices in the past, 20 and previous literature reports that spontaneous and solicited coughs have similar acoustic characteristics, 21 we found substantial differences in the sound of solicited coughs from different study participants. When asked to cough voluntarily, ten of the 49 research subjects generated sounds that were not unanimously recognized as coughs by the human annotators. This observation raises questions about the utility of solicited coughs for diagnostic purposes. Finally, there are interpersonal differences in how annotators classify sounds, which forced us to exclude 88 sounds from the analysis. This has prompted additional efforts to minimize interobserver variability by developing clear operating procedures and training programs for cough annotators. We propose that such protocols be shared, and that consensus be sought, to facilitate comparison of monitoring technologies.
Hyfe employs convolutional neural networks, which learn by example: as long as the training data are relatively unbiased and representative, a neural network can identify a “feature” (such as the acoustic signature of a cough) in a myriad of samples, even if those samples do not resemble each other. Following this study, we believe that labeling the full duration of a cough, rather than just its beginning, has greater value both for further training Hyfe’s AI model and for analyzing agreement between human listeners and with Hyfe ( Figure 7). We therefore proposed the updated 6-tier version of the SOP, which is currently being used for cough labeling in continuous audio recordings.
Figure 7.
Example labels indicating cough placed by a human listener in a single second.
Purple arrows indicate labels placed in this study, according to the 4-tier SOP. The gray arrow indicates how this audio segment would be annotated using the updated 6-tier SOP.
Environmental sounds may interfere with capturing coughs in real life, as seen in the sensitivity of session 38 ( Figure 3); however, continuous improvement of the AI peak detection models and cough classifiers may address this issue in the near future. Although we did not observe any significant differences between the smartphones used in this trial, the version of the smartphone operating system may in some cases affect the device’s general usability and user experience.
Overcoming these challenges, we evaluated Hyfe’s accuracy using 1,679 solicited sounds generated by a total of 37 subjects. Hyfe’s overall sensitivity and specificity were 91% and 98%, respectively, and did not differ significantly between the two phones. Importantly, we consider the more relevant performance parameter to be the Pearson correlation between the cough rates measured by the device and the gold standard (human annotation), which was 0.968. We propose that, going forward, analyses of cough monitors use the correlation in rates (gold standard vs. monitor detections) as the primary metric of performance. Although we used one-minute windows (owing to the highly condensed nature of the study), in most continuous monitoring use cases coughs per hour is likely to be the clearest and most useful period of observation.
Of note, performance was lower in four subjects, presumably due to the intrinsic acoustic characteristics of their solicited coughs and the level of background noise.
Our own data from more than 400 hours of monitoring multiple patients with respiratory diseases in real-world environments show a clear correlation between total coughs and cough-seconds; this work is being prepared for publication. We are also analyzing cough-seconds and the notion of cough bouts in continuing work. In the meantime, the objective of this work was to analyze performance in detecting, capturing and classifying coughs from solicited sounds in a controlled environment.
Further validation studies will need to be conducted in the specific clinical settings in which Hyfe is intended to be used. To better contextualize and design such trials, target product performance specifications will be required and are expected to differ significantly between use cases. Lessons can be learned from other types of monitors, such as fitness trackers, whose results can differ from each other by up to 30%, 22 whereas regulated medical devices used in clinical practice will require greater precision. The data presented here are encouraging, suggesting that Hyfe’s performance is adequate to proceed to validation in a clinical context. Taken together, these results show that AI-enabled systems might provide a valuable tool for objectively and unobtrusively monitoring cough.
Data availability
Underlying data
GitHub: hyfe-ai/navarra_performance, https://doi.org/10.5281/zenodo.7936608. 23
This project contains the following R scripts and data:
• 01.results.R (takes pre-formatted datasets, carries out the performance evaluation, and plots results)
• detections.csv (Hyfe detections data)
• hyfe_performance.R (analysis of Hyfe performance)
• labels.csv (human-labeled data)
• offsets_emk.csv (automatic and manual offsets applied to the data)
Software
Software available from:
R version 4.0.2 (R Core Team, 2020), available from https://cran.r-project.org/bin/windows/base/old/4.0.2/
Hyfe, version acl 1.24.4, available from https://www.hyfe.ai/
Audacity | Free, open source, cross-platform audio software for multi-track recording and editing, available from https://www.audacityteam.org/
1 Cornford CS: Why patients consult when they cough: a comparison of consulting and non-consulting patients. Br. J. Gen. Pract. 1998;48:1751-1754. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/10198482]
2 Motulsky A, Weir DL, Liang MQ, et al.: Patient-initiated consultations in community pharmacies. Res. Soc. Adm. Pharm. 2021;17:428-440. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32201106]
3 World Health Organization (WHO): WHO operational handbook on tuberculosis. Module 2: screening - systematic screening for tuberculosis disease. 2021.
4 Tashkin DP, Volkmann ER, Tseng CH, et al.: Improved Cough and Cough-Specific Quality of Life in Patients Treated for Scleroderma-Related Interstitial Lung Disease: Results of Scleroderma Lung Study II. Chest. 2017;151:813-820. [DOI: https://dx.doi.org/10.1016/j.chest.2016.11.052] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28012804]
5 McCallion P, De Soyza A: Cough and bronchiectasis. Pulm. Pharmacol. Ther. 2017;47:77-83. [DOI: https://dx.doi.org/10.1016/j.pupt.2017.04.010]
6 Kvapilova L, Boza V, Dubec P, et al.: Continuous Sound Collection Using Smartphones and Machine Learning to Measure Cough. Digit. Biomark. 2019;3:166-175. [DOI: https://dx.doi.org/10.1159/000504666] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32095775]
7 Irwin RS: Assessing cough severity and efficacy of therapy in clinical research: ACCP evidence-based clinical practice guidelines. Chest. 2006;129:232S-237S. [DOI: https://dx.doi.org/10.1378/chest.129.1_suppl.232S] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16428716]
8 Birring SS, Fleming T, Matos S, et al.: The Leicester Cough Monitor: preliminary validation of an automated cough detection system in chronic cough. Eur. Respir. J. 2008;31:1013-1018. [DOI: https://dx.doi.org/10.1183/09031936.00057407]
9 Crooks MG, Hayman Y, Innes A, et al.: Objective Measurement of Cough Frequency During COPD Exacerbation Convalescence. Lung. 2016;194:117-120. [DOI: https://dx.doi.org/10.1016/j.resinv.2017.08.008] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26280405]
10 Soliński M, Łepek M, Kołtowski Ł: Automatic cough detection based on airflow signals for portable spirometry system. Inform. Med. Unlocked. 2020;18:100313. [DOI: https://dx.doi.org/10.1016/j.imu.2020.100313]
11 Porter P, Abeyratne U, Swarnkar V, et al.: A prospective multicentre study testing the diagnostic accuracy of an automated cough sound centred analytic system for the identification of common respiratory disorders in children. Respir. Res. 2019;20:81. [DOI: https://dx.doi.org/10.1186/s12931-019-1046-6] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31167662]
12 Gabaldón-Figueira JC, Keen E, Rudd M, et al.: Longitudinal passive cough monitoring and its implications for detecting changes in clinical status. ERJ Open Res. 2022;8(2):00001-2022. [DOI: https://dx.doi.org/10.1183/23120541.00001-2022] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35586452]
13 Gabaldon-Figueira JC, Brew J, Doré DH, et al.: Digital acoustic surveillance for early detection of respiratory disease outbreaks in Spain: a protocol for an observational study. BMJ Open. 2021;11:e051278. [DOI: https://dx.doi.org/10.1136/bmjopen-2021-051278] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34215614]
14 Gabaldón-Figueira JC, Keen E, Giménez G, et al.: Acoustic surveillance of cough for detecting respiratory disease using artificial intelligence. ERJ Open Res. 2022;8:00053-2022. [DOI: https://dx.doi.org/10.1183/23120541.00053-2022] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35651361]
15 Audacity Team: Audacity(R): free, open source, cross-platform audio software for multi-track recording and editing [Computer application]. https://www.audacityteam.org/
16 Smith JA, Earis JE, Woodcock AA: Establishing a gold standard for manual cough counting: video versus digital audio recordings. Cough. 2006;2:6. [DOI: https://dx.doi.org/10.1186/1745-9974-2-6] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16887019]
17 Lake C, Briffa P, Munoz P, et al.: Documentation of cough provoked during a mannitol challenge using acoustic respiratory monitoring compared to video surveillance monitoring [conference abstract]. Respirology. 2012;17:1.
18 Hajian-Tilaki K: Sample size estimation in diagnostic test studies of biomedical informatics. J. Biomed. Inform. 2014;48:193-204. [DOI: https://dx.doi.org/10.1016/j.jbi.2014.02.013] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24582925]
19 Arifin WN: Sample size calculator. 2021. https://wnarifin.github.io/
20 Vizel E, Yigla M, Goryachev Y, et al.: Validation of an ambulatory cough detection and counting application using voluntary cough under different conditions. Cough. 2010;6:1-8. [DOI: https://dx.doi.org/10.1186/1745-9974-6-3]
21 Korpáš J, Sadloňová J, Vrabec M: Analysis of the cough sound: an overview. Pulm. Pharmacol. 1996;9:261-268. [DOI: https://dx.doi.org/10.1006/pulp.1996.0034]
22 Jagim AR, Koch-Gallup N, Camic CL, et al.: The accuracy of fitness watches for the measurement of heart rate and energy expenditure during moderate intensity exercise. J. Sports Med. Phys. Fitness. 2021;61:205-211. [DOI: https://dx.doi.org/10.23736/S0022-4707.20.11151-4]
23 Galvosas M, Gabaldón-Figueira JC, Keen EM, et al.: Performance evaluation of the smartphone-based AI cough monitoring app - Hyfe Cough Tracker against solicited respiratory sounds. F1000Research. 2023. [DOI: https://dx.doi.org/10.5281/zenodo.7936608]
Copyright: © 2023 Galvosas M et al. This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Author affiliations
1 Research and Development Department, Hyfe Inc., Wilmington, Delaware, USA
2 Department of Microbiology and Infectious Diseases, Clinica Universidad de Navarra, Pamplona, Spain
3 School of Pharmacy and Nutrition, University of Navarra, Pamplona, Spain
4 Research and Development Department, Hyfe Inc., Wilmington, Delaware, USA; Department of Global Health, University of Washington, Seattle, Washington, USA
5 Research and Development Department, Hyfe Inc., Wilmington, Delaware, USA; Department of Mathematics and Computer Science, Sewanee: The University of the South, Sewanee, Tennessee, USA
6 Immunopathology Axis, Research Center, University of Montreal Hospital Center, Montreal, Canada; Department of Microbiology, Infectious Diseases and Immunology, University of Montreal, Montreal, Canada
7 Department of Microbiology and Infectious Diseases, Clinica Universidad de Navarra, Pamplona, Spain; ISGlobal, Hospital Clinic, University of Barcelona, Barcelona, Spain; Centro de Investigación Biomédica en Red de Enfermedades Infecciosas, Madrid, Spain