1. Introduction
Patient safety is the foundation of healthcare services [1], and reporting systems are important tools through which one can describe medical errors and the circumstances under which they occur [2]. It is estimated that preventable medical errors result in total losses of between US$17 billion and US$29 billion per year [3]. Baker et al. have shown that the adverse event rate for inpatients ranges from 2.9% to 16.6% [4]. Furthermore, in Taiwan, from 2005 to 2016, falls were consistently one of the top three types of accidents occurring in medical organizations [5]. In Taiwan, inpatients who fall incur an additional NT$23,339.2 in costs and stay 6.4 days longer in hospital than those who do not fall [6]. In addition, Alexander et al. found that the cost of hospitalization for patients with fall-related injuries is 5.3% higher than for those without falls [7]. Thus, it is clear that injury due to a fall among hospitalized patients increases hospital spending on both medical resources and nursing care. Therefore, in order to reduce medical expenses and improve the quality of medical services, it is very important to actively guard against the occurrence of falls among patients.
In clinical practice, the Morse Fall Scale (MFS) is one of the most commonly used measures of fall risk. It consists of six fall-related items and is used to assess the risk of fall events; the factors are history of falling, secondary diagnosis, ambulatory aid, IV/heparin lock, gait/transferring, and mental status. Previous studies of the MFS have reported a sensitivity of 73.2% and a specificity of 75.1% [8]. In addition to the MFS, other rating scales have been developed for other populations, such as the Hendrich Falls Assessment Tool (HFAT) and the St. Thomas’s Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY). The sensitivity and specificity of the HFAT are 74.9% and 73.9%, respectively [9], while those of the STRATIFY are 93% and 88%, respectively [10, 11]. Researchers in Taiwan have also developed a fall assessment tool for risk evaluation with a sensitivity and specificity of 74.07% and 86.93%, respectively [12]. These tools are designed for particular regions or a specific target hospital, and while they perform well there, they may not be directly applicable to other hospitals. For example, Chow et al. have proposed that the MFS should be adjusted when used to assess different populations; in their view, the IV/heparin lock criterion is not applicable in a Chinese hospital [13].
Clinical decision support refers to the provision of clinical knowledge and patient-related information to clinicians or patients, filtered and/or delivered at appropriate time points, in order to enhance patient care [14]. A clinical decision support tool is one whose aim is to facilitate clinical decision making by healthcare professionals. As an important branch of artificial intelligence, machine learning aims to provide computational methodologies that accumulate knowledge, adapt to change, and keep that knowledge up to date. The core task of machine learning is to infer from samples, which means building a model that maps inputs to outputs [15]. The relationships learned in this way can then be stored in the machine as new knowledge for use in future predictions, a process that mimics human learning behavior. Supervised learning is the most common type of machine learning [16]. In supervised learning, training data (input) and results (output) already exist, and an algorithm is used to construct a function that maps the relationship between input and output. When the output is a discrete value, the task is a classification problem and can be solved with a classification method; when the output is a real value, it is a regression problem and can be handled with a regression method. Supervised learning is often used to estimate risk [17].
In computer science, an algorithm refers to a precisely defined series of steps that produces an answer. Several popular algorithms are used in medicine. The Bayesian network (BN) converges faster when training data are limited, although it has higher bias; nevertheless, it is a widely used probabilistic classifier [18]. The artificial neural network (ANN) is considered powerful when modeling nonlinear relationships and is increasingly applied to medical diagnosis and prediction [19]. Logistic regression (LR) has been widely used for many years in medical studies and is a statistical method that gives the probability of a dichotomous outcome from independent variables [20]. This study is aimed at automatically generating the best-fitting predictive model for hospital risk assessment by combining existing assessment programs with adverse event notifications; the model then provides a risk prediction of a fall event when a patient is admitted, allowing appropriate action to be taken.
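For illustration only, the following sketch shows how classifiers of these three families can be fitted and compared by AUC. It is not the authors' implementation (the study used SPSS Modeler 18.0); the data are synthetic, and the scikit-learn estimators (with BernoulliNB standing in as a simple Bayesian classifier) and their hyperparameters are assumptions.

```python
# Minimal sketch (not the authors' SPSS Modeler workflow): fitting three classifier
# families discussed above on synthetic binary admission-assessment data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB        # simple Bayesian classifier (stand-in for a full BN)
from sklearn.neural_network import MLPClassifier   # small feed-forward ANN
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(400, 6))                           # six binary admission features
logit = X @ np.array([1.5, -1.0, 0.8, 0.6, 0.4, -0.3]) - 0.5    # arbitrary "true" effects
y = (rng.random(400) < 1 / (1 + np.exp(-logit))).astype(int)    # 1 = fall, 0 = other accident

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "BN (naive Bayes)": BernoulliNB(),
    "ANN": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```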
1.1. Contribution of the Research Work
(A) A novel hybrid machine learning analysis model is proposed for the early detection of fall accidents. The proposed architecture was trained with patient safety reporting system (PSRS) datasets, and its performance metrics were calculated and compared with those of other existing models
(B) The proposed architecture introduces machine learning and filter methods for feature selection. In addition, the pathway used to choose the algorithm increases the accuracy rate
(C) A final optimization pathway is proposed for training on the features obtained under different parameters. The machine learning feed-forward layers are designed based on the principle of Pearson correlation
The rest of the paper is organized as follows: Section 2 (Method) presents the related work and the proposed pathway in detail. The working mechanisms of LR, BN, ANN, CHAID, C5.0, QUEST, and CART are presented in Section 3 (Results). The dataset descriptions, experiments, results, findings, and analysis are presented in Section 4 (Discussion). Finally, Section 5 (Conclusion) concludes the paper and outlines future enhancements.
2. Method
2.1. Establishing a Decision Support Model
The analytical process used during this study was based on patient admission assessments that had been collected in an electronic health record database together with the fall events recorded in an accident notification database. The entire process was divided into the following steps: data extraction and preparation, feature selection, sample training and testing by machine learning, creation of a computer aid model, and application validation of the prediction model. The framework is shown in Figure 1.
[figure(s) omitted; refer to PDF]
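As a rough sketch of how the steps in Figure 1 fit together (illustrative only; the study implemented them in SPSS Modeler rather than Python, and the function and variable names below are hypothetical):

```python
# Schematic sketch of the workflow in Figure 1; helper names are hypothetical.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def run_fall_prediction_workflow(X, y, X_new, y_new):
    # 1. Data extraction and preparation are assumed done upstream (X, y arrays).
    # 2. Feature selection: keep columns passing the filter criteria (see Section 2.3);
    #    here all columns are kept for brevity.
    # 3. Sample training and testing: 75% / 25% split, as in Section 2.4.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    # 4. Create the computer-aided model on the training partition.
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # 5. Validate on the held-out test partition and on later "new patient" data.
    test_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    new_auc = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
    return model, test_auc, new_auc
```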
2.2. Data Collection
The data were collected in a standardized form from the hospital after appropriate institutional review board permission had been obtained from Cheng Hsin General Hospital (CHGH-IRB no. (003)101-29). Informed consent was obtained from all subjects. All methods were carried out in accordance with relevant guidelines and regulations, and the study was carried out in accordance with the Declaration of Helsinki. This study analyzed the fall events collected by a patient safety reporting system. The fall risk prediction model was constructed using hospital admission assessment data to predict whether an accident might occur involving a hospitalized patient and, if an accident does happen, the probability of the accident being a fall event. The data used in this study originated from the admission assessment and reporting system data collected by a regional teaching hospital in Northern Taiwan. The data were collected between December 2012 and December 2013 (before data extraction, the sample size was 748 patients). For the machine learning part of this study, data extraction covered May to December 2013 (after data extraction, the sample size was 405 patients).
2.3. Feature Selection
A total of 22 input variables believed to be possibly associated with fall events were collected through the notification system (Supplementary Table 1). These variables were age, gender, single status or having a partner, number of diagnoses, educational level, level of activity, drugs used, use of an ambulatory aid, use of restraints, marital status, religion, history of falls within the last month, various health factors, various environmental factors, various therapy factors, the shift on which the event occurred, the place where the event occurred, the level of consciousness, the degree of severity of the fall, the actions taken after the fall, whether there was legal conflict over the fall, and whether there were complications due to the fall. In order to provide accident risk prediction for patients when they undergo admission assessment, the variables collected only after the accident, namely, the last four, were excluded from the dataset.
Feature selection was carried out using the following two criteria. First, the feature importance value of a variable had to be greater than 0.95. Second, features for which a single category accounted for more than 95% of all samples were excluded, because such highly skewed features might bias the machine learning.
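A minimal sketch of these two filter criteria is given below. It assumes the feature-importance value corresponds to 1 minus the p-value of a Pearson chi-square test of the feature-outcome association, which reproduces values of the kind shown in Table 1 but is not stated explicitly in the paper; the helper name and thresholds are illustrative.

```python
# Hedged sketch of the two feature-selection criteria described above.
import pandas as pd
from scipy.stats import chi2_contingency

def select_features(df: pd.DataFrame, outcome: str,
                    min_importance: float = 0.95, max_share: float = 0.95) -> list:
    selected = []
    for col in df.columns.drop(outcome):
        # Criterion 2: drop features dominated by a single category (>95% of samples).
        if df[col].value_counts(normalize=True).max() > max_share:
            continue
        # Criterion 1: keep features strongly associated with the outcome,
        # using importance = 1 - p from a Pearson chi-square test (assumed definition).
        table = pd.crosstab(df[col], df[outcome])
        chi2, p, dof, _ = chi2_contingency(table)
        if 1.0 - p > min_importance:
            selected.append(col)
    return selected
```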
2.4. Model Construction and Performance Measure
Data from May to December 2012 were selected in order to establish the machine learning model. Each individual’s data were then randomly assigned to one of two mutually exclusive datasets: 75% of the samples formed the first group and were used to construct the training model, while the remaining 25% formed the second group and were used for model testing. The best model was selected based on the various performance measures. After selection, the best model was applied to verify the accuracy of fall event prediction using all patients involved in accidents from January to February 2013.
Models created by various high-performance algorithms, namely, LR, BN, ANN, and CHAID, were used as the machine learning algorithms for analysis of the 25% test partition. The accuracy, sensitivity, specificity, PPV, and NPV were calculated for each algorithm. The ROC curve was also plotted, and the AUC was then calculated to compare the performance of each of the models [21]. Supplementary Table 2 summarizes these performance measures.
Each prediction threshold corresponds to a point on the ROC curve with coordinates (1 − specificity, sensitivity), so the curve summarizes the trade-off between sensitivity and specificity across all thresholds.
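These measures can be written directly in terms of a 2×2 confusion matrix; the short sketch below (ours, not the study's code) makes the definitions and the ROC coordinate explicit.

```python
# Sketch of the performance measures used in this study, computed from a 2x2
# confusion matrix (tp, fp, fn, tn). Each classification threshold contributes
# one ROC point at (1 - specificity, sensitivity).
def confusion_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)      # recall / true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    ppv = tp / (tp + fp)              # positive predictive value (precision)
    npv = tn / (tn + fn)              # negative predictive value
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": ppv,
        "NPV": npv,
        "ROC point": (1 - specificity, sensitivity),
    }
```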
3. Results
3.1. Integrating Feature Selection into Machine Learning
Table 1 presents the descriptive statistics and exclusion rules after integrating feature selection in machine learning. A total of 405 accidents were included in this analysis, and these consisted of 168 fall events and 237 other accidents. There were 12 variables and 26 categories of features. Among “all patients with accidents,” 72.1% had an independent activity state, 62.0% were without restraints, 84.4% did not require an ambulatory aid or mobility aids, 68.9% did not take drugs affecting balance, 96.5% were without a fall history within one month, 64.4% were over 65 years of age, 50.1% had a partner when admitted, 53.6% were male, 99.5% had only one diagnosis, 75.3% were married, 87.2% had a university degree or had attended senior/vocational high school, and 65.2% indicated they had a religion.
Table 1
Descriptive statistics of variables for feature selection.
Variable | Category | Amount (n = 405) | Other accidents (n = 237) | Fall accidents (n = 168) | χ² | Feature importance
Activity ability | Independent | 292 (72.1%) | 237 (100%) | 55 (32.7%) | 221.100 | 1.000a
 | Assistance | 109 (26.9%) | 0 (0%) | 109 (64.9%) | |
 | Rely | 4 (1.0%) | 0 (0%) | 4 (2.4%) | |
Band/restraints | Have | 157 (38.0%) | 154 (65.0%) | 3 (1.8%) | 165.387 | 1.000a
 | None | 248 (62.0%) | 83 (35.0%) | 165 (98.2%) | |
Ambulatory aid | None | 342 (84.4%) | 237 (100%) | 105 (62.5%) | 105.247 | 1.000a
 | Have | 63 (15.6%) | 0 (0%) | 63 (37.5%) | |
Drug used | No | 279 (68.9%) | 200 (84.4%) | 79 (47.0%) | 64.040 | 1.000a
 | Yes | 126 (31.1%) | 37 (15.6%) | 89 (53.0%) | |
Fall history (within 1 month) | No | 391 (96.5%) | 237 (100%) | 154 (91.7%) | 20.457 | 1.000
 | Yes | 14 (3.5%) | 0 (0%) | 14 (8.3%) | |
Age | <65 | 144 (35.6%) | 67 (28.3%) | 77 (45.8%) | 13.235 | 1.000a
 | ≥65 | 261 (64.4%) | 170 (71.7%) | 91 (54.2%) | |
Partner (alone) | Yes | 203 (50.1%) | 130 (54.9%) | 73 (73.1%) | 5.111 | 0.976a
 | No | 202 (49.9%) | 107 (45.1%) | 95 (26.9%) | |
Gender | Male | 217 (53.6%) | 121 (51.1%) | 96 (57.1%) | 1.465 | 0.774
 | Female | 188 (46.4%) | 116 (48.9%) | 72 (42.9%) | |
Diagnosis numbers | 1 | 403 (99.5%) | 235 (99.2%) | 168 (100%) | 1.425 | 0.767
 | ≥2 | 2 (0.5%) | 2 (0.8%) | 0 (0%) | |
Marital status | Single | 20 (4.9%) | 10 (4.2%) | 10 (6.0%) | 0.847 | 0.345
 | Married | 305 (75.3%) | 178 (75.1%) | 127 (75.3%) | |
 | Others | 80 (19.8%) | 49 (20.7%) | 31 (19.8%) | |
Educational level | <senior | 353 (87.2%) | 208 (87.8%) | 145 (86.3%) | 0.186 | 0.334
 | ≥senior | 52 (12.8%) | 29 (12.2%) | 23 (13.7%) | |
Religion | None | 141 (34.8%) | 82 (34.6%) | 59 (35.1%) | 0.012 | 0.086
 | Have | 264 (65.2%) | 155 (65.4%) | 109 (64.9%) | |
aVariables included in the machine learning analysis.
Two criteria were used when carrying out the feature selection. The first criterion was that the value of feature importance had to be greater than 0.95. Based on this standard, we excluded gender, diagnosis numbers, marital status, educational level, and religion (0.774, 0.767, 0.345, 0.334, and 0.086, respectively). The second criterion was that we ruled out features for which a single category accounted for more than 95% of all samples; these were fall history and diagnosis numbers. The reason for the latter exclusion was that such highly skewed features might cause bias during machine learning. Eventually, six variables were left: activity ability, restraints, ambulatory aid, drug use, age, and having a partner. All of these fulfilled both feature selection criteria and thus were used as variables for the machine learning.
3.2. Establishing the Machine Learning Model
Data from May to December 2012 were selected as the samples used to establish each machine learning model (405 samples). The data were randomly assigned into two mutually exclusive datasets: 75% (290 samples) were used to construct the training model, and the remaining 25% (115 samples) were used for model testing. The best model was selected based on performance, with SPSS Modeler 18.0 used for the analysis. The training data were first processed by a universal analysis, and the most suitable algorithms were then chosen based on the accuracy of the various models. According to the data analysis, the best models were supervised learning ones; therefore, the following seven machine learning methods were tested: LR, BN, ANN, the decision tree chi-squared automatic interaction detector (CHAID), the decision tree C5.0, the decision tree quick unbiased efficient statistical tree (QUEST), and the decision tree classification and regression tree (CART); the results are shown in Table 2. Our results showed that the area under the ROC curve (AUC) and the accuracies of LR, BN, and ANN were better than the values for C5.0, QUEST, and CART.
Table 2
The suitable models’ AUC and accuracy performance (training partition).
Metric | LR | BN | ANN | CHAID | C5.0 | QUEST | CART
AUC | 0.984 | 0.984 | 0.983 | 0.975 | 0.973 | 0.966 | 0.966 |
Accuracy (%) | 94.138 | 94.138 | 94.138 | 93.448 | 93.448 | 92.069 | 92.069 |
3.3. Machine Learning Algorithms
LR, BN, ANN, and CHAID were selected as the machine learning algorithms to be used on the 25% testing partition. The positive predictive value (PPV), negative predictive value (NPV), sensitivity, specificity, receiver operating characteristic (ROC) curve, and AUC were calculated for each of the four algorithms. It should be noted that PPV and sensitivity sometimes conflict with each other. Therefore, we also used the F1 score, the harmonic mean of PPV (precision) and sensitivity (recall), as a single balanced measure; the results are shown in Table 3.
Table 3
The performance of LR, BN, ANN, and CHAID on the testing partition and of the FPM in predicting fall events for new patients.
Partition | Algorithm | Predicted | Actual falls | Actual no falls | PPV (precision) | NPV | SEN (recall) | SPE | AUC | F1
Testing partition | LR | Falls | 43 | 6 | 0.878 | 0.985 | 0.977 | 0.915 | 0.981 | 0.925
 | | None | 1 | 65 | | | | | |
 | BN | Falls | 41 | 8 | 0.837 | 0.985 | 0.976 | 0.890 | 0.940 | 0.901
 | | None | 1 | 65 | | | | | |
 | ANN | Falls | 43 | 6 | 0.878 | 0.985 | 0.977 | 0.915 | 0.979 | 0.925
 | | None | 1 | 65 | | | | | |
 | CHAID | Falls | 44 | 5 | 0.898 | 0.955 | 0.936 | 0.926 | 0.971 | 0.917
 | | None | 3 | 63 | | | | | |
New patient | FPM (LR) | Falls | 27 | 8 | 0.771 | 1.000 | 1.000 | 0.889 | 0.944 | 0.871
 | | None | 0 | 64 | | | | | |
LR: logistic regression; BN: Bayesian network; ANN: artificial neural network; CHAID: chi-squared automatic interaction detector; FPM: fall prediction model; PPV: positive predictive value; NPV: negative predictive value; SEN: sensitivity; SPE: specificity; AUC: area under the ROC curve; F1: F1 score (harmonic mean of PPV and sensitivity).
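As a check on Table 3, the LR row of the testing partition can be recomputed from its four counts; the F1 value is the harmonic mean of PPV and sensitivity. The snippet below is purely illustrative arithmetic, not part of the study's pipeline.

```python
# Recomputing the LR row of Table 3 from its confusion-matrix counts
# (43 predicted falls that fell, 6 predicted falls that did not,
#  1 missed fall, 65 correctly predicted non-falls).
tp, fp, fn, tn = 43, 6, 1, 65

ppv = tp / (tp + fp)              # 0.878
npv = tn / (tn + fn)              # 0.985
sen = tp / (tp + fn)              # 0.977
spe = tn / (tn + fp)              # 0.915
f1 = 2 * ppv * sen / (ppv + sen)  # 0.925

print(f"PPV={ppv:.3f} NPV={npv:.3f} SEN={sen:.3f} SPE={spe:.3f} F1={f1:.3f}")
```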
[figure(s) omitted; refer to PDF]
Machine learning models can involve a large number of layers, units, and connections, and it is well known that overfitting can be a serious problem [22]. Figure 3 integrates the four ROC curves of the training partition, the four ROC curves of the testing partition, and the ROC curve of the fall prediction model. Our results showed that overfitting did not occur in the validation partition.
[figure(s) omitted; refer to PDF]
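A sketch of how such an overfitting check can be reproduced outside SPSS Modeler: overlay the ROC curves of the training and testing partitions of a fitted model and compare their AUCs; closely matching curves, as in Figure 3, argue against overfitting. The function below is an illustrative assumption using scikit-learn and matplotlib, not the authors' code.

```python
# Overfitting check in the spirit of Figure 3: compare train vs. test ROC curves.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

def plot_train_test_roc(model, X_train, y_train, X_test, y_test):
    for name, X, y in [("training", X_train, y_train), ("testing", X_test, y_test)]:
        prob = model.predict_proba(X)[:, 1]
        fpr, tpr, _ = roc_curve(y, prob)
        plt.plot(fpr, tpr, label=f"{name} partition, AUC = {roc_auc_score(y, prob):.3f}")
    plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance line
    plt.xlabel("1 - specificity")
    plt.ylabel("Sensitivity")
    plt.legend()
    plt.show()
```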
4. Discussion
Establishing a patient safety event management system is one way of monitoring medical quality. Machine learning provides computational methods that accumulate knowledge and keep it updated within intelligent systems. Using reasoning based on the available samples, the learned relationships can be stored as new knowledge and are available for future predictions [15]. The digitization of medical records has become a major trend in present-day medical practice, and electronic health record (EHR) systems are able to record a basic assessment of a patient at the time of admission. In this study, patient safety data were used to create a fall prediction model (FPM) that can be applied when a patient is admitted.
The Morse Fall Scale is currently used in numerous medical institutions throughout the world [13, 23, 24]. Morse et al. showed that the sensitivity of the MFS was 78% and the positive predictive value was 10.3%; in addition, it had a specificity of 83% and a negative predictive value of 99.3% [8]. Baek et al. examined the validity of the MFS by analyzing electronic medical records regarding fall risk in a Korean population; they found a sensitivity of 0.72, a specificity of 0.91, a positive predictive value of 0.63, and a negative predictive value of 0.94. The HFAT has a sensitivity of 74.9% and a specificity of 73.9% [9], while the STRATIFY has a sensitivity of 93% and a specificity of 88% [10]. In comparison, the FPM in this study had a sensitivity of 100%, a specificity of 88.9%, a PPV of 77.1%, and an NPV of 100%, demonstrating good predictive performance.
Compared to using a traditional predictive model, our approach has two major advantages. First, the features of the prediction model are dynamic. Rather than being fixed, these features are dependent on the variables chosen, the time used for sampling, and the sample size at the time the model is being constructed. Second, the method of model construction is not limited to a specific hospital or population, and any hospital can use this method to construct an appropriate prediction model. Thus, the content of the assessment will change in response to the regional characteristics of the hospital and the characteristics of people who are receiving medical treatment.
Machine learning also allows novel information to be identified. The most interesting finding in this study was that having a partner or not is a significant factor in relation to falling while hospitalized; this factor is not included in any of the four reference fall assessment models described above. Also of interest is the fact that fall history is not included in our prediction model, although it is believed to be important when assessing the risk of falling; this may be a sample size problem, and increasing the training sample might bring this factor into the model.
5. Conclusion
In this paper, we referred to the methods of previously published articles [25, 26] (Supplementary Table 3) and created a pathway that applies machine learning methods to predict the occurrence of incidents. Our method improves prediction accuracy while using a limited and flexible set of parameters (Supplementary Table 4). An invention patent [27] and the associated system have been in use since 2012 at one of Taiwan's medical centers. To continuously improve the research methods, dimensionality reduction techniques [28] will be used in future work.
Authors’ Contributions
WRH guided the study conceptualization, its design, the data collection, and the analysis and authored the article. RCT and WCC participated in the conceptualization and the data analysis and authored the article.
References
[1] B. Ulrich, T. Kear, "Patient safety and patient safety culture: foundations of excellent health care delivery," Nephrology Nursing Journal, vol. 41 no. 5, pp. 447-456, 2014.
[2] M. Chiang, "Promoting patient safety: creating a workable reporting system," Yale Journal on Regulation, vol. 18, 2001.
[3] M. S. Donaldson, J. M. Corrigan, L. T. Kohn, To err is human: building a safer health system, vol. 6, 2000.
[4] G. R. Baker, P. G. Norton, V. Flintoft, R. Blais, A. Brown, J. Cox, E. Etchells, W. A. Ghali, P. Hébert, S. R. Majumdar, M. O'Beirne, L. Palacios-Derflingher, R. J. Reid, S. Sheps, R. Tamblyn, "The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada," CMAJ, vol. 170 no. 11, pp. 1678-1686, DOI: 10.1503/cmaj.1040498, 2004.
[5] "Taiwan Patient-safety Reporting system," Annual Report, 2016. https://www.patientsafety.mohw.gov.tw/files/file_pool/1/0m103589120365833052/2016%e5%b9%b4tpr%e5%b9%b4%e5%a0%b1_online.pdf
[6] Y. C. Chen, L. H. Lin, S. F. Chien, "Correlation between risk factors and resource utilization in fall-related injuries among hospitalized patients," Tzu Chi Nursing Journal, vol. 1, pp. 66-77, 2002.
[7] B. H. Alexander, F. P. Rivara, M. E. Wolf, "The cost and frequency of hospitalization for fall-related injuries in older adults," American Journal of Public Health, vol. 82 no. 7, pp. 1020-1023, DOI: 10.2105/AJPH.82.7.1020, 1992.
[8] J. M. Morse, R. M. Morse, S. J. Tylko, "Development of a scale to identify the fall-prone patient," Canadian Journal on Aging/La Revue canadienne du vieillissement, vol. 8 no. 4, article S0714980800008576, pp. 366-377, DOI: 10.1017/S0714980800008576, 1989.
[9] A. L. Hendrich, P. S. Bender, A. Nyhuis, "Validation of the Hendrich II Fall Risk Model: a large concurrent case/control study of hospitalized patients," Applied Nursing Research, vol. 16 no. 1, article S0897189702109025,DOI: 10.1053/apnr.2003.016009, 2003.
[10] D. Oliver, M. Britton, P. Seed, F. C. Martin, A. H. Hopper, "Development and evaluation of evidence based risk assessment tool (STRATIFY) to predict which elderly inpatients will fall: case-control and cohort studies," BMJ, vol. 315 no. 7115, pp. 1049-1053, DOI: 10.1136/bmj.315.7115.1049, 1997.
[11] D. Oliver, A. Papaioannou, L. Giangregorio, L. Thabane, K. Reizgys, G. Foster, "A systematic review and meta-analysis of studies using the STRATIFY tool for prediction of falls in hospital patients: how well does it work?," Age and Ageing, vol. 37 no. 6, pp. 621-627, DOI: 10.1093/ageing/afn203, 2008.
[12] H. C. Chung, S. C. Chang, J. Y. Lyu, T. C. Hsieh, "Using ROC curve analysis to assess the accuracy of short form inpatient fall risk assessment tool," Tzu Chi Nursing Journal, vol. 14, pp. 62-73, 2015.
[13] S. K. Chow, C. K. Y. Lai, T. K. S. Wong, L. K. P. Suen, S. K. F. Kong, C. K. Chan, I. Y. C. Wong, "Evaluation of the Morse Fall Scale: applicability in Chinese hospital populations," International Journal of Nursing Studies, vol. 44 no. 4, article S0020748905002610, pp. 556-565, DOI: 10.1016/j.ijnurstu.2005.12.003, 2007.
[14] J. A. Osheroff, E. A. Pifer, D. F. Sittig, R. A. Jenders, J. M. Teich, Clinical Decision Support Implementers’ Workbook, 2004.
[15] E. Alpaydin, Introduction to Machine Learning, 2010.
[16] I. Kononenko, "Machine learning for medical diagnosis: history, state of the art and perspective," Artificial Intelligence in Medicine, vol. 23 no. 1, article S093336570100077X, pp. 89-109, DOI: 10.1016/S0933-3657(01)00077-X, 2001.
[17] R. C. Deo, "Machine learning in medicine," Circulation, vol. 132 no. 20, pp. 1920-1930, DOI: 10.1161/CIRCULATIONAHA.115.001593, 2015.
[18] A. Y. Ng, M. I. Jordan, "On discriminative vs. generative classifiers: a comparison of logistic regression and naive bayes," Advances in neural information processing systems, vol. 14, pp. 841-848, 2002.
[19] C. C. Lin, C. L. Shih, H. H. Liao, C. H. Wung, "Learning from Taiwan patient-safety reporting system," International Journal of Medical Informatics, vol. 81 no. 12, pp. 834-841, DOI: 10.1016/j.ijmedinf.2012.08.007, 2012.
[20] V. C. Chang, M. T. Do, "Risk factors for falls among seniors: implications of gender," American Journal of Epidemiology, vol. 181 no. 7, pp. 521-531, DOI: 10.1093/aje/kwu268, 2015.
[21] J. A. Hanley, B. J. McNeil, "The meaning and use of the area under a receiver operating characteristic (ROC) curve," Radiology, vol. 143 no. 1, pp. 29-36, DOI: 10.1148/radiology.143.1.7063747, 1982.
[22] D. M. Hawkins, "The problem of overfitting," Journal of Chemical Information and Computer Sciences, vol. 44 no. 1,DOI: 10.1021/ci0342472, 2004.
[23] N. Nassar, N. Helou, C. Madi, "Predicting falls using two instruments (the Hendrich Fall Risk Model and the Morse Fall Scale) in an acute care setting in Lebanon," Journal of Clinical Nursing, vol. 23 no. 11-12, pp. 1620-1629, DOI: 10.1111/jocn.12278, 2014.
[24] K. S. Kim, J. A. Kim, Y. K. Choi, Y. J. Kim, M. H. Park, H. Y. Kim, M. S. Song, "A comparative study on the validity of fall risk assessment scales in Korean hospitals," Asian Nursing Research, vol. 5 no. 1, pp. 28-37, DOI: 10.1016/S1976-1317(11)60011-X, 2011.
[25] R. C. Tseng, W. R. Huang, S. F. Lin, P. C. Wu, H. S. Hsu, Y. C. Wang, "HBP1 promoter methylation augments the oncogenic β -catenin to correlate with prognosis in NSCLC," Journal of Cellular and Molecular Medicine, vol. 18 no. 9, pp. 1752-1761, DOI: 10.1111/jcmm.12318, 2014.
[26] W. R. Huang, C. J. Hsieh, K. C. Chang, Y. J. Kiang, C. C. Yuan, W. C. Chu, "Network characteristics and patent value—evidence from the light-emitting diode industry," PLoS One., vol. 12 no. 8, article e0181988,DOI: 10.1371/journal.pone.0181988, 2017.
[27] W. R. Huang, Republic of China Patent No. I704509B, 2020.
[28] K. Ramana, C. C. Lin, C. L. Shih, H. H. Liao, C. H. Wung, "Early prediction of lung cancers using deep saliency capsule and pre-trained deep learning frameworks," Frontiers in Oncology, vol. 12, DOI: 10.3389/fonc.2022.886739, 2022.
Copyright © 2022 Way-Ren Huang et al. This work is licensed under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
Abstract
Patient safety has always been an important issue when improving the quality of medical care. The first step in preventing accidents is to screen for the high-risk groups that are prone to accidents, and a patient safety reporting system is one of the best tools for such screening. We used machine learning techniques to analyze fall events and establish a risk prediction model; the results are then fed back to medical organizations with the aim of raising their quality of medical care. A Bayesian network, an artificial neural network, logistic regression, and decision tree-based chi-square automatic interaction detection were applied to analyze a database covering a 14-month period from November 2012 to December 2013, and the area under the ROC curve (AUC) values were 0.940, 0.979, 0.981, and 0.971, respectively. Next, data from January to February 2013 were used to verify the model showing the highest discrimination ability, namely, logistic regression; the AUC and F1 score of this verification were 0.944 and 0.871, respectively.