About the Authors:
Henry Lenzi
Contributed equally to this work with: Henry Lenzi, Ângela Jornada Ben
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Writing – original draft, Writing – review & editing
Affiliation: Serviço de Saúde Comunitária–Grupo Hospitalar Conceição, Porto Alegre, Brazil
Ângela Jornada Ben
Contributed equally to this work with: Henry Lenzi, Ângela Jornada Ben
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Supervision, Validation, Writing – original draft, Writing – review & editing
* E-mail: [email protected]
Affiliation: Department of Health Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
ORCID: http://orcid.org/0000-0003-4793-9026
Airton Tetelbom Stein
Roles: Conceptualization, Methodology, Supervision, Writing – original draft, Writing – review & editing
Affiliations: Serviço de Saúde Comunitária–Grupo Hospitalar Conceição, Porto Alegre, Brazil; Departamento de Saúde Coletiva, Universidade Federal de Ciências da Saúde de Porto Alegre, Porto Alegre, Brazil
Introduction
Patient no-show is defined as a scheduled appointment that the patient neither attended nor canceled in time for it to be reassigned to another patient [1,2]. It implies ineffective use of human and logistic resources in a scenario where the demand for health care is greater than the supply. Beyond that, patient non-attendance can compromise two core principles of primary care: accessibility and continuity of care [3]. Whenever a patient misses an appointment, two patients fail to access health care: the no-show patient and the patient who could not book an appointment. Patient non-attendance also leads to discontinuity of care, which is associated with worse health outcomes, such as increased hospitalization rates due to exacerbations of chronic conditions [4–6]. There are also additional costs, e.g., time spent on mitigation strategies and health care staff idle time [7].
The prevalence of no-show varies worldwide and has been shown to be higher in low-income and developing countries [1]. In a literature review, Dantas et al. found the second highest no-show prevalence in South America (27.8%), after the African continent (43.0%) [1]. In Brazil, despite the shortage of data on this issue, studies have reported no-show rates of 48.9% in primary care [8] and 34.4% in a specialized secondary care service [9]. Decreasing no-show rates could yield substantial savings, especially in universal health care systems [10]. For instance, in the National Health Service of the United Kingdom, a reduction in no-show prevalence from 12% to 10.8% would decrease annual public expenses by 10% [10].
Accordingly, factors associated with patient no-show have been investigated to provide insights for targeted interventions. Young age and previous non-attendance have been consistently reported [11–16]. An association has also been found between longer lead time (the time between scheduling and the appointment) and higher no-show rates [1]. Other factors relate to the type and severity of the health problem, sociodemographic conditions, the period of the year of the appointment, and the distance to the service [17]. Since these factors may vary across populations and health care services, a common set of universal determinants is unlikely to be found; each service therefore needs to investigate its local predictors to tailor actions that address the issue. On this basis, no-show predictive models have been developed to optimize the scheduling process and service performance, though mainly in developed-country settings [17–21].
To the best of our knowledge, no published study has developed a no-show predictive model based on data from a Brazilian public health care setting. Therefore, the present study aims to explore the factors associated with no-show in a Brazilian primary care setting and to develop and validate a patient no-show predictive model based on empirical data.
Materials and methods
Study design
A retrospective study was performed based on the scheduled appointments registered in the scheduling system of a public primary care service of the Grupo Hospitalar Conceição between November 1, 2011, and March 31, 2014. Patient record numbers were irreversibly replaced by sequences of random characters (full anonymization) before the analysis. The study was approved by the Ethics Committee of the Grupo Hospitalar Conceição (number 2.349.672), which waived the requirement for informed consent.
Patient no-show predictors
The no-show predictors were chosen based on the literature, on the experience of the primary care service team, and on the availability of data in the scheduling system. The unit of analysis was the scheduled appointment. No-show was defined as non-attendance on the appointment day up to the closing time of the service at 6 pm. For each unit of analysis, the following variables were available in the scheduling system: patient record number; age (years); gender (male/female); self-reported race/ethnicity, registered according to the Ethno-Racial Characteristics of the Brazilian Population [22] and dichotomized as white and non-white; appointment day; date and time of scheduling; date and time of the appointment; appointment shift (morning or afternoon); appointment weekday; appointment month; appointment attendance (attendance/no-show); health professional category (nursing, dentist, general practitioner, pharmacist, nutritionist, psychologist, social worker, and oral health technician); and type of appointment (S1 Table).
For each scheduled appointment, the following metrics were calculated: "lead time" (the time, in days, between scheduling and the appointment); "waiting time" (the difference, in minutes, between the scheduled appointment time and the time the patient was seen); "patient previous attendance" (the number of previous appointments the patient attended); and "patient previous same-day appointment" (the number of previous same-day appointments the patient had). A dichotomous variable, "same-day appointment calculated", was set to 1 for appointments scheduled and held on the same day and 0 otherwise, to verify consistency with the "same-day appointment" category of the "type of appointment" variable (S1 Table).
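To make the derivation of these metrics concrete, the sketch below shows one way they could be computed in R, the language used throughout this study. This is an illustration, not the authors' code: the data frame and column names (appointments, patient_id, appointment_datetime, scheduling_datetime, attended) are assumptions about the schema.

```r
library(dplyr)

appointments <- appointments %>%
  arrange(patient_id, appointment_datetime) %>%
  mutate(
    # Lead time: days between scheduling and the appointment
    lead_time = as.numeric(difftime(appointment_datetime, scheduling_datetime,
                                    units = "days")),
    # 1 if scheduled and held on the same day, 0 otherwise
    same_day_calculated = as.integer(as.Date(appointment_datetime) ==
                                       as.Date(scheduling_datetime))
  ) %>%
  group_by(patient_id) %>%
  mutate(
    # Counts over the patient's *previous* appointments only (hence lag)
    prev_attendance = lag(cumsum(attended), default = 0),
    prev_same_day   = lag(cumsum(same_day_calculated), default = 0)
  ) %>%
  ungroup()
```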
Observations were excluded from the analysis if: 1) the outcome was not registered, or 2) the metrics could not be derived from the available data. All remaining observations were included.
Statistical analysis
Descriptive analysis of the dataset.
First, a descriptive analysis of the variables was performed by the categorical outcome: attendance (0) and no-show (1). Means and standard deviations were calculated for normally distributed variables, and medians and quartiles for non-normally distributed variables. Frequencies and percentages were calculated for categorical variables. The normality of continuous variables was evaluated with the D'Agostino skewness test [23]. All variables were considered as candidate predictors in the model selection process. Analyses were performed in R, version 3.5.2.
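As an illustration, this descriptive step could be reproduced in R roughly as follows. The moments package provides agostino.test for the D'Agostino skewness test; the data frame and column names remain the hypothetical ones introduced above.

```r
library(moments)  # agostino.test(): D'Agostino skewness test

# Normality check for a continuous variable
agostino.test(appointments$age)

# Mean (SD) by outcome for normally distributed variables ...
tapply(appointments$age, appointments$no_show, mean, na.rm = TRUE)
tapply(appointments$age, appointments$no_show, sd,   na.rm = TRUE)

# ... and median (IQR) for non-normally distributed ones
tapply(appointments$lead_time, appointments$no_show, median, na.rm = TRUE)
tapply(appointments$lead_time, appointments$no_show, IQR,    na.rm = TRUE)

# Frequencies and percentages for categorical variables
table(appointments$gender, appointments$no_show)
prop.table(table(appointments$gender, appointments$no_show), margin = 2)
```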
Model development and selection.
Afterwards, the dataset was randomly divided into two subsets using the caret R package [24]: 50% of the dataset was assigned to develop the logistic regression model (training subset), and the remaining 50% was assigned to validate it (validation subset). Subsequently, a naïve logistic regression was performed, and the Akaike Information Criterion (AIC) was used to select the most parsimonious model (best model) through a backward stepwise algorithm implemented in the stepAIC function [25]. In backward stepwise AIC selection, the process starts with a model including all variables of interest; at each step, a variable is excluded if its elimination yields a lower AIC than that of the previous model [25]. The best model is the one with the lowest AIC among all candidate models. Thirteen variables of interest were considered, with their respective dummy variables, resulting in 8,191 possible models. Additionally, a mixed-effect model was developed with patients and health professionals as random intercepts, thereby accounting for the between- and within-patient and -professional variance in the outcome [26]. The glmer function was used to fit the mixed-effect models [27]. To select the best mixed-effect model, a forward and backward stepwise algorithm based on the AIC was performed. The intra-class correlation coefficient (ICC) [26] was calculated for the best mixed-effect model to verify within-cluster dependency, using the sjstats R package [28]. The best naïve model and the best mixed-effect model were then compared, and the model with the smaller AIC was considered the final best model. Variable importance in the final best model was estimated with the permute.varimp R function [29]. In this function, the values of each predictor are randomly permuted to break their association with the response, and the model is re-fit to the dataset containing the permuted values; the fit of the new model is then compared to that of the original model [29]. The variables with the largest AICc difference between the original model and the model with the permuted predictor were considered the most important.
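A minimal sketch of this pipeline is shown below, continuing the hypothetical schema from the earlier sketches. The formulas are illustrative (the paper's thirteen candidate variables are abbreviated here), and best_glm, best_glmm, and the fixed-effect choices are assumptions rather than the authors' actual specification.

```r
library(caret)  # createDataPartition()
library(MASS)   # stepAIC()
library(lme4)   # glmer()

set.seed(2019)
idx   <- createDataPartition(factor(appointments$no_show), p = 0.5, list = FALSE)
train <- appointments[idx, ]
valid <- appointments[-idx, ]

# Naive logistic regression, reduced by backward stepwise AIC selection
full_glm <- glm(no_show ~ age + gender + race + lead_time + waiting_time +
                  prev_attendance + prev_same_day + appointment_type +
                  professional_category + shift + weekday + month,
                data = train, family = binomial)
best_glm <- stepAIC(full_glm, direction = "backward", trace = FALSE)

# Mixed-effect counterpart with random intercepts for patients and professionals
best_glmm <- glmer(no_show ~ age + race + lead_time + prev_attendance +
                     prev_same_day + appointment_type + professional_category +
                     weekday + month +
                     (1 | patient_id) + (1 | professional_id),
                   data = train, family = binomial)

# Whichever model has the lower AIC becomes the final best model
AIC(best_glm)
AIC(best_glmm)

# Within-cluster dependency (the paper used the sjstats package)
sjstats::icc(best_glmm)
```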
Akaike Information Criterion.
The Akaike Information Criterion (AIC) was developed by Hirotugu Akaike to identify which combination of variables best explains an outcome given a universe of potentially explanatory models [30]. The AIC is defined as AIC = −2 log(L(θ | y)) + 2K, where log(L(θ | y)) is the log-likelihood of the model evaluated at its maximum likelihood estimates (a measure of model quality) and K is the number of parameters in the model (a measure of complexity) [30]. The best-fit model is the one that explains the outcome with the smallest possible number of parameters (parsimony). This method is indicated for selecting among models developed from observational data [30,31].
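A quick R check makes the definition concrete: computing the expression by hand on a toy model (the built-in mtcars data, unrelated to this study) matches the value returned by the base stats::AIC function.

```r
# Toy logistic regression on built-in data
fit <- glm(am ~ mpg, data = mtcars, family = binomial)

k          <- length(coef(fit))                  # K: number of parameters
manual_aic <- -2 * as.numeric(logLik(fit)) + 2 * k
all.equal(manual_aic, AIC(fit))                  # TRUE
```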
Model performance and validation.
The area under the ROC curve (AUC) was used to assess the performance of the best model, using the pROC R package [32]. The AUC was calculated on both the training and the validation subsets and compared using the roc.test function [32]. Additionally, the threshold that jointly maximizes sensitivity and specificity in classifying patients as no-show was identified.
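Continuing the hypothetical sketch, the performance assessment with pROC could look like this; the best_glmm object and subset names come from the earlier sketches, not from the authors' code.

```r
library(pROC)

# Predicted no-show probabilities on both subsets
train_prob <- predict(best_glmm, newdata = train, type = "response")
valid_prob <- predict(best_glmm, newdata = valid, type = "response",
                      allow.new.levels = TRUE)

roc_train <- roc(train$no_show, train_prob)
roc_valid <- roc(valid$no_show, valid_prob)

auc(roc_train)
auc(roc_valid)

# Threshold that jointly maximizes sensitivity and specificity (Youden index)
coords(roc_train, "best", ret = c("threshold", "sensitivity", "specificity"))

# Test whether the two AUCs differ
roc.test(roc_train, roc_valid)
```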
Sensitivity analysis.
A sensitivity analysis was conducted with a model developed on 80% of the dataset (p80) and validated on the remaining 20%, to explore whether this would improve on the final best model's performance. The selection of the final best p80 model followed the same methodology used for the p50 model. The AUCs of the two best models were compared using the roc.test function [32].
Results
Descriptive analysis of the dataset
Of the 57,586 appointments scheduled in the period, 70.7% (n = 40,740) fulfilled the inclusion criteria, comprising 5,637 patients. The prevalence of no-show was 13.0% (n = 5,282). The mean age of the sample was 41 years (SD 23.2). Thirty percent of the appointments were scheduled by male patients (n = 12,219) and 82.1% (n = 33,442) by patients self-reported as white. Thirty-six percent of the appointments were delivered as same-day appointments (n = 14,653). The mean age was 41.2 years (SD 23.3) in the attendance group and 39.8 years (SD 22.0) in the no-show group. The median number of previous attendances was smaller in the no-show group (4.0, IQR 7.0) than in the attendance group (5.0, IQR 8.0), whereas the median lead time was higher in the no-show group (14.0 days, IQR 17.0) than in the attendance group (2.2 days, IQR 14.1) (Table 1).
Table 1. Descriptive analysis of the variables by the outcome.
https://doi.org/10.1371/journal.pone.0214869.t001
Model development and selection
The backward stepwise algorithm compared all possible models based on 20,368 scheduled appointments (50% of the dataset); two observations were excluded due to missingness in the waiting time variable. The stepwise procedure fitted forty models. The best naïve model had an AIC of 12,974. The best mixed-effect model had an AIC of 12,763, which was smaller than that of the naïve model, and it was therefore considered the final best model (p50). Table 2 presents the final best model, i.e., the combination of variables that best estimates the probability of no-show given the universe of potentially explanatory models. The most important variables in the p50 model were the type of appointment (AICc difference = 804), previous attendance (AICc difference = 281), previous same-day appointment (AICc difference = 114), and same-day appointment (AICc difference = 110). The variance within patients was 0.185 (SD 0.430) and within health professionals 0.020 (SD 0.143). The intra-class correlation coefficient was 0.05 for patients and 0.003 for health professionals.
Table 2. Results of mixed-effect logistic regression of the final best model–p50.
https://doi.org/10.1371/journal.pone.0214869.t002
Model performance and validation
Table 3 presents the comparison between the development and validation data. The AUC of the p50 model in the training subset was 84.6% (95% CI 83.9–85.4). The threshold of 0.193 yielded the maximum sensitivity of 78.1% and specificity of 77.0% (Fig 1). When the model was validated on empirical data, the AUC was slightly lower (80.9%, 95% CI 80.1–81.7) than in the training subset (Fig 1); this difference was statistically significant (p<0.001). At validation, the threshold of 0.140 yielded the maximum sensitivity of 64.9% and specificity of 83.4%. The model's high specificity is preferable over high sensitivity, as it avoids overloading staff with false-positive no-show predictions in the case of overbooking.
[Figure omitted. See PDF.]
Fig 1. Performance of the patient no-show predictive model developed on 50% (p50) of the dataset.
(A) shows the AUC (95% confidence interval) of the p50 model tested on the same subset from which it was developed (training subset). (B) shows the AUC (95% confidence interval) of the p50 model tested on the remaining 50% of the dataset (validation subset). The point on each curve is the threshold that maximizes the sensitivity and specificity of the model; the sensitivity and specificity are given in parentheses. AIC: Akaike Information Criterion. AUC: area under the Receiver Operating Characteristic curve.
https://doi.org/10.1371/journal.pone.0214869.g001
Table 3. Comparison between development and validation data.
https://doi.org/10.1371/journal.pone.0214869.t003
Sensitivity analysis
The p80 model included all variables of the p50 model except day of the month. When the p80 model was validated on empirical data, the AUC was 81.9% (95% CI 80.6–83.2). Despite this nominally better performance, the difference was not statistically significant compared with the AUC of the p50 model (80.9%, 95% CI 80.1–81.7).
Practical application of the predictive patient no-show model
Consider a 20-year-old, non-white patient with no previous attendances and one previous same-day appointment, who scheduled an appointment with a psychologist with a lead time of 14 days (appointment weekday and month: Monday and March, respectively). The probability of a no-show according to the p50 model was 0.591; based on the threshold of 0.140, this patient would be classified as a no-show. If an appointment were overbooked in this slot, the model would have an 81% probability of correctly identifying the true positives and negatives of no-show (S1 File).
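As an illustration of how such a prediction could be computed, the sketch below scores this appointment with the hypothetical best_glmm model from the earlier sketches. All column names and factor levels are assumptions; reproducing the 0.591 probability would require the actual p50 coefficients (Table 2) or the calculator in S1 File.

```r
# The appointment from the text; factor levels must match the training data,
# and the new patient/professional IDs fall back to the population level
new_appt <- data.frame(
  age = 20, race = "non-white", lead_time = 14,
  prev_attendance = 0, prev_same_day = 1,
  appointment_type = "scheduled",          # assumed level, not from the paper
  professional_category = "psychologist",
  weekday = "Monday", month = "March",
  patient_id = "new", professional_id = "new"
)

p_no_show <- predict(best_glmm, newdata = new_appt, type = "response",
                     allow.new.levels = TRUE)

# Classify against the validated threshold of 0.140
ifelse(p_no_show >= 0.140, "predicted no-show", "predicted attendance")
```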
Discussion
This study explored the factors associated with no-show in a primary care setting in Southern Brazil and developed and validated a patient no-show predictive model based on empirical data. It revealed that previous patient attendance and same-day appointments were the most important predictors of no-show in the service investigated. More importantly, the results showed that the best model, developed from data already available in the scheduling system, performed well, with an 81% probability of correctly identifying the true positives and negatives of patient no-show.
Like previously published models, our results revealed previous patient attendance as one of the most important predictors of no-show [18–21,33]. Aware of this, Harris et al. developed a predictive no-show model including solely the patient's past attendance history, observing an accuracy of around 0.70 [21], which is slightly lower than that of our best model and of other models with additional factors (i.e., sociodemographic and medical background). Nevertheless, it is difficult to compare the performance of all these models because they (1) were based on different populations, (2) included different predictors (i.e., marital status, religion, socioeconomic status, insurance coverage, and comorbidities), (3) used different modelling techniques, and (4) considered different methods to estimate patients' past attendance history. Even so, their performance was very similar to ours, ranging from 0.69 to 0.82 [18–21,33].
Our study differs from previous models in that we used a mixed-effect modelling approach to account for the variance across patients and health professionals, developed relatively simple models, and compared them using a multimodel inference method. The AIC allows models to be selected while considering the strength of evidence and the uncertainty in the selection process [34]. This information-theoretic approach has been deemed more appropriate for dealing with the complexity of real-world problems and has been used mainly by biologists [34]. In this method, the goal is to identify the best-fit model given the available data, which is quite different from finding the full truth [34,35]. Given the complexity of the no-show issue, which encompasses several factors, the final best model probably did not include the full universe of variables that explain the outcome. Aware of this, we conducted further testing and observed good performance of the final best model in predicting patient no-show when validated on empirical data.
Based on the results of this study, one could explore incorporating the patient no-show predictive model into the scheduling system of the service, which might aid overbooking approaches in programs associated with high non-attendance rates (e.g., Pap smear screening or the Hypertension/Diabetes health care program). Additionally, basing overbooking on a patient no-show predictive model, instead of on a flat non-attendance prevalence, would avoid false positives and hence excessive workload for the health care team [17]. As expected, we found that same-day appointments are less likely to result in a no-show than appointments scheduled in advance. In a systematic review, Ansell et al. [36] found that no-show rates decreased after the implementation of same-day appointments, also referred to as "open access scheduling." The rationale of the open access approach is that patients have access to care at the time they need it most. On the other hand, open access scheduling may overload the health care staff if demand exceeds supply, which could compromise the quality of care [37]. Hence, its implementation would require at least a resizing of the service's patient load [37] and an evaluation of its impact on quality of care [36], which are beyond the scope of this study. Nevertheless, our results may provide some insight for daily agenda optimization, aiming at a balance between same-day and advance-scheduled appointments.
This study has some limitations. Despite the advantage of comparing predictors automatically, the stepwise algorithm may produce spurious associations [25,38]. Taking this into account, we focused on selecting no-show predictors reported in the literature and drawn from the experience of the primary health care team, instead of relying solely on the algorithm's choice. Another issue concerns the missing data excluded from the analysis. We assume the missingness was related to data registration issues and not associated with the variables of interest; under this assumption, the exclusion of missing cases produces unbiased estimates and conservative results [39]. Also, we did not include sociodemographic variables such as educational level and income because of missingness in the database. Further investigation should explore whether these inputs improve the performance of predictive no-show models.
Conclusions
This study developed and validated a patient no-show predictive model based on data from a public primary care setting in Southern Brazil. It revealed that, using information already available in the scheduling system, the best-fit model performed well in predicting no-show when empirically validated. Additionally, the methodology applied in this study may be useful to other health care services for developing predictive no-show models for their specific populations. This approach is expected to support overbooking decisions in scheduling systems. Further investigation is needed to explore the effectiveness of using this model in terms of improving service performance and its impact on the quality of care compared with usual practice.
Supporting information
S1 Table. Types of appointment: Definition based on the primary care service database.
https://doi.org/10.1371/journal.pone.0214869.s001
(DOCX)
S1 File. No-show predictive model calculator.
Practical application of the predictive patient no-show model.
https://doi.org/10.1371/journal.pone.0214869.s002
(XLSX)
Acknowledgments
We thank the primary care team of the Grupo Hospitalar Conceição for contributing their experience. In memoriam of our extraordinary colleague José Mauro Ceratti Lopes, who introduced the authors.
Citation: Lenzi H, Ben ÂJ, Stein AT (2019) Development and validation of a patient no-show predictive model at a primary care setting in Southern Brazil. PLoS ONE 14(4): e0214869. https://doi.org/10.1371/journal.pone.0214869
1. Dantas LF, Fleck JL, Oliveira FLC, Hamacher S. No-shows in appointment scheduling–a systematic literature review. Health Policy. 2018;122: 412–421. pmid:29482948
2. Tuso PJ, Murtishaw K, Tadros W. The Easy Access Program: A Way to Reduce Patient No-Show Rate, Decrease Add-Ons to Primary Care Schedules, and Improve Patient Satisfaction. The Permanente Journal. 1999;3: 68–71.
3. George A, Rubin G. Non-attendance in general practice: a systematic review and its implications for access to primary health care. Fam Pract. 2003;20: 178–184. pmid:12651793
4. Nguyen DL, DeJesus RS, Wieland ML. Missed Appointments in Resident Continuity Clinic: Patient Characteristics and Health Care Outcomes. J Grad Med Educ. 2011;3: 350–355. pmid:22942961
5. Nuti LA, Lawley M, Turkcan A, Tian Z, Zhang L, Chang K, et al. No-shows to primary care appointments: subsequent acute care utilization among diabetic patients. BMC Health Services Research. 2012;12: 304. pmid:22953791
6. Hwang AS, Atlas SJ, Cronin P, Ashburner JM, Shah SJ, He W, et al. Appointment “no-shows” are an independent predictor of subsequent quality of care and resource utilization outcomes. J Gen Intern Med. 2015;30: 1426–1433. pmid:25776581
7. Berg B, Murr M, Chermak D, Woodall J, Pignone M, Sandler RS, et al. Estimating the Cost of No-shows and Evaluating the Effects of Mitigation Strategies. Med Decis Making. 2013;33: 976–985. pmid:23515215
8. Izecksohn MMV, Ferreira JT. Falta às consultas médicas agendadas: percepções dos usuários acompanhados pela Estratégia de Saúde da Família, Manguinhos, Rio de Janeiro. Rev Bras Med Fam Comunidade. 2014;9: 235–241.
9. Bender A da S, Molina LR, Mello ALSF de. Absenteísmo na atenção secundária e suas implicações na atenção básica. Espaço para a Saúde—Revista de Saúde Pública do Paraná. 2010;11: 56–65.
10. Ellis DA, Jenkins R. Weekday Affects Attendance Rate for Medical Appointments: Large-Scale Data Analysis and Implications. PLoS One. 2012;7. pmid:23272102
11. Bean AG, Talaga J. Appointment Breaking: Causes and Solutions. Journal of Health Care Marketing. 1992;12: 14–21. pmid:10123581
12. Lacy NL, Paulman A, Reuter MD, Lovejoy B. Why We Don’t Come: Patient Perceptions on No-Shows. Ann Fam Med. 2004;2: 541–545. pmid:15576538
13. Kaplan-Lewis E, Percac-Lima S. No-Show to Primary Care Appointments: Why Patients Do Not Come. J Prim Care Community Health. 2013;4: 251–255. pmid:24327664
14. Nancarrow S, Bradbury J, Avila C. Factors associated with non-attendance in a general practice super clinic population in regional Australia: A retrospective cohort study. Australas Med J. 2014;7: 323–333. pmid:25279008
15. Norris JB, Kumar C, Chand S, Moskowitz H, Shade SA, Willis DR. An empirical investigation into factors affecting patient cancellations and no-shows at outpatient clinics. Decision Support Systems. 2014;57: 428–443.
16. Torres O, Rothberg MB, Garb J, Ogunneye O, Onyema J, Higgins T. Risk factor model to predict a missed clinic appointment in an urban, academic, and underserved setting. Popul Health Manag. 2015;18: 131–136. pmid:25299396
17. Huang Y, Hanauer DA. Patient No-Show Predictive Model Development using Multiple Data Sources for an Effective Overbooking Approach. Appl Clin Inform. 2014;5: 836–860. pmid:25298821
18. Daggy J, Lawley M, Willis D, Thayer D, Suelzer C, Delaurentis PC, et al. Using no-show modeling to improve clinic performance. Health informatics journal. 2010;16: 246–259. pmid:21216805
19. Reid M, Cohen S, Wang H, Kaung A, Patel A, Tashjian V, et al. Preventing Patient Absenteeism: Validation of a Predictive Overbooking Model. American Journal of Managed Care. 2015;21(12). Available: http://www.ajmc.com/journals/issue/2015/2015-vol21-n12/preventing-patient-absenteeism-validation-of-a-predictive-overbooking-model/
20. Goffman RM, Harris SL, May JH, Milicevic AS, Monte RJ, Myaskovsky L, et al. Modeling Patient No-Show History and Predicting Future Outpatient Appointment Behavior in the Veterans Health Administration. Military Medicine. 2017;182: e1708–e1714. pmid:29087915
21. Harris SL, May JH, Vargas LG. Predictive analytics model for healthcare planning and scheduling. European Journal of Operational Research. 2016;1: 121–131.
22. IBGE. Ethno-Racial Characteristics of the Population | Statistics | Instituto Brasileiro de Geografia e Estatística [Internet]. [cited 16 Feb 2019]. Available: https://www.ibge.gov.br/en/np-statistics/social/population/17590-ethno-racial-characteristics-of-the-population.html?=&t=o-que-e
23. Zhang Z. Univariate description and bivariate statistical inference: the first step delving into data. Ann Transl Med. 2016;4. pmid:27047950
24. Kuhn M. Building Predictive Models in R Using the caret Package. Journal of Statistical Software. 2008;28(5): 1–26.
25. Zhang Z. Variable selection with stepwise and best subset approaches. Ann Transl Med. 2016;4. pmid:27162786
26. Twisk JWR. Applied Longitudinal Data Analysis for Epidemiology: A Practical Guide. Cambridge University Press; 2013.
27. Bolker B. lme4: Linear Mixed-Effects Models using "Eigen" and S4, version 1.1–20 [Internet]. [cited 16 Feb 2019]. Available: https://github.com/lme4/lme4/
28. Lüdecke D. sjstats: Collection of Convenient Functions for Common Statistical Computations [Internet]. 2019. Available: https://CRAN.R-project.org/package=sjstats
29. Grafmiller J. permute.varimp: Permutation variable importance for regression in JGmermod: Custom Functions For Mixed-Effects Regression Models [Internet]. 2017. Available: https://rdrr.io/github/jasongraf1/JGmermod/man/permute.varimp.html
30. Akaike H. A new look at the statistical model identification. IEEE Transactions on Automatic Control. 1974;19: 716–723.
31. Herland M, Khoshgoftaar TM, Wald R. A review of data mining using big data in health informatics. Journal Of Big Data. 2014;1: 2.
32. Robin X, Turck N, Hainard A, Tiberti N, Lisacek F, Sanchez J-C, et al. pROC: an open-source package for R and S+ to analyze and compare ROC curves. BMC Bioinformatics. 2011;12: 77. pmid:21414208
33. Huang Y-L, Hanauer DA. Time dependent patient no-show predictive modelling development. Int J Health Care Qual Assur. 2016;29: 475–488. pmid:27142954
34. Burnham KP, Anderson DR. Model Selection and Multimodel Inference—A Practical Information-Theoretic Approach [Internet]. Springer; 2002. Available: http://www.springer.com/us/book/9780387953649
35. Grueber CE, Nakagawa S, Laws RJ, Jamieson IG. Multimodel inference in ecology and evolution: challenges and solutions. Journal of Evolutionary Biology. 2011;24: 699–711. pmid:21272107
36. Ansell D, Crispo JAG, Simard B, Bjerre LM. Interventions to reduce wait times for primary care appointments: a systematic review. BMC Health Serv Res. 2017;17. pmid:28427444
37. Kiran T, O’Brien P. Challenge of same-day access in primary care. Can Fam Physician. 2015;61: 399–400. pmid:25971751
38. Anderson DR, Burnham KP. Avoiding Pitfalls When Using Information-Theoretic Methods. The Journal of Wildlife Management. 2002;66: 912.
39. Kang H. The prevention and handling of the missing data. Korean J Anesthesiol. 2013;64: 402–406. pmid:23741561
© 2019 Lenzi et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Patient no-show is a prevalent problem in health care services, leading to inefficient resource allocation and limited access to care. This study aimed to develop and validate a patient no-show predictive model based on empirical data. A retrospective study was performed using scheduled appointments between 2011 and 2014 from a Brazilian public primary care setting. Fifty percent of the dataset was randomly assigned to model development and 50% to validation. Predictive models were developed using stepwise naïve and mixed-effect logistic regression along with the Akaike Information Criterion to select the best model. The area under the ROC curve (AUC) was used to assess the best model's performance. Of the 57,586 appointments scheduled in the period, 70.7% (n = 40,740) were evaluated, comprising 5,637 patients. The prevalence of no-show was 13.0% (n = 5,282). The best model presented an AUC of 80.9% (95% CI 80.1–81.7). The most important predictors were previous attendance and same-day appointments. The best model, developed from data already available in the scheduling system, performed well in predicting patient no-show. The model is expected to support overbooking decisions in the scheduling system. Further investigation is needed to explore the effectiveness of using this model in terms of improving service performance and its impact on quality of care compared with usual practice.