Introduction
Retention in large longitudinal epidemiological cohort studies is vital yet challenging [1]. When attrition is high, research findings may not be valid. Health surveys are an essential component of longitudinal data collection, yet getting participants to return and complete follow-up surveys can be difficult [2]. Individuals fail to complete follow-up surveys for many reasons, including difficulty accessing or submitting the survey, technical issues, not receiving messages, lack of interest, surveys that are boring or too long, and lack of time or poor timing [3]. Large cohort studies that used surveys for data collection have had varying response rates, and only a few have used online follow-up surveys [4–6]. Prior studies have shown that interventions such as phone calls, postal mailings, or both improve follow-up survey completion rates [7–12]. This extensive literature on survey methods shows that multimodal survey methods are more effective than single-mode methods. Response rates for electronic, Web-based surveys improve when they are followed up by letter mailings, phone calls, or both [3], illustrating the importance of multimodal approaches.
The All of Us Research Program, or All of Us, aims to recruit at least 1 million participants, 75% of whom will be from populations historically underrepresented in biomedical research, and to retain them over the many years of this longitudinal program [13,14]. For underrepresented populations, All of Us relied on designated definitions of diversity guided by leading authorities on health disparities and through consultation with a variety of resources and stakeholders, as described in our prior work [14]. During a participant’s journey, All of Us collects multiple sources of information, one of which is health surveys completed by participants through a participant portal [15]. All of Us administers health surveys at baseline (i.e., enrollment) and at subsequent time points to collect information on various and timely topics, such as COVID-19. The surveys participants completed are available at https://www.researchallofus.org/data-tools/survey-explorer/. The baseline surveys and average completion times in minutes (mean, standard deviation) were: The Basics (6.69, 3.48), Lifestyle (2.94, 1.65), and Overall Health (3.10, 1.32). The follow-up surveys and completion times were Health Care Access & Utilization (6.41, 2.72), Personal Health History (7.15, 3.88), and Family Health History (6.61, 3.90). These longitudinal and diverse participant data will inform precision medicine research.
All of Us initially relied on email communication alone for follow-up activities. Given the challenges of a digital-only communication approach for retaining diverse populations [4,6,16], All of Us aimed to understand and address the low response rates to follow-up health surveys. While retention can include multiple activities within All of Us (e.g., additional consents for genetic return of results and completion of follow-up surveys), our focus in this manuscript is on the retention strategies yielding completion of follow-up surveys. There is a scarcity of both theoretical and practical guidance on crafting optimal surveys for mixed-mode data gathering, such as utilizing both telephone and postal methods [17]. Nevertheless, survey creators often opt for a mixed-mode strategy because it mitigates the drawbacks of each mode while maintaining affordability. By blending a primary method with a secondary, pricier one, researchers can benefit from reduced costs and errors compared to a single-mode approach. Mixed-mode designs entail a deliberate balance between expenses and inaccuracies, particularly addressing non-sampling errors such as frame or coverage error, nonresponse error, and measurement error [17]. All of Us developed a pilot study that used mixed contact modes and multiple reminders through three retention strategies: 1) telephone appointments, 2) postal mailing, and 3) a combination of telephone appointments and postal mailing. The hypothesis was that completion rates of follow-up surveys would increase in at least one of the retention arms and that the telephone appointment arm would be more effective than the postal arm and less effective than the combination arm. In this manuscript, we describe the methods and results of the pilot study to inform All of Us’ efforts to increase participant retention through completion of the follow-up surveys.
Methods
Design of pilot and participating sites
Fig 1 describes the goal and design of the pilot. The pilot period was from April 27, 2020, to August 3, 2020. A subset of the All of Us participating clinic sites (N = 50) chose their study arm based on their resource capabilities. Staff at the participating sites were instructed to adhere to the methods for the study (see Standard Operating Procedures appendix). All of Us participants provide consent when they join the program; because that consent covers the use of their data for research, participants were aware their data would be used for research purposes. The All of Us Institutional Review Board determined that this study did not meet the criteria for human subjects research.
[Figure omitted. See PDF.]
Fifty sites affiliated with All of Us volunteered for one of the three pilot arms. The horizontal bar shows the progression of activities completed at the site level, using the combination arm as an example.
Identification of participants in the intervention and control groups
The All of Us Research Program aims to enroll a diverse group of at least one million people living in the United States, regardless of health status, age, race, ethnicity, sexual orientation, gender identity, or socioeconomic status. The goal is to gather health data from a wide range of individuals to better understand how genetics, lifestyle, environment, and other factors contribute to disease and overall health [13]. Eligible participants comprised those who had consented and completed the three baseline surveys but not all three follow-up surveys. Study sites identified the eligible participants for recruitment. Participants from other All of Us sites were not included in our analyses because, during the study period, they had been exposed to a retention activity not associated with any of the methods used in this study.
Control and interventions
Study sites chose their study arm and participants were not randomized. Participants in all arms and in the control group were asked to complete all surveys. Eligible participants from the study sites who were not assigned to the pilot interventions made up the control group. The control group also included participants from an additional site who were not exposed to any retention pilot or activity; these participants were included to increase the size of the control group and thus allow for more extensive analyses. Program-level digital recontact via email or through the participant portal occurred during the pilot period for all eligible participants, including controls: a recontact campaign for the genetic return of results and another All of Us survey, the COVID-19 Participant Experience Survey [18], with additional email reminders.
In the telephone appointment arm, staff members called participants who had not completed the surveys, relayed the importance of survey completion, and offered to schedule appointments for these participants to complete the surveys over the phone. The telephone appointment not only reminded individuals about the surveys but also offered them the opportunity to complete the surveys over the phone, similar to computer-assisted telephone interviewing (CATI), which may be more convenient for them. CATI refers to a method of conducting surveys or interviews over the telephone using a computer program to assist interviewers in the process. The software typically helps with tasks such as questionnaire administration, data entry, and management, increasing the efficiency and accuracy of the interview process [19].
The postal arm used the U.S. Postal Service to send reminders to participants to complete their surveys. These reminders consisted of a compelling call-to-action letter and survey instruction brochure that reminded participants of the login procedures for the participant portal.
The combination arm consisted of an initial contact via a phone call followed by a postal mailing if the participant had not completed a survey within thirty days of the initial call. The initial calls were made when staff were available rather than during a specific time period within the pilot implementation. The postal mailing included the introductory letter and the survey instruction brochure to facilitate portal access for survey completion.
Data collection
The sites implemented their chosen strategy for at least two months, beginning on or after April 27, 2020. Participating sites received training in the study protocol and data entry procedures. Data for participants were entered by the sites into a Research Electronic Data Capture (REDCap) [20] instrument and obtained from the Data Research Center’s Raw Data Repository, the central repository for All of Us participant survey data. These anonymized data were accessed retrospectively for research purposes starting on August 10, 2020.
Study data collected through REDCap included participant ID, number of contacts made to reach participants, appointments scheduled and kept, and if the mail was returned undelivered. Over 100 staff members at study sites entered and monitored their site’s study data. Staff used multiple methods for entering data into REDCap, including importing eligible participant identifiers, entering individual participant records manually, and importing all of their data into the central REDCap repository using a local instance of REDCap that included the project data collection instruments.
Data from the Data Research Center’s Raw Data Repository included survey completion with the completion date, demographic, and other covariate data for each participant. The demographic data included race and ethnicity, age, sexual orientation and gender, educational attainment, geography (e.g., in a non-urban or urban area), and household income. These demographic data, except for geography, are all self-reported. We compared the results for participants historically represented in biomedical research to those historically underrepresented in biomedical research. The Raw Data Repository data were linked to REDCap data using the participant identification number.
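As a concrete illustration of this linkage step, the sketch below joins hypothetical site-entered REDCap records to Raw Data Repository survey-status data on the participant identifier; the file names and column names are assumptions for illustration, not the program's actual schema.

```r
# Minimal sketch of linking site-entered REDCap records to Raw Data Repository
# survey data on the participant identifier. File and column names are
# illustrative assumptions, not the program's actual schema.
library(readr)
library(dplyr)

redcap <- read_csv("redcap_contact_log.csv")   # participant_id, n_contacts, appt_kept, mail_returned
rdr    <- read_csv("rdr_survey_status.csv")    # participant_id, survey_name, completion_date, demographics

linked <- redcap %>%
  inner_join(rdr, by = "participant_id") %>%
  mutate(completed_followup = !is.na(completion_date))
```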
The study data were examined to ensure that all site participant records met the study’s eligibility criteria.
Statistical analysis
The study analysis aimed to determine the following: (1) whether completion rates of at least one follow-up survey increased in one or more study arms; (2) whether at least one follow-up survey completion rates increased more in the telephone appointment arm than the postal arm; and (3) whether at least one follow-up survey completion rates increased more in the combination arm than the other two arms.
The primary outcome of interest is whether participants completed at least one additional follow-up survey during the pilot period. For example, if a participant completed three baseline surveys before the pilot period and then completed one or more follow-up surveys during the pilot period, they were considered to have completed additional follow-up surveys. Incremental completion rates were presented for the three study arms and one control arm across three periods (before the pilot, during the pilot, and two months after the pilot), with 95% confidence intervals constructed using the binomial distribution. Note that incremental completion rates are zero before the pilot by definition. Given the All of Us program’s emphasis on enrolling underrepresented populations, we also stratified the incremental completion rates by underrepresented (UBR) and represented (RBR) groups [14].
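For illustration, the sketch below computes one incremental completion rate with an exact (Clopper–Pearson) binomial 95% CI via base R's binom.test; the paper states only that CIs used the binomial distribution, so the authors' exact interval construction may differ. The counts shown are the telephone appointment arm totals reported in the Results and Abstract.

```r
# Sketch: incremental completion rate with an exact binomial 95% CI.
# Counts are the telephone appointment arm totals reported in the Results;
# the authors' exact CI construction may differ from Clopper-Pearson.
n_eligible  <- 6253   # participants in the telephone appointment arm
n_completed <- 1448   # completed at least one follow-up survey during the pilot

rate <- n_completed / n_eligible
ci   <- binom.test(n_completed, n_eligible)$conf.int
sprintf("Completion rate %.1f%% (95%% CI %.1f%%-%.1f%%)",
        100 * rate, 100 * ci[1], 100 * ci[2])
```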
This study is observational, with interventions chosen by sites, so it is crucial to control for confounding bias in the analysis. We used a propensity score (PS) model: the PS is defined as the probability of intervention assignment conditional on observed baseline characteristics, and we estimated it with a logistic regression model for the binary indicator of whether participants received any intervention. Covariates included UBR criteria such as age, sex and gender minority status, income, education, geography, race, and ethnicity, as well as the number of missing follow-up surveys. We compared the PS distribution between the intervention arms and the control arm, assessing the overlap of their ranges. As shown in Table 1 and Fig 2, the intervention and control arms are not balanced and their PS distributions differ, though their PS ranges overlap well.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
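A minimal sketch of this propensity score step in R is given below; the data frame `dat` and its column names are illustrative assumptions, and the overlap check mirrors the comparison summarized in Fig 2.

```r
# Sketch of the propensity score (PS) model: logistic regression for the
# probability of receiving any intervention given observed covariates.
# `dat` and its columns are illustrative assumptions.
library(ggplot2)

ps_model <- glm(any_intervention ~ age_group + sex_gender_minority + income +
                  education + geography + race_ethnicity + n_missing_surveys,
                data = dat, family = binomial)

dat$ps <- predict(ps_model, type = "response")

# Visual check of PS overlap between intervention and control participants
ggplot(dat, aes(x = ps, fill = factor(any_intervention))) +
  geom_density(alpha = 0.4) +
  labs(x = "Propensity score", fill = "Any intervention")
```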
Various approaches have been proposed in the literature to address confounding bias, including PS matching, PS weighting, PS stratification, and multivariable regression models [21]. The first two estimate the marginal causal effect, while the latter two estimate the conditional treatment effect. For this study, we used multivariable regression models based on the following considerations. (1) Preliminary evidence: descriptive statistics showed different intervention effects between UBR and RBR groups, highlighting the need to study interactions between intervention arms and the characteristics included in the UBR definition; (2) Future impact: moderator effects of participant characteristics are important for future personalized interventions; (3) Available methodology: PS matching, weighting, and stratification were not designed to study interactions between confounders and treatment.
Therefore, we used a mixed-effects logistic regression model to examine intervention outcomes relative to different covariates and explored interactions to identify intervention outcomes and the effects of select demographic variables. The multivariable regression models included a random intercept for enrollment site to account for clustering by site and adjusted for covariates that could have affected the completion of follow-up surveys. These models compared results between each of the intervention groups and the control group. The covariates included the demographic variables listed above, intervention study arm, missingness of the Social Security number question and not providing an email address (as proxies for participant engagement), health literacy score from the Brief Health Literacy Scale (part of the All of Us Overall Health Survey), time of enrollment since All of Us initiation (in weeks), number of missing follow-up surveys, and enrollment site size (total number of participants at the site).
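A sketch of this model using the lme4 package (one of the packages listed under Statistical analysis) is shown below; the data frame and variable names are illustrative assumptions rather than the study's actual variable names.

```r
# Sketch of the mixed-effects logistic regression with a random intercept for
# enrollment site. Variable names are illustrative assumptions.
library(lme4)

fit <- glmer(
  completed_followup ~ arm + age_group + race_ethnicity + education + income +
    geography + sex_gender_minority + ssn_skipped + no_email +
    health_literacy + weeks_since_enrollment_start + n_missing_surveys +
    site_size + (1 | site_id),
  data = dat, family = binomial,
  control = glmerControl(optimizer = "bobyqa")
)
summary(fit)
```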
We performed additional interaction analyses in the regression models to compare the effects of the study arms, relative to controls, on completion of follow-up surveys across five demographic categories: 1) self-reported race/ethnicity, 2) sex and gender, 3) education, 4) geography, and 5) income. We also analyzed completion rates among participants of different age groups, education levels, and racial and ethnic backgrounds.
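Building on the model above, one such interaction analysis could be sketched as follows with the emmeans package; the arm-by-education interaction and the variable names are illustrative, and the sketch assumes the control arm is the reference level of `arm`.

```r
# Sketch of an arm-by-education interaction analysis with arm contrasts vs.
# control within each education level. Variable names are illustrative, and
# "control" is assumed to be the first (reference) level of `arm`.
library(lme4)
library(emmeans)

fit_int <- glmer(completed_followup ~ arm * education + age_group +
                   race_ethnicity + (1 | site_id),
                 data = dat, family = binomial)

emm <- emmeans(fit_int, ~ arm | education)
contrast(emm, method = "trt.vs.ctrl", type = "response")  # odds ratios vs. control
```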
Statistical analyses were conducted using lme4, forestploter, sjPlot, emmeans, survey, dplyr and ggplot2 packages in R software (version 4.4.0) with significance defined as two-sided P < .05.
Results
Descriptive analysis
A total of 50 sites participated. The final study sample included 17,593 participants in the intervention group and 47,832 in the control group (Table 2). Among those in the intervention group, 6,253 participants were in the telephone appointment arm, 5,940 in the postal arm, and 5,400 in the combination arm. The records of 692 participants who did not meet the study’s eligibility criteria or who had withdrawn from All of Us were excluded from the analysis.
[Figure omitted. See PDF.]
Of the 65,425 participants in the intervention and control groups, 20,427 (30.9%) were White, 1,628 (2.4%) Asian, 25,615 (39.2%) Black or African American, 11,114 (17%) Hispanic, Latino, or Spanish, and 6,641 (10.1%) did not provide racial and ethnic information. In terms of education, 10,056 (15.4%) participants had less than a high school education, 16,947 (25.9%) had a high school education, 16,307 (24.9%) had some college education, 19,384 (29.6%) had a college degree, and 2,731 (4.1%) did not provide education information. By age, 23,139 (35.4%) participants were between 18 and 45 years, 29,466 (45%) between 45 and 65, 9,272 (14.2%) between 65 and 75, and 3,548 (5.4%) were older than 75. A total of 6,828 (10.4%) completed at least one follow-up survey, and of these, 6,725 (98.5%) completed all three follow-up surveys.
The follow-up survey completion rate was 24% higher in the telephone appointment arm than in the control group. In addition, there was an approximately 10% increase in the completion of follow-up surveys among participants in both the intervention and control groups (Fig 3). Regardless of study arm or control status, more participants from historically represented groups completed follow-up surveys than participants from historically underrepresented groups (Fig 4).
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Multivariable analysis
When we controlled for covariates (Fig 5), participants in the telephone appointment and combination arms were both more likely to complete follow-up surveys than those in the control arm during the study period (odds ratio [OR], 2.01; 95% confidence interval [CI], 1.81–2.23 for the telephone appointment arm and OR, 1.91; 95% CI, 1.66–2.20 for the combination arm). The postal arm was not significantly different from the control arm (OR, 0.92; 95% CI, 0.79–1.07).
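Odds ratios and confidence intervals of this form can be obtained by exponentiating the model's fixed-effect estimates and their confidence limits; the sketch below continues the hypothetical `fit` object from the Methods sketch and uses Wald intervals, while the authors' exact CI method is not stated and may differ.

```r
# Sketch: odds ratios with 95% Wald CIs from the fixed effects of the fitted
# mixed model (`fit` from the earlier sketch). The paper does not state its
# exact CI method, which may differ.
library(lme4)

est <- fixef(fit)
ci  <- confint(fit, parm = "beta_", method = "Wald")  # fixed effects only
round(exp(cbind(OR = est, ci)), 2)
```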
[Figure omitted. See PDF.]
When we considered specific ages, the predicted probability of follow-up survey completion was higher for participants between the ages of 60 and 70 (Fig 6).
[Figure omitted. See PDF.]
For the interaction analysis, the effect of the interventions on the follow-up survey completion rate did not differ significantly by sexual and gender minority status or income. However, the interventions had different effects across racial and ethnic, education, and geography groups (Fig 7A–7C). The telephone appointment arm had higher follow-up survey completion rates than the other arms for all racial and ethnic groups, while the postal arm had lower completion rates among Black and African American participants (Fig 7A). Increases in follow-up survey completion rates were larger among participants with less than a high school education, a high school education, or some college education in the combination arm, but survey completion rates increased more among those with a college degree in the telephone appointment arm (Fig 7B). The telephone appointment arm was more effective among participants living in areas with both rural and non-rural ZIP codes (Fig 7C). The postal arm was not as effective as the other two arms among participants from rural areas.
[Figure omitted. See PDF.]
Differences in completion of surveys in the pilot arms compared to controls are shown for race/ethnicity in (a), highest level of education attained in (b), and geography (urban vs. rural) in (c).
Discussion
Our study reports the effectiveness of three different retention strategies with over 65,000 participants, primarily from diverse populations underrepresented in biomedical research, within All of Us. While follow-up survey completion increased in all three intervention arms and the control arm, the telephone appointment intervention demonstrated the largest increase in bivariate analyses, with completion 24% higher than in controls. When we controlled for covariates, both the telephone appointment and combination arms were similarly effective at increasing completion rates compared to the control arm. The difference between the bivariate and multivariable results is likely due to confounding differences between the pilot arms and controls (Table 1). However, the difference in completion rate increases between the postal and control arms was not statistically significant.
A systematic review by Booker et al. [22] of retention methods in population-based cohort studies reviewed 17 studies and described strategies, such as reminders, repeat contacts (phone or mail), and repeated mailings of questionnaires, that increased survey response rates by 2%–37%. Higher numbers of reminder letters or postcards led to higher response rates; one study showed increased response rates for mailing a second questionnaire compared with a postcard. Using multiple retention methods, including phone calls, increased survey response rates by more than 70%. Our results showed increases of up to 24% with phone calls. Our lower rates of improvement could be due to the scale of the program or a population that may be more challenging to retain.
Some of the interventions appeared to be more effective in different populations. The telephone appointment intervention was more effective than the other arms among rural participants and those with at least an undergraduate college degree. In contrast, the combination intervention was more effective among those with less than a college education. There may be multiple reasons for these differences, in addition to differences in populations. Sites could have spent additional time completing phone calls, whereas in the combination arm, sites also had to implement a postal component and might not have had as much time to complete additional phone follow-ups. Site bias was also possible: sites that could not afford to add postal mailing may have chosen another arm, or sites may have had better connections with their lower-education participants, leading to higher completion. Differential budget support for retention activities may have contributed to these findings. Some participant characteristics that are potential confounders had significant effects in both the intervention and control arms, such as skipping the Social Security number question, not providing an email address, having low health literacy, and being of a certain age, race or ethnicity, or level of education. These study findings suggest that All of Us should develop and adopt a precision retention framework to strategically direct retention strategies toward participants who would most benefit from each strategy, such as phone appointment interventions for African American participants.
Although some of the interventions were particularly effective in certain groups, for other groups the interventions were no more effective than the control arm. For example, the effects of the different interventions did not differ significantly from that of the control arm among participants with sexual and gender minority status or a lower household income. This could be because of the digital recontact in the control arm. If sites focused on participants who were less likely to complete additional survey modules because they were less digitally literate, more of the digitally literate participants might have completed follow-up surveys in the control arm. The postal intervention was generally less effective than the other two interventions among populations historically underrepresented in biomedical research.
Although the study’s interventions helped increase follow-up survey completion rates, All of Us still faces significant follow-up survey completion challenges. Our pilot interventions align with the literature on survey methods indicating that single-mode survey methods are far less effective than multimode methods [3,7–12], and they increased completion rates by up to 25%. In a systematic review by Booker et al. [22], multiple retention methods, including face-to-face interviews, increased the survey completion rate by more than 70%. Future pilot interventions that include face-to-face interviews could improve rates more significantly. Another systematic review, by Edwards et al. [23], found that across the 49 trials evaluating the impact of monetary incentives, the odds of responding to traditional mailed surveys doubled when participants received remuneration. All of Us partners have begun to pilot incentives to engage participants in follow-up surveys. These incentives varied and included five dollars per follow-up survey completed, thirty dollars for completion of all three follow-up surveys within a four-week period, and different amounts for completing them within certain timeframes.
Anecdotally, we learned that participants may have experienced challenges when attempting to complete outstanding follow-up surveys under a digital-only strategy for study reminders, including technical challenges that were participant-based or related to the participant portal. Digital-only communication strategies in large, diverse populations traditionally underrepresented in biomedical research are insufficient to motivate survey completion. All of Us conducted a year-long pilot using computer-assisted telephone interviews (CATI) as an additional modality to facilitate survey completion among participants [19]. Findings showed that use of CATI increased survey completion in All of Us. Given the evidence of interest and success, especially among UBR participants, the program has incorporated CATI as an evidence-based, multimodal strategy for survey completion.
Several limitations warrant caution in interpreting this study. First, the study did not randomly assign participants to intervention or control arms (sites chose the intervention arm and which participants to contact as part of the study). We used propensity scores to demonstrate that, after adjusting for the covariates we evaluated, the likelihood of assignment to the intervention and control groups overlapped well. Using a randomized controlled design within this real-world context was not feasible because sites had to continue implementing other retention activities during the study period to meet All of Us requirements. However, future evaluations of retention strategies will benefit from random assignment. Second, sites could have implemented other retention interventions simultaneously with the study interventions. We described standard operating procedures for these interventions, and sites agreed to hold off on other retention activities during the study period; however, we could not confirm whether this was the case. Third, selection bias may be present, as some sites opted to participate in the study while others did not. Fourth, we assumed that REDCap entries included records of all attempted participant contacts and that follow-up survey completion was recorded regardless of whether contact attempts were successful. Since this study used an intent-to-treat analysis, the results could represent a conservative estimate of the actual effect. Finally, other potential confounders, including digital literacy level, access to digital technologies, and specific site characteristics, could not be explored in these analyses. Analyzing these data could lead to further insights into the effect of these strategies.
The lessons learned from this study about retention interventions and improvement in follow-up survey completion rates provide generalizable knowledge for other similar cohort studies and demonstrate the potential value of precision reminders and engagement with sub-populations of a cohort. All of Us will use the findings from this pilot study to improve retention approaches and to offer tested strategies, including systematic tracking and site-level adaptations, to All of Us consortium members. In addition, the lessons learned from the real-world implementation of the study interventions (including lessons about costs, staffing, and other resources needed) from this pilot study will contribute to the design of future enrollment and retention pilot interventions for All of Us and other similar cohort studies. All of Us will also need to assess the generalizability, scalability, and cost of each retention strategy and place those factors into the context of their effect on follow-up survey completion rates. Other sites or other sizeable longitudinal cohort programs that choose to adopt one of the three interventions used in this study could assess if they have similar results to those in the study and which factors contribute to the success or failure of these interventions in local contexts.
Acknowledgments
We wish to thank our participants who have joined All of Us and contributed to the surveys, helped refine early materials, engaged in developing and evaluating the surveys, and provided other ongoing feedback. We thank the countless co-investigators and staff across all awardees and partners, without which All of Us would not have achieved our current goals.
We also thank the following NIH staff who provided their expertise in the pilot analysis: Chris Foster, Sarra Hedden, and Tamara Litwin. In addition, we would like to recognize the contributions of the project planning team of All of Us investigators, site managers, and NIH staff in designing pilot strategies and the standardized protocol and data collection before implementation.
All of Us Survey Committee Members: James McClain, Brian Ahmedani, Michael Manganiello, Kathy Mazor, Heather Sansbury, Alvaro Alonso, Sarra Hedden, Randy Bloom
Pilot Core at the DRC: Cassie Springer, Ashley Able, Ryan Hale, and Lina Suileman
We also wish to thank All of Us Research Program Director Josh Denny, Holly Garriock, Stephanie Devaney, and our partners Vibrent, Scripps, and Leidos.
"Precision Medicine Initiative, PMI, All of Us, the All of Us logo, and The Future of Health Begins with You are service marks of the US Department of Health and Human Services."
References
1. Eysenbach G. The law of attrition. J Med Internet Res. 2005;7(1):e11. pmid:15829473; PMCID: PMC1550631.
2. Teague S, Youssef GJ, Macdonald JA, Sciberras E, Shatte A, Fuller-Tyszkiewicz M, et al. Retention strategies in longitudinal cohort studies: a systematic review and meta-analysis. BMC Med Res Methodol. 2018;18(1):151. pmid:30477443; PMCID: PMC6258319.
3. Couper MP, Peytchev A, Strecher VJ, Rothert K, Anderson J. Following up nonrespondents to an online weight management intervention: randomized trial comparing mail versus telephone. J Med Internet Res. 2007;9(2):e16. pmid:17567564; PMCID: PMC1913938.
4. Shih T-H, Fan X. Comparing response rates from web and mail surveys: a meta-analysis. Field Methods. 2008;20(3):249–71.
5. Bertrand KA, Gerlovin H, Bethea TN, Palmer JR. Pubertal growth and adult height in relation to breast cancer risk in African American women. Int J Cancer. 2017;141(12):2462–70. pmid:28845597; PMCID: PMC5654671.
6. Blumenberg C, Barros AJD. Response rate differences between web and alternative data collection methods for public health research: a systematic review of the literature. Int J Public Health. 2018;63(6):765–73. pmid:29691594.
7. Dillman DA, Smyth JD, Christian LM. Internet, phone, mail, and mixed-mode surveys: the tailored design method. John Wiley & Sons; 2014.
8. Couper MP. New developments in survey data collection. Annu Rev Sociol. 2017;43:121–45.
9. Dillman DA. The promise and challenge of pushing respondents to the web in mixed-mode surveys. Statistics Canada; 2017.
10. de Leeuw ED. Mixed-mode: past, present, and future. Survey Research Methods; 2018.
11. Freedman VA, McGonagle KA, Couper MP. Use of a targeted sequential mixed mode protocol in a nationally representative panel study. J Surv Stat Methodol. 2018;6(1):98–121. pmid:29607348.
12. Patrick ME, Couper MP, Laetz VB, Schulenberg JE, O’Malley PM, Johnston LD, et al. A sequential mixed-mode experiment in the US National Monitoring the Future Study. J Surv Stat Methodol. 2018;6(1):72–97.
13. Denny JC, Rutter JL, Goldstein DB, Philippakis A, Smoller JW, Jenkins G, et al. The "All of Us" Research Program. N Engl J Med. 2019;381(7):668–76. pmid:31412182.
14. Mapes BM, Foster CS, Kusnoor SV, Epelbaum MI, AuYoung M, Jenkins G, et al. Diversity and inclusion for the All of Us Research Program: a scoping review. 2020.
15. Cronin RM, Jerome RN, Mapes B, Andrade R, Johnston R, Ayala J, et al. Development of the initial surveys for the All of Us Research Program. Epidemiology. 2019;30(4):597–608. pmid:31045611; PMCID: PMC6548672.
16. Kongsved SM, Basnov M, Holm-Christensen K, Hjollund NH. Response rate and completeness of questionnaires: a randomized study of Internet versus paper-and-pencil versions. J Med Internet Res. 2007;9(3):e25. pmid:17942387; PMCID: PMC2047288.
17. De Leeuw ED. To mix or not to mix data collection modes in surveys. Journal of Official Statistics. 2005;21(2):233.
18. All of Us Research Hub. Participant Surveys: COVID-19 Participant Experience. 2022 [cited 2022 Sep 12]. Available from: https://www.researchallofus.org/data-tools/survey-explorer/cope-survey/.
19. Peterson R, Hedden SL, Seo I, Palacios VY, Clark EC, Begale M, et al. Rethinking data collection methods during the pandemic: development and implementation of CATI for the All of Us Research Program. J Public Health Manag Pract. 2024;30(2):195–9. pmid:38271102; PMCID: PMC10827348.
20. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81. pmid:18929686; PMCID: PMC2700030.
21. Austin PC. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behav Res. 2011;46(3):399–424. pmid:21818162.
22. Booker CL, Harding S, Benzeval M. A systematic review of the effect of retention methods in population-based cohort studies. BMC Public Health. 2011;11:249. pmid:21504610; PMCID: PMC3103452.
23. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324(7347):1183. pmid:12016181; PMCID: PMC111107.
Citation: Cronin RM, Feng X, Able A, Sutherland S, Givens B, Johnston R, et al. (2024) Improving follow-up survey completion rates through pilot interventions in the All of Us Research Program: Results from a non-randomized intervention study. PLoS ONE 19(10): e0308995. https://doi.org/10.1371/journal.pone.0308995
About the Authors:
Robert M. Cronin
Roles: Writing – original draft
E-mail: [email protected]
Affiliation: Department of Internal Medicine, The Ohio State University, Columbus, OH, United States of America
ORCID: https://orcid.org/0000-0003-1916-6521
Xiaoke Feng
Roles: Data curation, Visualization
Affiliation: Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, United States of America
ORCID: https://orcid.org/0000-0001-6108-9959
Ashley Able
Roles: Writing – review & editing
Affiliation: Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, United States of America
Scott Sutherland
Roles: Data curation, Writing – review & editing
Affiliation: Vibrent Health, Fairfax, VA, United States of America
Ben Givens
Roles: Data curation, Writing – review & editing
Affiliation: Vibrent Health, Fairfax, VA, United States of America
Rebecca Johnston
Roles: Writing – review & editing
Affiliation: Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, United States of America
Charlene Depry
Roles: Writing – review & editing
Affiliation: All of Us Research Program, Office of the Director, National Institutes of Health, Bethesda, MD, United States of America
Katrina W. Le Blanc
Roles: Writing – review & editing
Affiliation: All of Us Research Program, Office of the Director, National Institutes of Health, Bethesda, MD, United States of America
Orlane Caro
Roles: Writing – review & editing
Affiliation: Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, United States of America
Brandy Mapes
Roles: Writing – original draft, Writing – review & editing
Affiliation: Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, United States of America
Josh Denny
Roles: Writing – original draft, Writing – review & editing
Affiliation: All of Us Research Program, Office of the Director, National Institutes of Health, Bethesda, MD, United States of America
ORCID: https://orcid.org/0000-0002-3049-7332
Mick P. Couper
Roles: Writing – original draft, Writing – review & editing
Affiliations: Survey Research Center, University of Michigan, Ann Arbor, MI, United States of America, Department of Biostatistics, School of Public Health, University of Michigan, Ann Arbor, MI, United States of America, Institute for Social Research, Survey Research Center, University of Michigan, Ann Arbor, MI United States of America
Qingxia Chen
Roles: Writing – original draft, Writing – review & editing
Affiliations: Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, United States of America, Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States of America, Vanderbilt Eye Institute, Vanderbilt University Medical Center, Nashville, TN, United States of America
Irene Prabhu Das
Roles: Writing – original draft, Writing – review & editing
Affiliation: All of Us Research Program, Office of the Director, National Institutes of Health, Bethesda, MD, United States of America
This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication: https://creativecommons.org/publicdomain/zero/1.0/.
Abstract
Objective
Retention to complete follow-up surveys in extensive longitudinal epidemiological cohort studies is vital yet challenging. All of Us developed pilot interventions to improve response rates for follow-up surveys.
Study design and setting
The pilot interventions occurred from April 27, 2020, to August 3, 2020. The three arms were: (1) telephone appointment [staff members calling participants offering appointments to complete surveys over phone] (2) postal [mail reminder to complete surveys through U.S. Postal Service], and (3) combination of telephone appointment and postal. Controls received digital-only reminders [program-level digital recontact via email or through the participant portal]. Study sites chose their study arm and participants were not randomized.
Results
A total of 50 sites piloted interventions with 17,593 participants, while 47,832 participants comprised controls during the same period. Of all participants, 6,828 (10.4%) completed any follow-up surveys (1,448 telephone; 522 postal; 486 combination; 4,372 controls). Follow-up survey completions were 24% higher in the telephone appointment arm than in controls in bivariate analyses. When controlling for confounders, the telephone appointment and combination arms increased completion rates similarly compared to controls, while the postal arm had no significant effect (odds ratio [95% confidence interval]: telephone appointment, 2.01 [1.81–2.23]; combination, 1.91 [1.66–2.20]; postal, 0.92 [0.79–1.07]). Although the effects of the telephone appointment and combination arms were similar, differential effects were observed across sub-populations.
Conclusion
Telephone appointments appeared to be the most successful intervention in our study. Lessons learned about retention interventions and improvement in follow-up survey completion rates provide generalizable knowledge for similar cohort studies and demonstrate the potential value of precision reminders and engagement with sub-populations of a cohort.