Abstract
The objective of this study was to identify patient preferences for outpatient diagnostic imaging services and analyze how patients make trade-offs between attributes of these services using a discrete choice experiment (DCE). We used a DCE with 14 choice questions asking which imaging locations patients would prefer. We used latent class analysis to analyze preference heterogeneity between different patient groups and to estimate the relative value they assign to different attributes of imaging services. Our analysis showed that the “Experienced Patients” subgroup generally valued diagnostic imaging services in both acute and chronic situations and had a strong preference for hospital outpatient radiology departments (HORDs) that provided services at lower cost, where their images would be interpreted by a specialty radiologist, the clinic was recommended by their primary care provider (PCP), online scheduling was available, service ratings were higher, and travel and wait times were shorter. “New Patients” significantly valued only the HORD’s service rating and online scheduling. HORDs can be more competitive by providing services that live up to these expectations better than available retail radiology clinics (RRCs). Most RRCs do not currently offer online scheduling, so ease of use may also steer patients toward HORDs. HORDs have the further advantage of being linked to a main medical center with a reputation for clinical expertise and more sophisticated technology. We conclude that there is room for medical centers to build HORDs that provide an appealing and competitive alternative to current RRCs.
Introduction
An important challenge for many hospitals and clinics is addressing the escalating diagnostic imaging demands placed on hospital outpatient radiology departments (HORDs) [1]. HORDs have struggled to keep up with the increasing need for imaging, with MRI and CT imaging rates for adults more than doubling from 2006 to 2016 [2]. This escalating demand for services, coupled with an increase in patient complexity, not only increases costs but also reduces accessibility and quality. Delays in imaging at HORDs can also lead to delayed care delivery, which can negatively impact patient outcomes.
Retail radiology clinics (RRCs) emerged from the general retail health clinics’ market boom as a potential solution to the imaging bottleneck at HORDs. RRCs offer a variety of diagnostic imaging services, typically at lower cost, at decentralized clinics [3–5]. RRCs have gained popularity by cutting wait times, increasing accessibility, and reducing costs [3–5]. Despite the increase in RRC utilization, price variability and unreliable quality in the technology used, image interpretation, and reporting have led RRCs to be labeled low-value substitutes for HORD services [6–8]. For example, one study found significant variability in diagnostic reports from 10 different RRCs performing an MRI scan of the lower back on the same patient within a 3-week period [5]. Another study reported that radiology departments in tertiary care centers are frequently asked to perform secondary interpretations of imaging studies, finding that discrepancy rates vary widely [6].
In parallel, hospital networks are struggling with care integration and coordination, as well as maintaining quality standards across all departments and campuses within their network. Some hospital systems are now looking at options to develop their own RRCs to provide convenient and efficient locations, without sacrificing the technological or interpretation quality, while leveraging the existing trust patients may have with their hospital system [3–4]. This is a potentially valuable innovation for patients, as they would be able to receive the “specialty read” quality at locations with shorter wait times and lower out-of-pocket costs.
It remains unclear, however, what aspects of imaging services patients value the most. In the push for patient-centeredness, the Centers for Medicare and Medicaid Services (CMS) recommends hospital departments incorporate patient insight into their service quality measures [7]. Despite widespread encouragement to incorporate the patient voice in health delivery decisions, developments such as RRCs are usually done without meaningful input from patients [8]. Existing research on patient preferences in diagnostic imaging services is sparse and what exists is often contradictory [9–15].
The few studies that did investigate imaging preferences focused largely on reporting methods. One study looked at patient preferences for CT and MRI imaging results and focused on how, from whom, and how soon patients preferred to receive results [10]. The authors asked those questions of patients directly in a survey and found that patients wanted their results to be communicated much sooner than is currently practiced. Another study focused on receiving radiology results and reviewing the images and findings directly with a radiologist after completion of an examination [11]. Patients preferred hearing examination results from both their ordering provider and the interpreting radiologist [12]. Beyond results communication, other studies focused on the referral mechanism [13] and the importance of wait time for test results [14]. None of these studies focused on patient preferences for where to seek imaging services, including practical considerations such as parking and logistics. There have been studies on preferences for primary care services [15] and for health service delivery in general [16], but this study focuses specifically on preferences for imaging services.
The objective of this study was to identify patient preferences for outpatient diagnostic imaging services and analyze how patients make trade-offs between attributes of these services using a stated choice experiment. In this study, we analyze patients’ individual preference heterogeneity for these services and study how preferences vary among and within different patient populations.
Methods
Qualitative study
To elicit stated preferences for imaging services, we designed a discrete choice experiment (DCE), which allows researchers to analyze the trade-offs that patients are willing to make, including options that may not currently exist but could in the future [17]. Before the experimental design, we needed to explore which attributes of imaging services matter to patients. We therefore conducted focus groups among patients at a midsize academic medical center (henceforth referred to as the medical center) who volunteered to share their experiences with imaging services. Inclusion criteria were 1) adults over 18 years who 2) had received outpatient radiology imaging services in the last year. Exclusion criteria were 1) persons under the age of 18, 2) inpatient status, or 3) not having received outpatient radiology services in the last year. Participants were fully informed and gave their consent by participating in the focus group and demographic surveys. We recruited from a pre-existing group of patient advisors to the medical center. Prior to the session, participants were provided a detailed information sheet on the project. Compensation for participation included a meal and parking vouchers.
Two focus groups with 12 participants in total were conducted in a semi-structured manner, each lasting between 90 and 120 minutes. Group sizes were intended to fall between 4 and 8 people in order to stimulate a conducive group conversation without the risk of too many voices preventing individual experiences from being shared [18]. A trained member of the project team moderated the focus groups using a question guide aimed at understanding how patients perceive their radiology experiences. Questions were intended to be exploratory and were sometimes followed by probes to allow differences between patient insights and experiences to emerge. All sessions were audio recorded with participants’ informed consent. Member checking assessed the credibility of responses, whereby the moderator paraphrased their interpretation of an ambiguous response and participants confirmed or rephrased it [19]. Follow-up probes asking for more detail or specific examples were also used. The demographic characteristics of the participants can be found in Appendix 2.
We transcribed the focus groups following their completion. The transcriptions were then analyzed using ATLAS.ti version 8 qualitative analytic software, following the phases of thematic analysis: 1) familiarization with the data through reading, re-reading, and noting initial ideas; 2) generation of initial codes; 3) searching for potential themes and collating all relevant data to each theme; 4) reviewing themes and generating a thematic map; and 5) refining and specifying themes before producing the final report [20]. Themes are reported based on their frequency within and across groups and the intensity with which they were discussed. The frequencies of the participant-identified attributes discussed within and across focus groups are outlined in Table 1 to illustrate how the attributes in the DCE were defined.
[Figure omitted. See PDF.]
Discrete choice experiment
Following the focus groups, we designed a DCE in which patients were asked to choose the imaging clinic they preferred. Each choice task had three different clinics which varied by 10 different attributes, which were determined by the themes identified in the focus groups. Four attributes had 2 levels, and six attributes had 3 levels. The levels were partly defined on concrete contributions from patients in the focus groups and partly on information from the medical center around realistic real-life levels.
Table 2 shows the attributes and levels used in the DCE. Following the analysis of the focus groups, we included the following attributes: whether the interpreting radiologist is a general or sub-specialty radiologist; whether the clinic was recommended by their primary care physician (PCP); time to results; out-of-pocket cost; wait time to an appointment; travel time to the clinic; parking costs; parking accessibility; service; and whether or not online scheduling is available. Service is a multifactorial attribute (e.g., staff attentiveness and facility amenities) combined into a star rating. The rating scale runs from one to five stars, with a five-star rating representing excellent service and a one-star rating suggesting poor service, as rated by other hypothetical patients. The star ratings are based on the CMS Five-Star Quality Rating System, which was created to help consumers, their families, and caregivers compare clinics more easily and to help identify areas about which they may want to ask questions. A rating of 1 or 2 stars means that the clinic’s performance was below the average of other agencies on selected measures; it does not necessarily mean care is poor. A rating of 4 or 5 stars means that the clinic’s performance was above the average of other agencies on selected measures. Costs were “pivoted” around a respondent’s current out-of-pocket costs: $25 less or $25 more. We used pivot-style stated choice data for out-of-pocket costs to include a reference alternative whose attributes remain invariant across replications for the same respondent [21].
[Figure omitted. See PDF.]
Participants were asked to imagine a situation where they were hurt and needed imaging services. In the survey, the following descriptions of the choice situation were given based on common reasons for imaging services:
* Situation 1: “For the purpose of this study, suppose you hurt your arm and your primary care provider wants to send you for an X-ray. You have three options of locations where you can have your imaging done.”
* Situation 2: “For the purpose of this study, suppose you hurt your back a while ago and are having persistent pain. Your primary care provider wants to send you for an MRI. You have three options of locations where you can have your imaging done.”
Following the DCE choice tasks, we asked patients attitudinal questions on a 5-point Likert scale ranging from “strongly disagree” to “strongly agree”. These questions were borrowed from and validated by the national Medical Expenditure Panel Survey (MEPS) [19], a set of large-scale surveys of families and individuals, their medical providers, and employers across the United States. The questions focused on perceived need for healthcare, need for insurance, risk aversion, and perceived personal health status.
Experimental design
Once the relevant attributes and levels were chosen, it was desirable to exclude dominant options, repeated choice sets, and implausible attribute combinations. There are three ways to reduce the dimensions of the full factorial design matrix to a fractional factorial design: random designs, orthogonal designs, and efficient designs [17]. A design is considered more efficient if it produces data from which more reliable parameter estimates can be obtained with an equal or lower sample size [22]. The researcher specifies utility functions that include prior parameter estimates (“priors”), and these are used to determine the logit probabilities and the log-likelihood functions [23,24].
Our experimental design of the DCE was based on “prior” estimates of the utilities for attributes of the choice, calculated using expectations of what the model parameters would be. These priors were initially based on the medical center’s experience with wait times, parking availability, et cetera, drawing on administrative data and results from the focus groups. We then produced sample data from a pilot comprising the first 20 respondents to the survey: we paused data collection after their responses and estimated preliminary models on them, which we refer to as the “prior” estimates. Once the priors were established and the utility functions defined, we used the software program NGene to incorporate the specific priors from the initial 20 respondents into the utility functions. The advantage of the NGene algorithm is that it searches for a list of choice sets in which dominant alternatives do not appear, choice sets are not repeated, and the number of choice sets whose answer can be inferred from the previous one is minimized.
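As a point of reference, the standard formulation behind such efficient designs (a general sketch, not the exact specification implemented in NGene) writes the prior utility of alternative j in choice task s as a linear index whose logit probabilities feed the efficiency criterion:

$$V_{js} = \sum_{k=1}^{K} \tilde{\beta}_k x_{jsk}, \qquad P_{js} = \frac{\exp(V_{js})}{\sum_{j'=1}^{J} \exp(V_{j's})},$$

where $x_{jsk}$ is the level of attribute $k$ for alternative $j$ in choice task $s$ and $\tilde{\beta}_k$ is the prior parameter value. A D-efficient design selects the attribute levels to minimize the D-error, $\left[\det \Omega(X, \tilde{\beta})\right]^{1/K}$, where $\Omega$ is the asymptotic variance-covariance matrix of the parameter estimates implied by the design; the related S-efficiency criterion instead minimizes the sample size needed for each parameter to reach statistical significance.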
In this way, 14 choice questions in our survey were enough to derive efficient data regarding the utilities that respondents assigned to the different attributes of imaging services. In some cases, an efficient design includes many choice tasks. A blocking experimental design can then be used to avoid too much of a cognitive burden for the respondent. Blocks are subsets of the choice questions, usually equally sized, that contain a limited number of choice questions for each respondent. In those cases, respondents are randomly assigned to a block and answer the choice questions in that block instead of the entire design. In our study, respondents were randomized to either 14 choice questions related to X-ray services or 14 questions related to MRI services. The vignette, which can be found in Appendix 1, was different for the two DCEs, and some of the levels, such as costs, were also different. We did not use attribute level overlap, since there was no significant relationship between the different attributes, unlike in DCE instruments based on the EQ-5D-5L [25]. An example of a choice question regarding imaging services is shown in Fig 1. We used the online research software SurveyEngine for the entire online survey, which can be found in Appendix 3, including the DCE.
[Figure omitted. See PDF.]
Data source, participants and study size estimation
Our data were sampled from an online Centiment panel from April 11, 2021, through November 19, 2021, with a second sample between May 16, 2022, and June 24, 2022. Centiment is a survey company that recruits individuals to answer surveys in exchange for rewards for themselves or earnings pledged to a nonprofit of their choice; participation is open to anyone. Centiment has engineered complex systems to manage its respondents and ensure they provide thoughtful responses. Centiment contacted 472 individuals in the catchment area of a moderate-size academic medical center in a rural Northeastern part of the United States. Participants completed written consent before continuing to the online survey questions; without the online consent, respondents were not able to proceed. The answers were recorded in the data. All data were fully anonymized before being shared with the study team. The Institutional Review Board at the University of Vermont reviewed the study and determined it was exempt from full review.
Of the total sample, 268 finished the survey and met initial inclusion criteria (Age >= 18): 134 were assigned to the arm X-ray group, 134 to the back MRI group. We excluded 98 subjects for failing consistency criteria, qualifying for a closed quota, failing a bot-behavior check, or failing an attention (response quality) check, leaving a final sample of 170: 84 in X-ray, 86 in MRI. The quotas that were agreed on used Census data for age by decade, gender, region, race/ethnicity, and income.
The S-efficient design we generated in NGene showed that we needed a minimum of 55 respondents, so our sample size was sufficient. Bots were identified through regular expression (RegEx) screening and manual review of the free-text entry boxes. Typically, the bots in our sample entered nonsense or repeated the question’s text into those fields. In addition, we sought to identify fraudulent data by defining a priori indicators that warranted elimination or suspicion, an approach borrowed from another study [26].
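A minimal sketch of this kind of free-text screening is shown below; the field names, patterns, and thresholds are illustrative assumptions rather than the exact rules used in the study.

```python
import re

# Illustrative free-text screening for bot-like responses. Patterns and
# thresholds are assumptions for illustration, not the study's exact rules.
GIBBERISH = re.compile(r"[^A-Za-z\s]{5,}")   # long runs of non-letter characters
REPEATED_CHAR = re.compile(r"(.)\1{4,}")     # the same character repeated 5+ times


def looks_like_bot(free_text: str, question_text: str) -> bool:
    """Flag a free-text answer as bot-like if it is empty, echoes the
    question text, or contains gibberish patterns."""
    text = free_text.strip().lower()
    if not text:
        return True
    if question_text.strip().lower() in text:
        return True
    return bool(GIBBERISH.search(text) or REPEATED_CHAR.search(text))


# Example: a response that parrots the question is flagged; a substantive one is not.
question = "Why did you choose this clinic?"
print(looks_like_bot("Why did you choose this clinic?", question))  # True
print(looks_like_bot("It was closest to my home.", question))       # False
```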
For attention, we then checked for consistency, filtered out respondents whose completion time fell below a threshold of 7 minutes, and removed any respondents who showed straight-lining behavior, meaning that a respondent always picked the same response to the choice questions [27–29]. As each subject answered 14 choice tasks, we obtained an effective sample size of n = 2,380 choice observations for modeling. We used NGene 1.2.1 (ChoiceMetrics, 2018) to estimate the minimum sample size required for this study.
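The sketch below illustrates these attention filters under the assumption of one row per respondent, with a completion time in minutes and the chosen alternative for each of the 14 tasks; the column names and rules are hypothetical.

```python
import pandas as pd

# Illustrative attention filters. Column names ("completion_minutes",
# "choice_1" ... "choice_14") and thresholds are assumptions for illustration.
MIN_MINUTES = 7
N_TASKS = 14


def apply_attention_filters(df: pd.DataFrame) -> pd.DataFrame:
    choice_cols = [f"choice_{t}" for t in range(1, N_TASKS + 1)]
    too_fast = df["completion_minutes"] < MIN_MINUTES
    # Straight-lining: the same alternative chosen in every one of the 14 tasks.
    straight_lining = df[choice_cols].nunique(axis=1) == 1
    return df.loc[~(too_fast | straight_lining)].copy()


# Toy example: respondent 0 alternates choices, respondent 1 straight-lines.
toy = pd.DataFrame(
    {"completion_minutes": [12, 9],
     **{f"choice_{t}": ["A" if t % 2 else "B", "A"] for t in range(1, N_TASKS + 1)}}
)
print(apply_attention_filters(toy))  # keeps only respondent 0
```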
Statistical methods
We used a mixed multinomial logit model (MMNL) to estimate the probability of a choice alternative being chosen, depending on the characteristics of the choice (attributes and levels) and the characteristics of the chooser [30–32]. Mixed logit models rely on continuous statistical distributions to represent unobserved heterogeneity [33]. A mixed logit model allows for random taste variation, unrestricted substitution patterns, and correlation in unobserved factors [30,31]. Mixed logit models also make it feasible to derive individual-specific estimates conditional on the observed individual choices [34].
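In its standard general form (shown here as background rather than our exact specification), the mixed logit probability that respondent $n$ chooses alternative $i$ integrates the logit formula over the mixing distribution of tastes:

$$P_{ni} = \int \frac{\exp(\beta' x_{ni})}{\sum_{j} \exp(\beta' x_{nj})} \, f(\beta \mid \theta) \, d\beta,$$

where $x_{ni}$ contains the attribute levels of alternative $i$, $\beta$ is a vector of individual taste coefficients, and $f(\beta \mid \theta)$ is the continuous mixing distribution whose parameters $\theta$ (e.g., means and standard deviations) are estimated.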
A different approach is to use discrete rather than continuous distributions and to probabilistically segment a sample population into different classes, as in latent class analysis (LCA) [33]. LCA explores deterministic heterogeneity by incorporating explanatory variables as multiplicative interaction terms. We used LCA in Stata 18 (StataCorp LLC, 2023), which addresses the issue of unobserved preferences of patients by probabilistically segmenting a sample population into different groups or “classes” based on a latent variable [34]. The latent class logit (LCL) model might explain, for example, that patients who had previous imaging services are more likely to fall into the class that is more sensitive to appointment wait time, while older patients might be more likely to fall into the class that is more sensitive to PCP recommendation. Class membership is first defined by a membership function including the indicator variables, after which the utility functions of the different classes can be estimated.
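For comparison, the standard latent class logit formulation (again a general sketch rather than our exact specification) replaces the continuous mixing distribution with a small number of discrete classes, with membership probabilities driven by the indicator variables:

$$P_{ni} = \sum_{c=1}^{C} \pi_{nc} \, \frac{\exp(\beta_c' x_{ni})}{\sum_{j} \exp(\beta_c' x_{nj})}, \qquad \pi_{nc} = \frac{\exp(\gamma_c' z_n)}{\sum_{c'=1}^{C} \exp(\gamma_{c'}' z_n)},$$

where $\beta_c$ are the class-specific taste coefficients, $z_n$ are respondent $n$'s indicator variables (e.g., age, gender, number of previous scans), $\pi_{nc}$ is the probability that respondent $n$ belongs to class $c$, and one class's $\gamma$ is normalized to zero for identification.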
We used both approaches in this study to seek some understanding of the relative merits of both modeling strategies, each regarded as an advanced interpretation of discrete choice models, as others have done [35]. Both models offer alternative ways of capturing unobserved heterogeneity and other potential sources of variability in unobserved sources of utility [35].
Results
Descriptive results
A total of 84 patients answered questions about preferences for attributes of an X-ray; 86 people responded to the MRI choice questions. The summary statistics are reported in Table 3. On average, patients answering choice questions on X-ray tended to be female (60%), white (91%) and live in rural areas (58%); about half had private insurance (46%) and few had met their insurance deductible (21%). Patients answering the choice questions on the MRI were similar, with 67% female, 96% white, 69% in rural areas, 45% with private insurance, 22% had met their insurance deductible. Patients receiving the X-ray choice questions had had an average of 4.8 previous images while MRI patients had had 2.9.
[Figure omitted. See PDF.]
Mixed multinomial logit results
The results of the MMNL model are shown in Table 4, where we separate results for X-ray and MRI. We used 1,000 Halton draws in both models; the literature on the number of Halton draws required for valid random-parameter estimation with DCE data suggests that, depending on the number of random parameters, stable mixed logit estimation requires at least 1,000 draws. We found that out-of-pocket costs, interpreting doctor specialty, whether or not the clinic was recommended by the primary care provider, the wait time to results, the clinic service rating, and online scheduling were all statistically significant and had the expected signs for both MRI and X-ray. Patients were less likely to choose a clinic if the out-of-pocket costs were higher and the wait time to results was longer, but more likely to choose it if images were interpreted by a specialty radiologist, the clinic was recommended to them by their primary care provider, the service rating was higher, and online scheduling was available. For X-ray, free parking was associated with a higher probability of choosing a clinic.
[Figure omitted. See PDF.]
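As background on why the number of draws matters (standard simulated maximum likelihood, not necessarily the exact estimator settings used here), each choice probability is approximated by averaging the logit kernel over $R$ Halton draws from the mixing distribution, and the resulting simulated log-likelihood is maximized over $\theta$:

$$\check{P}_{ni} = \frac{1}{R} \sum_{r=1}^{R} \frac{\exp(\beta_r' x_{ni})}{\sum_{j} \exp(\beta_r' x_{nj})}, \qquad \beta_r \sim f(\beta \mid \theta), \qquad \mathrm{SLL}(\theta) = \sum_{n} \sum_{i} y_{ni} \ln \check{P}_{ni},$$

where $y_{ni} = 1$ if respondent $n$ chose alternative $i$. Too few draws leave simulation noise in the estimates, which is why at least 1,000 Halton draws are commonly recommended.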
The attributes that mattered most to patients for both MRI and X-ray were specialty radiologist reading (0.732, standard error [SE] = 0.116 for MRI; 0.374, SE = 0.122 for X-ray); recommendation by the primary care provider (0.626, SE = 0.112 for MRI; 0.678, SE = 0.155 for X-ray); and the clinic’s service rating (0.520, SE = 0.096 for MRI; 0.451, SE = 0.126 for X-ray).
Latent class analysis
Table 5 shows the results of the latent class model, where age, gender, education, income, whether someone had more than two previous scans, whether they considered themselves healthier than others, whether they had private insurance, and whether they were more likely to take risks than others were the indicator variables estimating the probability of class membership. We compared 2-class, 3-class, 4-class, and 5-class models and found the best model fit for the 2-class model, based on the log-likelihood and the Bayesian Information Criterion (BIC). We found that 54.7 percent of respondents were in class 1 and 45.3 percent in class 2. Being female, being older, having at least two previous scans, and being less likely to take risks than others were highly predictive of membership in class 1 (p < 0.02), which we therefore labeled the “Experienced Patients” class. Those who had fewer than two previous scans were more likely to be in class 2 (2.7111, p < 0.01), as were male patients (-0.8479, p = 0.02) and younger patients (-1.7197, p = 0.02); we labeled this class “New Patients”. The results also show that a 1-point increase on the Likert scale for “more likely to take risks than others” made a respondent significantly more likely to be in class 2 (-3.1354, p = 0.01).
[Figure omitted. See PDF.]
When decentralizing some services away from the main hospital, the focus should be on passing on the medical center’s high service rating to the HORD and offering online scheduling. Indeed, patients value easier access, shorter wait times, and lower out-of-pocket costs, as the growing popularity of RRCs has shown. HORDs can gain in popularity by making sure they attain and retain high star ratings while offering better access than RRCs. Primary care providers can potentially play an important role in directing patients to HORDs for their diagnostic imaging services.
For patients in class 1 (“Experienced Patients”), costs (-0.1774), specialty read (0.3512), PCP recommendation (0.3890), travel time (-0.1474), wait time (-0.1165), service (0.3033), and online scheduling (0.1240) all had significant effects (p < 0.01) with the expected signs. For example, patients in the “Experienced Patients” class were more likely to choose a clinic location if it was lower cost, took less travel time, had shorter wait times for results, if results were read by a specialty radiologist, the clinic was recommended by their PCP, service was better, and online scheduling was available. Patients in the “New Patients” class only cared about online scheduling (4.7542) and service rating (1.1149), but these effect sizes were large. None of the other attributes of the service significantly affected their choice.
Marginal rates of substitution
Table 6 shows an analysis of the trade-offs patients were willing to make, known as the marginal rates of substitution. We found that for patients in the Experienced Patients class, even though most attributes significantly affected their choice of clinic, the effect sizes were considerably smaller than for the New Patients class. Experienced Patients were willing to pay: $2 more than what they currently pay (out-of-pocket) to have their images read by a specialty radiologist; $2 more to go to a clinic recommended by their PCP; $0.70 more for online scheduling; $1.70 more for a 1-point higher star rating; $0.80 more for a clinic that would be 1 minute closer than their current one; $0.65 more for a clinic with a 1-hour shorter wait; and $0.17 more for a clinic that would decrease their walk-up time by 1 minute. New Patients were willing to pay $17 more for online scheduling and $4 more for a higher star rating.
[Figure omitted. See PDF.]
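These willingness-to-pay values follow from the standard marginal rate of substitution between each attribute and the cost attribute; as a worked check using the class 1 (Experienced Patients) coefficients reported above:

$$\mathrm{WTP}_k = -\frac{\beta_k}{\beta_{\text{cost}}}, \qquad \mathrm{WTP}_{\text{specialty read}} = -\frac{0.3512}{-0.1774} \approx \$1.98,$$

which rounds to the $2 reported in Table 6; applying the same ratio to the PCP recommendation, service rating, and online scheduling coefficients reproduces the approximately $2, $1.70, and $0.70 figures above.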
Attribute importance
Patients were also asked to rank-order the attributes by how they prioritized them when making a choice of clinic. This allowed us to assess attribute non-attendance, meaning that when processing the attributes, some patients may not consider particular attributes at all. The violin plot in Fig 2 shows the results of the question on attribute importance. The results reflect a self-reported order of importance, which may not be consistent with how respondents subconsciously assign values to attributes of a choice in the DCE. We see that the results are consistent with the MMNL results: the most important attributes included interpreting doctor specialty level, PCP recommendation, and costs. There was a small difference between the X-ray and MRI arms: among X-ray patients, service rating and parking access also ranked highly.
[Figure omitted. See PDF.]
Discussion
In this paper, we sought to identify patient preferences for outpatient diagnostic imaging services in the service area of a medium-sized academic medical center. We analyzed how patients make trade-offs between attributes of services using a discrete choice experiment. We explored patients’ individual preference heterogeneity for these services and reported how preferences vary among and within different groups of patients. In our base analysis, we found that specialty reading of images, PCP recommendation, lower costs, shorter travel and wait times, a higher star rating (representing better service or reputation), and online scheduling are all significant predictors of choice regarding where to get diagnostic imaging services. However, when we segmented the sample population into latent classes, we found that males, younger people, and people who are more likely to take risks than others only cared about online scheduling and the service rating of the facility. We termed this group “New Patients” as they had significantly fewer previous scans and did not highly value health services. Insurance status, health status and chronic conditions, education, and income did not define class membership.
Overall, our analysis showed that the “Experienced Patients” subgroup, which generally values diagnostic imaging services in both acute and chronic situations, cares about different attributes of imaging services in HORDs than the “New Patients”, who significantly valued only the service rating of the hospital outpatient radiology department (HORD) and online scheduling. This study was performed within the hospital service area of a midsize academic medical center in a rural area in the Northeast of the United States, so it is unclear how these results translate to patient preferences at the national level. External validity remains a challenge for any DCE study, although, while an important component, it has been argued by others that the investigation of external validity should be much broader than a comparison of final outcomes [40].
Overall, however, we conclude from our findings that HORDs can be more competitive by providing services that live up to expectations better than available retail radiology clinics (RRCs). Most RRCs do not currently offer online scheduling, so ease of access may also steer patients toward HORDs. HORDs have the advantage of being linked to the main medical center, which has a reputation for clinical expertise and more sophisticated technology. We conclude that there is room for medical centers to build HORDs that provide an appealing and competitive alternative to current RRCs.
These results also suggest that decision-makers looking to decentralize imaging services while incorporating patient preferences for attributes of those services should differentiate between the different patient sub-populations they are serving. This requires careful consideration of patient characteristics as well as preferences. Overall, we found in this study that New Patients care about the reputation or star rating of a clinic and the availability of online scheduling. Additionally, Experienced Patients – who make up most radiology users – focus on wait time, price, and recommendations from primary care providers. This is a concrete message for medical centers seeking to decentralize their services away from the main hospital: patients will want to know that the new location offers the same service level in addition to convenience, and they will rely on their primary care providers for advice, suggesting that outreach to primary care providers will be important for success. Follow-up work, using a larger sample size, should further analyze preference heterogeneity and establish in more detail how trade-offs differ between and within individual patients. Cheaper services may be important to some patients, while for others the service level matters more than cost. Studying this preference heterogeneity in more detail will provide a better understanding of these trade-offs and the potential take-up of new, decentralized services.
Follow-up work should also establish whether there is preference heterogeneity for concierge radiology in general. Concierge services may be decentralized and focus on offering direct access to a subspecialty-trained radiologist, dedicated resources, and a standard turnaround time for image interpretation. A personalized, patient-centered, and attentive approach to image acquisition, interpretation, and reporting leads to a higher level of customer service, but first we need to understand what patients prefer and plan health care service delivery accordingly.
Optimizing patient satisfaction may require a new communication model. This study, however, focused only on the service attributes described above, and we were not able to assess trade-offs around results communication from the survey data.
Limitations
Although Centiment used a quota sampling approach, the gender balance may not be reflective of the total population, though we do not believe this is a major threat to external validity. In addition, while our sample is largely representative of the population in this hospital service area of the academic medical center in the Northeast, we measure intentions for hypothetical choices and cannot say for sure that these consistently translate to real-life behavioral trade-offs, especially in acute situations. We do expect that the isolated study setting may have influenced the results, since respondents in this study do not have very many options in real life. We expect the results to look different for less rural areas where there is already more competition between clinics to offer retail radiology services. Therefore, more work needs to be done to further explore factors that affect decision-making and preferences in these circumstances. We will extend this study to a national DCE including respondents from different areas to see to what extent aspects like rurality, access to care, and choice between different clinics in the region affect attributes of the choice such as parking availability and service level or reputation. We will then also be able to test different heuristics that patients may use when making decisions about seeking care away from their usual sources of care.
The 14 DCE choice sets included 3 alternatives each and 10 attributes with various levels which may be a cognitive burden for respondents. We pre-tested the design with 20 participants and asked them if they felt it was a cognitive burden. None of the respondents answered with “yes”. On average, they took 12 minutes to complete the survey.
Conclusions
In this study, we analyzed the trade-offs patients make between attributes of radiology services to inform decision-making around designing optimal HORDs. Our analysis showed that a patient population can and should be segmented into subgroups that evaluate the value of imaging services differently. The “Experienced Patients” subgroup generally values diagnostic imaging services in both acute and more chronic situations and had a strong preference for a HORD that would provide services at lower costs, where their images would be interpreted by a specialty radiologist, the clinic would be recommended by their PCP, online scheduling would be available, service ratings were higher, and travel and wait times would be shorter. HORDs can therefore be more competitive by providing services that live up to these expectations better than available RRCs. The goal of this study was to get a better understanding of how trade-offs between the attributes are made and whether these preferences differ between patients who currently visit the hospital frequently for imaging services and those who do not.
What we learned from this study is that most RRCs do not currently offer online scheduling, so ease of use may also steer potential future patients toward HORDs. There is an opportunity for hospitals to decentralize some of their services and win patients back to services delivered by the hospital, which may help minimize secondary reads. More importantly, HORDs have the advantage of being linked to the main medical center, which has a reputation for clinical expertise and more sophisticated technology. We conclude that there is room for medical centers to build HORDs that provide an appealing and competitive alternative to current RRCs. RRCs carry the risk of unreliable quality in technology and imaging interpretation, and it is therefore desirable that HORDs provide the same or more benefits while maintaining quality of care.
Supporting information
Appendix 1.
Scenario introduction.
https://doi.org/10.1371/journal.pone.0301404.s001
(DOCX)
Appendix 2.
Demographics participants.
https://doi.org/10.1371/journal.pone.0301404.s002
(DOCX)
Appendix 3.
Survey text.
https://doi.org/10.1371/journal.pone.0301404.s003
References
1. Levin DC, Parker L, Rao VM. Recent trends in imaging use in hospital settings: implications for future planning. J Am Coll Radiol. 2017;14(3):331–6. pmid:27884633
2. Smith-Bindman R, Kwan M, Marlow E, Theis M, Bolch W, Cheng S. Trends in use of medical imaging in US health care systems and in Ontario, Canada, 2000-2016. JAMA. 2019;322:843–56.
3. Boland GWL. Diagnostic imaging centers for hospitals: a different business proposition for outpatient radiology. J Am Coll Radiol. 2007;4(9):581–3. pmid:17845959
4. Iglehart JK. The new era of medical imaging--progress and pitfalls. N Engl J Med. 2006;354(26):2822–8. pmid:16807422
5. Scott Ashwood J, Reid RO, Setodji CM, Weber E, Gaynor M, Mehrotra A. Trends in retail clinic use among the commercially insured [Internet]. Available from: www.ajmc.com
6. Herzog R, Elgort DR, Flanders AE, Moley PJ. Variability in diagnostic error rates of 10 MRI centers performing lumbar spine MRI examinations on the same patient within a 3-week period. Spine J. 2017;17(4):554–61. pmid:27867079
7. Johnson FR, Beusterien K, Özdemir S, Wilson L. Giving patients a meaningful voice in United States regulatory decision making: the role for health preference research. Patient. 2017;10(4):523–6. pmid:28597374
8. Kostrubiak DE, DeHay PW, Ali N, D’Agostino R, Keating DP, Tam JK, et al. Body MRI subspecialty reinterpretations at a tertiary care center: discrepancy rates and error types. AJR Am J Roentgenol. 2020;215(6):1384–8. pmid:33052740
9. Centers for Medicare & Medicaid Services. CMS quality measure development plan: supporting the transition to the Merit-based Incentive Payment System (MIPS) and Alternative Payment Models (APMs) [Internet]. 2016. Available from: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf
10. Basu PA, Ruiz-Wibbelsmann JA, Spielman SB, Van Dalsem VF 3rd, Rosenberg JK, Glazer GM. Creating a patient-centered imaging service: determining what patients want. AJR Am J Roentgenol. 2011;196(3):605–10. pmid:21343503
11. Pahade J, Couto C, Davis RB, Patel P, Siewert B, Rosen MP. Reviewing imaging examination results with a radiologist immediately after study completion: patient preferences and assessment of feasibility in an academic department. AJR Am J Roentgenol. 2012;199(4):844–51. pmid:22997377
12. Cabarrus M, Naeger DM, Rybkin A, Qayyum A. Patients prefer results from the ordering provider and access to their radiology reports. J Am Coll Radiol. 2015;12(6):556–62. pmid:25892226
13. Mangano MD, Bennett SE, Gunn AJ, Sahani DV, Choy G. Creating a patient-centered radiology practice through the establishment of a diagnostic radiology consultation clinic. AJR Am J Roentgenol. 2015;205(1):95–9. pmid:26102386
14. Woolen S, Kazerooni EA, Wall A, Parent K, Cahalan S, Alameddine M, et al. Waiting for radiology test results: patient expectations and emotional disutility. J Am Coll Radiol. 2018;15:274–81.
15. Kleij K-S, Tangermann U, Amelung VE, Krauth C. Patients’ preferences for primary health care - a systematic literature review of discrete choice experiments. BMC Health Serv Res. 2017;17(1):476. pmid:28697796
16. Mühlbacher AC, Bethge S, Reed SD, Schulman KA. Patient preferences for features of health care delivery systems: a discrete choice experiment. Health Serv Res. 2016;51(2):704–27. pmid:26255998
17. van den Broek-Altenburg E, Atherly A. Using discrete choice experiments to measure preferences for hard to observe choice attributes to inform health policy decisions. Health Econ Rev. 2020;10(1):18. pmid:32529586
18. Krueger RA, Casey MA. Focus groups: a practical guide for applied research. 5th ed. 2015. p. 103–27.
19. Harper M, Cole P. Member checking: can benefits be gained similar to group therapy? Qual Rep. 2012;17(2):510–7. Available from: http://www.nova.edu/ssss/QR/QR17-2/harper.pdf
20. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.
21. Hess S, Rose JM. Should reference alternatives in pivot design SC surveys be treated differently? Environ Resource Econ. 2008;42(3):297–317.
22. Vanniyasingam T, Cunningham CE, Foster G, Thabane L. Simulation study to determine the impact of different design features on design efficiency in discrete choice experiments. BMJ Open. 2016;6(7):e011985. pmid:27436671
23. Rose JM, Bliemer MC. Stated choice experimental design theory: the who, the what and the why. In: Handbook of choice modelling. Edward Elgar Publishing; 2014. p. 152–77.
24. Rose JM, Bliemer MCJ. Sample size requirements for stated choice experiments. Transportation. 2013;40(5):1021–41.
25. Jonker MF, Donkers B, de Bekker-Grob E, Stolk EA. Attribute level overlap (and color coding) can reduce task complexity, improve choice consistency, and decrease the dropout rate in discrete choice experiments. Health Econ. 2019;28(3):350–63. pmid:30565338
26. Pratt-Chapman M, Moses J, Arem H. Strategies for the identification and prevention of survey fraud: data analysis of a web-based survey. JMIR Cancer. 2021;7(3):e30730. pmid:34269685
27. Janssen EM, Marshall DA, Hauber AB, Bridges JFP. Improving the quality of discrete-choice experiments in health: how can we assess validity and reliability? Expert Rev Pharmacoecon Outcomes Res. 2017;17(6):531–42. pmid:29058478
28. Veldwijk J, Marceta SM, Swait JD, Lipman SA, de Bekker-Grob EW. Taking the shortcut: simplifying heuristics in discrete choice experiments. Patient. 2023;16(4):301–15. pmid:37129803
29. Johnson FR, Yang JC, Reed SD. The internal validity of discrete choice experiment data: a testing tool for quantitative assessments. Value Health. 2019;22(2):157–60.
30. Hensher D, Greene W. The mixed logit model: the state of practice. Transportation. 2003;30:133–76.
31. McFadden D, Train K. Mixed MNL models for discrete response. J Appl Econ. 2000;15(5):447–70.
32. Hole AR. Mixed logit modeling in Stata--an overview. In: United Kingdom Stata Users’ Group Meetings 2013. 2013 Sep 16 (No. 23). Stata Users Group.
33. van den Broek-Altenburg EM, Atherly AJ, Hess S, Benson J. The effect of unobserved preferences and race on vaccination hesitancy for COVID-19 vaccines: implications for health disparities. J Manag Care Spec Pharm. 2021;27(9-a Suppl):S4–13. pmid:34534008
34. Yoo HI. lclogit2: an enhanced command to fit latent class conditional logit models. Stata J. 2020;20(2):405–25.
35. Greene WH, Hensher DA. A latent class model for discrete choice analysis: contrasts with mixed logit. Transport Res Part B: Methodol. 2003;37(8):681–98.
Citation: van den Broek-Altenburg EM, Benson JS, Atherly AJ, DeStigter KK (2025) Patient preferences for diagnostic imaging services: Decentralize or not? PLoS One 20(5): e0301404. https://doi.org/10.1371/journal.pone.0301404
About the Authors:
Eline M. van den Broek-Altenburg
Roles: Conceptualization, Data curation, Formal analysis, Funding acquisition, Methodology, Project administration, Software, Supervision, Writing – original draft, Writing – review & editing
E-mail: [email protected]
Affiliation: Larner College of Medicine, University of Vermont, Burlington, USA
ORCID: https://orcid.org/0000-0002-4831-9083
Jamie S. Benson
Roles: Conceptualization, Data curation, Formal analysis, Writing – review & editing
Affiliation: Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
ORCID: https://orcid.org/0000-0002-0709-4711
Adam J. Atherly
Roles: Methodology, Supervision, Writing – review & editing
Affiliation: College of Health Professions, Virginia Commonwealth University, Richmond, USA
Kristen K. DeStigter
Roles: Conceptualization, Supervision, Writing – review & editing
Affiliation: Larner College of Medicine, University of Vermont, Burlington, USA
This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication: https://creativecommons.org/publicdomain/zero/1.0/.