Abstract
Diffusion of innovations (DOI) theory identifies critical factors that influence technology adoption rates and offers a predictive model for understanding how innovations spread through populations. While DOI theory encompasses six key perceptual characteristics (relative advantage, compatibility, complexity, trialability, observability, and reinvention), most empirical research operationalizes only Rogers’ five core attributes, rarely integrating reinvention despite its theoretical importance for understanding post-adoption adaptation. This research develops and validates a comprehensive scale measuring all six DOI characteristics, with particular attention to the reinvention construct. Through three independent samples (n = 2,019), we test the scale’s validity within a nomological network, creating an adaptable instrument for studying innovation diffusion that captures the full scope of DOI theory.
Introduction
Diffusion of innovations (DOI) theory is central to scholarship across various disciplines including Communication, Management, and Information Technology. DOI theory identifies critical factors that influence the rate of technology adoption and offers a predictive model to understand human perceptions that contribute to innovation adoption by different segments of the population. DOI is one of the most prominent media effects theories to date, with over 94,000 citations [1]. Through application of DOI theory, scholars are able to home in on processes and outcomes that influence rates of technological adoption. A strength of this theory is its applicability to the adoption of a wide range of innovations, such as hybrid corn seed [2], transportation biking [3], insect consumption [4], and autonomous vehicles [5]. Indeed, a major advantage of DOI theory is that it focuses attention on the capabilities of a wide variety of innovations. Nonetheless, as new innovations and variations of old innovations arise, we may need to revisit the generality of the measures we use to operationalize the perceptions of innovation characteristics that DOI theory outlines.
The purpose of this research is to create a measure for assessing perceptions of innovations that contribute to innovation diffusion, including the often-overlooked reinvention construct. Reinvention is critical to DOI theory because it reflects how innovations are adapted and reshaped by adopters during the diffusion process—making the innovation more compatible with the adopters’ needs, values, or social infrastructure. While DOI theory has been successfully applied to a variety of innovations (cf., [6]), existing measurement approaches face limitations. Most empirical studies operationalize only a subset of Rogers’ five core perceptual attributes – relative advantage, compatibility, complexity, trialability, and observability [6]. These studies consistently exclude reinvention despite its theoretical relevance to post-adoption adaptation behaviors that are particularly common in software contexts. This omission of reinvention marks a significant gap in DOI research, especially in the current era shaped by artificial intelligence. Software innovations, unlike many hardware innovations, are characterized by their malleability and capacity for user-driven customization and adaptation. Users frequently modify software applications, configure settings, and adapt functionality to their specific needs. These behaviors align directly with Rogers’ conceptualization of reinvention as “the degree to which an innovation is changed or modified by a user in the process of its adoption and implementation” [2]. The iterative nature of software development, with frequent updates and user feedback cycles, makes reinvention particularly relevant for understanding how software innovations diffuse and evolve within user communities.
Over the past half-century, DOI theory has given rise to several measures assessing the six key perceptual characteristics, or attributes of innovations, that determine the perception and use of a particular innovation: relative advantage, compatibility, complexity, trialability, observability, and reinvention [2]. However, existing scales face two primary limitations. First, most studies measure only a subset of the six theoretical constructs, with reinvention being particularly neglected. Second, existing measures are typically developed and validated for single innovations [3,7–10], forcing researchers who want to measure the DOI framework to piece together elements of different scales to adapt them to the innovation under investigation.
We addressed these limitations by developing and validating a comprehensive 19-item survey instrument that measures all six perceived innovation characteristics proposed by DOI theory. The scale uses a flexible template format where researchers can insert specific innovations and tasks (e.g., “[Innovation] allows me to accomplish tasks such as [task] more efficiently”), making it adaptable across different contexts while maintaining measurement consistency. To our knowledge, this is the first validated measure to assess all six DOI characteristics across contexts using a standardized, adaptable format. Across three studies, we generated items, assessed the factor structure of our comprehensive DOI scale, and evaluated its associations with related psychological and behavioral constructs, consistent with its theoretical network.
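To illustrate the template logic concretely, the short sketch below fills the bracketed placeholders programmatically. The bracket notation follows the paper's template format; the function name and strings are hypothetical and not part of the validated instrument.

```python
# Hypothetical sketch of the scale's template format; the placeholders follow
# the paper's bracket notation, while the function name is illustrative only.
TEMPLATE = "{innovation} allows me to accomplish tasks, such as {task}, more efficiently."

def instantiate_item(innovation: str, task: str) -> str:
    """Fill the [innovation] and [task] slots of a template item."""
    return TEMPLATE.format(innovation=innovation, task=task)

print(instantiate_item("Facial recognition technology", "unlocking my phone"))
# -> Facial recognition technology allows me to accomplish tasks,
#    such as unlocking my phone, more efficiently.
```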
Overview of diffusion of innovations
Compatibility
Compatibility is the degree to which an innovation is consistent with existing values, needs, and experiences [2]. An innovation can be more or less compatible with (1) cultural values and beliefs, (2) previous ideas, or (3) the adoptee's needs for the innovation [2]. Perceptions of compatibility are positively associated with innovation adoption. Prior work has measured compatibility within a variety of contexts, such as smartwatches [11], personal workstations [12], online purchasing [11], and mobile banking [13]. Compatibility is generally measured by asking whether an innovation meets the needs of the adoptee, is able to integrate seamlessly into one's life, and fits well with the ways that a person likes to do things.
Complexity
Complexity is the degree to which an innovation is perceived as being hard or difficult to use or understand [2]. Complexity has been measured by assessing how frustrating or difficult a person perceives an innovation to be [11], which is negatively associated with innovation adoption, and by assessing how easy or simple an innovation is to use [12], which is positively associated with innovation adoption. Either approach is consistent with the conceptual definition of complexity. Researchers sometimes prefer ease-of-use questions so that the directionality of complexity aligns with the other DOI characteristics [14]. Software may be seen as more complex than hardware because the underlying mechanisms that allow software to function (i.e., bits of code) may be more difficult to understand: they often undergo continuous updates, can be highly customizable, and are sometimes hidden by design. Complexity has been measured to predict innovation adoption in a variety of contexts, such as evidence-based practices in healthcare [15], e-readers [16], and e-health tools [14].
Relative advantage
The extent to which an innovation is perceived as superior to its predecessor or to its alternative is known as relative advantage [2]. An innovation may be perceived as high in relative advantage when a person views the innovation as having higher social status, economic status, efficiency, or benefits than the preceding or alternate innovation [2]. Relative advantage is positively associated with innovation adoption and is commonly measured by comparing an innovation under investigation to named or unnamed prior or current alternatives. For example, participants may be asked whether an innovation enables them to accomplish tasks more quickly in general [12,13], or asked to compare the innovation to a specific alternative [11]. Hardware innovations often bring tangible improvements in performance, durability, or functionality, making it easier for users to recognize their advantages over older hardware. Software innovations often bring improvements by giving users access to different affordances or features, which are often rolled out gradually to an already existing software package. Relative advantage has been used to explain the diffusion of innovations such as cover cropping [17] and smartwatches [11].
Observability
Observability is the degree to which an innovation is noticeable or communicable to others, and is positively associated with innovation adoption. When an innovation is observable, people can see how it benefits others and how to use the innovation. In general, software-based technologies are less observable than hardware-based technologies [2]. Users can physically see and touch hardware devices, making it easier to assess their quality and functionality. This visibility can lead to quicker adoption as users have a clearer understanding of the innovation's use and advantages. Observability has been measured by asking the degree to which people are able to see the benefits of an innovation (e.g., [12]), how to use the innovation, and how others are using the innovation. Observability has been used to explain innovation adoption in contexts such as adoption of mobile applications [9] and personal workstations [12].
Trialability
Trialability is the degree to which an individual is able to try or test an innovation before adopting it, and is positively associated with innovation adoption [2]. For example, if someone has a two-week free trial of a video game, they may be more likely to try the game to see if they would like to adopt it. Trialability is often assessed by asking participants if they were able to use an innovation before adopting it [11,12], or if trialing an innovation contributed to their adoption decision. Hardware innovations can be easier to test compared to software. Users can physically interact with hardware innovations and see how they fit into existing routines. This hands-on experience can lead to quicker adoption, as users can assess the hardware's compatibility and usefulness more directly. Conversely, the barrier to trying certain software may be lower than for hardware, as one can often trial new software from home. Trialability has been associated with innovation adoption in contexts such as cover crops [17] and e-health [14].
Reinvention
Reinvention is the degree to which a user changes or modifies an innovation [2]. After adopting an innovation, users often adapt or reinvent the innovation to better suit their needs [18,19]. Reinvention is associated with increased rates of innovation use [2] and greater sustainability or continuity of use. Reinvention has been thought of both as an innovation adoption type (i.e., an innovation can be adopted, rejected, or reinvented) and as a perceptual characteristic of an innovation (e.g., one could perceive an innovation to be easily customizable) [20]. Reinvention has been used to explain, among other innovations, how political policies change over time [21] as well as how modifying messages changes the diffusion of information in social networks [22].
Despite both Rogers’ conceptualization of reinvention as a core innovation characteristic [2,19] and its usefulness in explaining innovation adoption [19,21,22], empirical DOI research has excluded this construct for several reasons. First, there is a historical bias toward pre-adoption factors: early diffusion studies prioritized the five core perceptual characteristics (relative advantage, compatibility, complexity, trialability, and observability) that influence initial adoption decisions [23,24], whereas reinvention often occurs post-adoption, during the implementation phase [19]. As a result, reinvention has been treated as a peripheral outcome rather than a central perception guiding adoption decisions.
Second, prior studies have typically focused on hardware innovations or tightly scoped software contexts, where reinvention was either less observable or difficult to quantify [5,8,9,14,17]. For example, early DOI studies examining the adoption of agricultural equipment or medical devices focused on binary adoption decisions where users had limited opportunities to modify the innovation’s core functionality. While this approach worked well for many hardware innovations, software innovations present fundamentally different characteristics that challenge traditional DOI measurement approaches; hardware innovations are often more tangible and observable than software. Users can physically see and touch hardware devices, making it easier to assess their quality and functionality. Hardware innovations also have different life cycles and costs than software. Therefore, they are promoted differently and have distinct competitive dynamics. While hardware may diffuse more rapidly in certain circumstances, software innovations have unique advantages and characteristics that can also lead to rapid adoption. Among software innovations, reinvention is likely to play a central role in determining whether and how an innovation will be adopted and sustained. Unlike hardware, software is inherently malleable as users can adjust configurations, find novel use cases, and engage in iterative customization over time. These adaptation behaviors align directly with the concept of reinvention and help explain post-adoption engagement, variation in use, and long-term retention. Innovations that afford greater reinvention may be perceived as more useful, flexible, and compatible with individual workflows, which can in turn increase diffusion rates.
Third, and finally, conceptual ambiguity and definitional inconsistency have impeded the inclusion of reinvention in empirical DOI studies. Rogers [2] conceptualized reinvention as both an adoption outcome (innovations can be adopted, rejected, or reinvented) and a perceptual characteristic (some innovations are perceived as more modifiable), creating theoretical confusion among researchers about how the construct should be measured. Additionally, temporal complexity poses barriers to measuring reinvention because reinvention unfolds dynamically throughout adoption and implementation processes, as demonstrated by [19]. This temporal complexity makes it difficult for researchers to capture ongoing modifications and establish appropriate measurement timeframes. Context dependency further complicates measurement as reinvention manifests differently across organizational and technological contexts, creating trade-offs between context-specific validity and theoretical generalizability [25].
This paper resolves conceptual ambiguity by explicitly treating reinvention as a perceptual characteristic (i.e., users’ beliefs about an innovation’s modifiability) rather than as an adoption outcome. By focusing on reinvention as a perception, we can capture individual-level variation in how users anticipate shaping or adapting an innovation, even if they have not yet done so in practice. This is especially useful in digital and software contexts, where modifiability is often expected and innovations are frequently tailored in real time. Defining reinvention as a perceptual characteristic also avoids conflating the causes and consequences of reinvention, allowing researchers to examine its influence on adoption separately from its downstream effects. In addition, treating reinvention as a perceptual characteristic offers practical advantages for measurement. Specifically, it allows researchers to assess reinvention without needing to track post-adoption behaviors, which pose logistical challenges. Many studies lack the resources to follow users over time, and as a result, important dimensions of innovation adoption are overlooked. A perception-based approach provides a feasible alternative that still captures meaningful variation in how users engage with innovations. Finally, we tackle generalizability challenges through a flexible template approach that allows researchers to insert specific innovations and tasks while maintaining measurement consistency across contexts. Incorporating reinvention into DOI models provides explanatory leverage to account for seemingly similar software tools that experience divergent adoption outcomes. By including reinvention as a core measured attribute, this paper extends the DOI framework’s applicability to digital innovation ecosystems where user-driven modification is a normative and expected behavior.
Beyond establishing a measure for reinvention, the goal of this research is to develop a scale to measure the well-established constructs within DOI theory. Although compatibility, complexity, relative advantage, trialability, observability, and reinvention have been measured extensively throughout the lifespan of DOI theory, most measures either do not focus on assessing software innovations (e.g., [11,12]), have not been updated in the past decade, or do not measure all six perceptual characteristics of DOI theory [11,12,26]. To address these gaps, this research develops a flexible DOI scale that measures the perceptual characteristics of compatibility, complexity, relative advantage, trialability, observability, and reinvention across varying software contexts. To develop and validate this scale we follow the recommendations of [27] through multi-sample validation within a robust nomological network.
Study 1: Item adaptation and CFA
Study 1 adapts questions measuring relative advantage, compatibility, complexity, trialability, observability, and reinvention to four different image recognition software contexts: facial recognition for phone unlock, social media filters, facial recognition for financial technology (i.e., biometric security), and image sensing software such as automatic water faucets. We selected these contexts as they are (1) software-based, (2) used across a variety of settings in daily life, and (3) offer different levels of alternatives to adoption. Although all four contexts employ image recognition technology, they represent distinct software innovations that we suspect will vary across perceptual DOI characteristics. While facial recognition software may come pre-installed on devices such as phones, its adoption does not always match the diffusion of the phone’s hardware. For instance, a user might choose to unlock their phone with a pin or a different kind of passcode instead of facial recognition. Similarly, a user could use a password instead of biometric security when accessing their bank account, and using social media does not mandate the use of facial filters; users have the option to select non-facial filters or not to use any filters at all. Conversely, image sensors often diffuse at the same rate as their hardware; a water faucet that uses an image sensor generally does not come with a manual alternative. By measuring four distinct contexts of image recognition technology, we are able to create a measure designed to be adapted to various software innovations.
We seek to build a flexible measure of relative advantage, compatibility, complexity, trialability, observability, and reinvention by creating a set of questions adapted from prior measures of DOI theory, cited below. Study 1 evaluates the factor structure of these items. As each DOI characteristic is conceptually distinct, items should load onto a six-factor CFA model. If the items load onto a six-factor solution consisting of at least three items per factor, then this suggests that the measures are appropriate for measuring the latent DOI perceptual characteristics and that each characteristic is conceptually distinct from one another.
Methods
Participants
We recruited 325 participants from an undergraduate research subject pool consisting of students taking classes in a communication department from May to June 2023. While the sample size required for a CFA is debated, we adhered to the recommendation by [28] of a minimum sample size of 265 to accommodate any non-normal factor indicators. Of the sample, 61% identified themselves as women (n = 199), 32% as men (n = 105), 2% as non-binary (n = 5), and 5% did not specify their gender (n = 16). Participants’ ages ranged from 19 to 25, with an average age of 19.95 (SD = 1.64). In terms of race and ethnicity, 22% identified as Asian (East Asian 58, South Asian 12, Mixed Asian 2), 2% as Black/African American (7), 15% as Latino (49), 38% as White (123), 3.6% as mixed White/Latino (12), and 15% as either mixed race or other (49). Thirteen participants (4%) did not specify their race.
This research was reviewed and approved by the University of California, Santa Barbara Human Subjects Committee (Protocol #4-23-0221). The study was determined to be exempt under Category 2 of 45 CFR 46.104(d), which covers survey procedures involving adult participants. All participants provided written informed consent prior to participation.
Procedure
In the initial phase of our scale development process, we drew upon existing DOI measures to construct six subscales measuring relative advantage [13], compatibility [12,13,16], complexity [12], trialability [12,14], observability [12,29], and reinvention [2]. This preliminary set of items was derived from a careful review of the DOI literature, with a focus on adapting questions that had demonstrated utility in previous research contexts. Given the well-developed theoretical foundation of DOI research, we determined that adapting existing measures would be efficient and would allow for better comparability with prior studies. To evaluate the applicability of these questions, we asked participants to respond to the set of questions across four different image recognition-based software contexts, as discussed above: facial recognition for phone unlock (phone unlock), social media filters, facial recognition for financial technology (finances), and image sensing software such as automatic water faucets (image sensors).
We conducted a confirmatory factor analysis (CFA) with maximum likelihood estimation using a variance-covariance matrix in Mplus [30]. Following the recommendation of [27], we chose a CFA over an exploratory factor analysis (EFA) because we (1) adapted existing scales and (2) have a sound theoretical basis and understanding of the underlying factor structures of DOI. We specified and ran four different models, one for each image recognition-based software context (i.e., phone unlock, finances, social media filters, and image sensors), each of which comprised six factors, one for each of the six diffusion of innovation perceptual characteristics under investigation (i.e., compatibility, complexity, relative advantage, observability, trialability, and reinvention).
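For readers who wish to reproduce a comparable model specification outside of Mplus, the following is a minimal sketch using the Python semopy package. The item names (ra1 through re4) and the data file are hypothetical; this approximates the six-factor structure but is not the authors’ Mplus syntax, and one such model would be estimated per context.

```python
# A minimal six-factor CFA sketch, assuming item-level responses in a pandas
# DataFrame with hypothetical column names (ra1..re4). The original models
# were estimated in Mplus; semopy is used here only for illustration.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
RelativeAdvantage =~ ra1 + ra2 + ra3 + ra4
Compatibility =~ co1 + co2 + co3 + co4
Complexity =~ cx1 + cx2 + cx3 + cx4
Trialability =~ tr1 + tr2 + tr3 + tr4
Observability =~ ob1 + ob2 + ob3 + ob4
Reinvention =~ re1 + re2 + re3 + re4
"""

data = pd.read_csv("phone_unlock_items.csv")  # hypothetical item-level data
model = Model(MODEL_DESC)
model.fit(data)  # maximum likelihood estimation (semopy's default objective)
print(calc_stats(model)[["CFI", "TLI", "RMSEA"]])
```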
Measures
We measured all responses on a 5-point scale from (1) strongly disagree to (5) strongly agree.
Relative advantage
We used four items adapted from [13] to assess the degree to which the innovation is perceived as being better than preceding innovations. For example, “[innovation] allows (would allow) me to accomplish tasks, such as [task], more efficiently,” (e.g., “Facial recognition technology allows (would allow) me to accomplish tasks, such as unlocking my phone, more efficiently”) (Phone Unlock, M = 3.64, SD = 0.66; Finances, M = 3.70, SD = 0.89; Social Media Filters, M = 2.99, SD = 0.93; Image Sensors, M = 3.58, SD = 0.78).
Compatibility
Four items adapted from [16] evaluated the perceived compatibility of image recognition-based software innovations with participants’ past experiences, beliefs, and values. For example, “[innovation] fits (would fit) well with the way that I like to [task],” (e.g., “Facial recognition technology fits (would fit) well with the way that I like to use my phone.”) (Phone Unlock, M = 4.05, SD = 0.95; Finances, M = 3.90, SD = 1.05; Social Media Filters, M = 3.19, SD = 1.16; Image Sensors, M = 3.77, SD = 0.88).
Complexity
We adapted four items from [12] to measure the perceived complexity of software-based innovations. These items assess how participants view the ease of understanding a technology. For example, “It is (would be) easy to get [innovation] to do what I want them to do when using them to [task],” (e.g., “It is (would be) easy to get facial recognition algorithms to do what I want them to do when using them to unlock a phone.”) (Phone Unlock, M = 3.91, SD = 0.72; Finances, M = 3.87, SD = 0.77; Social Media Filters, M = 3.68, SD = 0.78; Image Sensors, M = 3.66, SD = 0.76).
Observability
To assess participants’ perceived level of observability of software-based innovations, we measured four items related to the degree to which the results of innovations are observable to the participant (adapted from [29]). For example, “I am (would be) able to observe when others in my environment use [innovation] to [task],” (e.g., “I am (would be) able to observe when others in my environment use facial recognition technology to unlock a phone.”) (Phone Unlock, M = 3.42, SD = 0.85; Finances, M = 2.92, SD = 1.02; Social Media Filters, M = 3.51, SD = 0.95; Image Sensors, M = 3.40, SD = 0.85).
Trialability
We adapted four items from [14] to measure perceived trialability, which evaluates the ability to use software-based innovations before deciding to adopt them. For example, “I have (anticipate having) the ability to try out [innovation] to accomplish [task] before deciding whether I like it or not,” (e.g., “I have (anticipate having) the ability to try out facial recognition technology to unlock a phone before deciding whether I like it or not.”) (Phone Unlock, M = 4.06, SD = 0.75; Finances, M = 3.70, SD = 0.97; Social Media Filters, M = 3.64, SD = 0.88; Image Sensors, M = 3.63, SD = 0.84).
Reinvention
We measured four items adapted from [2] to evaluate participants’ perceived level of reinvention—the extent to which users can or do change or modify image recognition-based software innovations. For example, “I often have (anticipate having) to experiment with new ways of using [innovation],” (e.g., “I often have (anticipate having) to experiment with new ways of using facial recognition technology when using it to unlock my phone.”) (Phone Unlock, M = 2.58, SD = 0.84; Finances, M = 2.62, SD = 0.88; Social Media Filters, M = 2.77, SD = 0.85; Image Sensors, M = 2.85, SD = 0.81).
For the full texts of the initial measures see S2 Tables 1–4.
Results
To evaluate our data, we used standard fit criteria, considering models with an SRMR ≤ .08, a CFI and TLI ≥ .95, and an RMSEA < 0.08 a good fit [31–33].
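Expressed as a simple decision rule, these criteria amount to the following check; this sketch encodes only the thresholds above and is not drawn from any software used in the study.

```python
def acceptable_fit(srmr: float, cfi: float, tli: float, rmsea: float) -> bool:
    """Apply the fit criteria used here: SRMR <= .08, CFI/TLI >= .95, RMSEA < .08."""
    return srmr <= 0.08 and cfi >= 0.95 and tli >= 0.95 and rmsea < 0.08

# Hypothetical fit statistics:
print(acceptable_fit(srmr=0.05, cfi=0.96, tli=0.95, rmsea=0.06))  # True
print(acceptable_fit(srmr=0.09, cfi=0.93, tli=0.92, rmsea=0.07))  # False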
The results of the CFA suggested a reasonable, although not ideal, fit, with SRMR values exceeding 0.08 and TLI/CFI values falling below .95 across all models; the RMSEA for all models met acceptable fit criteria, falling below .08. See Table 1 for model fit statistics and Table 2 for factor loadings.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
To improve the model fit, we removed items with factor loadings less than 0.6. We removed the third relative advantage question for all contexts, e.g., “The disadvantages of using [innovation] to [task] (would) outweigh the advantages”; the fourth complexity measure for all contexts, e.g., “Using [innovation] to [task] is (would be) cumbersome”; the fourth trialability measure for all contexts, e.g., “I have not had much opportunity to try [innovation] to [task] in the past”; the fourth reinvention measure for all contexts, e.g., “I rarely have (anticipate having) to come up with novel ways to get [innovation] to work for me when using it to accomplish [task]”; and the first observability measure for the phone unlock and image sensor contexts, e.g., “Changes in others’ use of [innovation] are (would be) obvious to me.”
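This pruning step amounts to a simple filter over standardized loadings. The values below are hypothetical stand-ins; Table 2 reports the actual loadings.

```python
import pandas as pd

# Hypothetical standardized loadings for one context; Table 2 has the real values.
loadings = pd.DataFrame({
    "item": ["ra1", "ra2", "ra3", "ra4", "cx4", "tr4"],
    "factor": ["RelAdv", "RelAdv", "RelAdv", "RelAdv", "Complexity", "Trialability"],
    "loading": [0.78, 0.81, 0.42, 0.74, 0.55, 0.48],
})

retained = loadings[loadings["loading"] >= 0.6]
dropped = loadings[loadings["loading"] < 0.6]["item"].tolist()
print(dropped)  # ['ra3', 'cx4', 'tr4']
```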
The CFA on the remaining items suggested satisfactory fit on all goodness-of-fit statistics, with the SRMR less than 0.08 and TLI/CFI values at or above .95 across all models. As with the initial CFA, the RMSEA for all models met acceptable fit criteria, falling below .08. See Table 3 for model fit statistics and Table 4 for updated factor loadings. To ensure parsimony, we eliminated two observability items from the social media filter and finance contexts so that three questions measured observability across all contexts.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Study 1 discussion
In Study 1, we reviewed existing literature to generate questions for measuring the six perceptual innovation characteristics that underlie DOI theory. We also collected data to evaluate how the initial set of items performs across different types of software-based innovations. Study 1 offers initial support for a six-factor measure of perceptual innovation characteristics, consisting of relative advantage, compatibility, complexity, trialability, observability, and reinvention. The CFA results showed satisfactory model fit across all four contexts after item refinement, with an SRMR < 0.08, CFI/TLI ≥ 0.95, and RMSEA < 0.08; all items loaded well onto their respective factors (loadings > 0.6), and the six-factor structure was consistent across all four software-based contexts, suggesting adaptability. However, given the item modification, the scale requires further validation. Therefore, we cross-validated our results on an independent sample in Study 2.
Study 2: Replicating CFA and testing a nomological network
Study 2 employed the same procedure as in Study 1. The goal of this study is to use a new sample to evaluate the updated questions and to test for internal discriminant validity by examining the correlations between factors within each context. Additionally, Study 2 evaluates a nomological network of concepts [34] surrounding DOI.
Testing a nomological network
A nomological network is an interconnected system of theoretical constructs, observed variables and their relationships that allows researchers to test whether a measurement scale behaves as theory predicts [34]. In our context, testing the nomological network means examining whether our six DOI characteristics relate to other constructs (innovation use, algorithm awareness) in theoretically expected ways. This approach is crucial for scale validation because it demonstrates that our measures capture the intended theoretical constructs rather than other confounding factors. If our scale validly measures DOI characteristics, we should observe the pattern of relationships that DOI theory predicts. According to DOI theory, relative advantage, compatibility, complexity (reversed), trialability, observability, and reinvention should be positively associated with innovation use (in this case, the use of image recognition technology; H1) [2]. If these six DOI characteristics are not positively associated with innovation use, it could suggest measurement issues, contextual factors unique to image recognition technology, or potential moderating variables that influence these relationships.
It is also important to ensure that the scale does not measure similar, but potentially confounding, constructs [35], which we call external discriminant validity. One such construct that differs from the six innovation characteristics in our scale is the user’s understanding of how the technology works, which, in the case of software-based innovations, is the user’s algorithm awareness. Algorithm awareness refers to a user’s awareness of the underlying mechanisms and factors that influence how a particular piece of software functions. While this awareness might inform users’ perceptions of an innovation, it is distinct from the six DOI characteristics, which concern users’ subjective interpretation of the innovation. Therefore, we propose that relative advantage, compatibility, complexity, trialability, observability, and reinvention will be no more than moderately correlated (i.e., r < 0.30; [37]) with algorithm awareness (H2). We chose this threshold based on Cohen’s (1988) guidelines for interpreting effect sizes in the social sciences, where correlations of 0.10, 0.30, and 0.50 are considered small, medium, and large, respectively.
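To make the benchmark concrete, the sketch below computes a correlation on simulated data and checks it against Cohen’s medium benchmark; the score vectors are hypothetical stand-ins for DOI and awareness composites.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
doi_scores = rng.normal(3.5, 0.8, 200)  # hypothetical DOI composite scores
awareness = rng.normal(3.6, 0.8, 200)   # hypothetical algorithm awareness scores

r, p = pearsonr(doi_scores, awareness)
# H2 is supported when |r| stays below Cohen's medium benchmark of 0.30.
print(f"r = {r:.2f}, p = {p:.3f}, exceeds medium benchmark: {abs(r) >= 0.30}")
```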
Finally, to assess construct validity, we will examine perceptions of relative advantage, compatibility, complexity, trialability, observability, and reinvention across various software innovations. As mentioned previously, we expect the constructs associated with DOI to vary across different contexts, requiring customization. Therefore, we expect individuals to perceive these attributes differently depending on the innovation (H3). If we observe no significant differences between contexts, it may suggest that our scale lacks sensitivity to the unique characteristics of different software innovations or that the software innovations are perceived as having similar attributes.
Methods
Participants
We recruited 851 participants from Prolific, a web-based survey platform, aiming for a sample representative of the U.S. population in June of 2023. The sample was diverse in terms of gender (41% women, 56% men, 3% agender, transgender, non-binary, or unspecified), age (M = 36.17, SD = 12.47), and ethnicity (22% Asian, 26.6% Black/African American, 18% Latino, 24% White, 6% mixed White/Latino, 3.4% mixed race). Participants reported a median income between $50,000-$59,999, and the median education level was a bachelor’s degree. For more information about Prolific participants, see [36,37].
This research was reviewed and approved by the University of California, Santa Barbara Human Subjects Committee (Protocol #4-23-0221). The study was determined to be exempt under Category 2 of 45 CFR 46.104(d), which covers survey procedures involving adult participants. All participants provided written informed consent prior to participation.
Procedure
We asked participants about their perceptions of the six DOI characteristics across the same four image recognition-based software innovations as in Study 1: phone unlock, social media filters, finances, and image sensors. We conducted a CFA with maximum likelihood estimation using a variance-covariance matrix in Mplus [30]. We specified the CFA model with the same structure used in Study 1. We used standard fit criteria, considering models with an SRMR ≤ .08, a CFI and TLI ≥ .95, and an RMSEA < 0.08 a good fit [31–33].
Measures
All measures of the perceptual DOI characteristics in Study 2 are the same as in Study 1, with the exceptions noted in the results section of Study 1. In addition to the changes previously described, we dropped the parentheticals from all questions to enhance clarity; for example, “[innovation] fits (would fit) well with the way that I like to [task]” became “[innovation] fits well with the way that I like to [task]”. Study 2 included two additional measures: innovation use and algorithm awareness. Table 5 provides Cronbach’s alpha, means, and standard deviations for the DOI measures.
[Figure omitted. See PDF.]
Innovation use
We assessed the use of image recognition technology by asking participants how often they use 10 facial and image recognition technologies related to the four innovations (e.g., “In general, how often do you use the following technologies?” followed by items such as “automatic water dispenser”). We measured responses on a 5-point scale from never to always (Phone Unlock, M = 2.93, SD = 1.69; Finances, M = 2.54, SD = 1.62; Social Media Filters, M = 2.44, SD = 1.30; Image Sensors, M = 3.01, SD = 0.81).
Algorithm awareness
We assessed participants’ algorithm awareness by asking how 10 different factors—5 factors for image sensing technology and 5 for facial recognition technology—influence the output of facial recognition and image sensing technology. Algorithm awareness was divided into two categories: facial recognition algorithm awareness and image sensing algorithm awareness. Facial recognition technology and image sensing technology are influenced by similar, but not always identical, underlying factors. For example, facial recognition technology is influenced by physical features such as face shape, while image sensing technology is not. We adapted the scale prompt, “Generally speaking, how much INFLUENCE do you think the following factors have on the output or results of a [facial recognition or image sensing] algorithm,” from [38], basing the items on current literature that discusses the factors influencing facial recognition technology and image sensing technology. Example items include “Lighting conditions of the environment” and “Other phenotypical features, such as your face shape.” We scored responses on a 5-point scale from (1) strongly disagree to (5) strongly agree (Facial Recognition Algorithm Awareness, α = .68, M = 3.53, SD = 0.78; Image Sensing Algorithm Awareness, α = .68, M = 3.62, SD = 0.78).
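Reliabilities of the kind reported throughout (Cronbach’s alpha) can be computed directly from item-level responses. A sketch using the pingouin package, with hypothetical data:

```python
import pandas as pd
import pingouin as pg

# Hypothetical responses (1-5) to five awareness items from six participants.
items = pd.DataFrame({
    "aw1": [4, 3, 5, 4, 2, 4],
    "aw2": [4, 3, 4, 5, 2, 3],
    "aw3": [3, 3, 4, 4, 2, 4],
    "aw4": [4, 2, 5, 4, 3, 4],
    "aw5": [5, 3, 4, 4, 2, 3],
})

alpha, ci = pg.cronbach_alpha(data=items)
print(f"alpha = {alpha:.2f}, 95% CI = {ci}")
```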
Results
CFA
As seen in Table 6, the CFAs on the second sample had a good fit, with an SRMR ≤ .08, a CFI and TLI ≥ .95, and an RMSEA < 0.08 for all four contexts (refer to Table 7 for factor loadings).
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Internal discriminant validity
To test for internal discriminant validity, we examined the correlations between differing factors, which must not be too high [39]. Generally, a correlation of .85 or larger in absolute value indicates poor discriminant validity [33].
Overall, the scale had good discriminant validity, with all factor correlations within each context being less than .85 (see Table 8), with the notable exception of relative advantage and compatibility in the finances and phone unlock contexts, which correlated at .928 and .904, respectively. When two factors correlate above .85, one option is to combine them into a single factor, even if they are theoretically distinct constructs. Given the high correlations between relative advantage and compatibility in the finances and phone unlock contexts, we conducted an additional CFA for each of these two contexts to see if combining relative advantage and compatibility would significantly improve model fit. The resulting fit statistics were similar to, and in the case of phone unlock slightly worse than, those from treating relative advantage and compatibility as separate factors (Phone Unlock: χ2(67) = 693.681, p < .001, RMSEA = .068, CFI = .946, TLI = .935, SRMR = .047; Finances: χ2(67) = 594.921, p < .001, RMSEA = .062, CFI = .96, TLI = .95, SRMR = .041).
[Figure omitted. See PDF.]
Moreover, these factors are conceptually and theoretically distinct; the scale performs well when they are treated as separate entities, and they correlate above .85 in only two of the four technological contexts. The high correlation suggests a strong relationship but does not diminish their individual theoretical significance or practical utility in measuring separate aspects of the construct. Thus, we have decided to leave them as separate factors going forward.
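The .85 screening rule can be automated over a latent factor correlation matrix. The matrix below is hypothetical; Table 8 reports the observed values.

```python
import itertools
import pandas as pd

factors = ["RA", "CO", "CX", "TR", "OB", "RE"]
# Hypothetical factor correlations for one context; Table 8 has the real values.
corr = pd.DataFrame(
    [[1.00, 0.90, 0.55, 0.48, 0.40, 0.35],
     [0.90, 1.00, 0.60, 0.52, 0.44, 0.38],
     [0.55, 0.60, 1.00, 0.45, 0.35, 0.30],
     [0.48, 0.52, 0.45, 1.00, 0.42, 0.33],
     [0.40, 0.44, 0.35, 0.42, 1.00, 0.28],
     [0.35, 0.38, 0.30, 0.33, 0.28, 1.00]],
    index=factors, columns=factors,
)

flagged = [(a, b, corr.loc[a, b])
           for a, b in itertools.combinations(factors, 2)
           if abs(corr.loc[a, b]) >= 0.85]
print(flagged)  # [('RA', 'CO', 0.9)] -> inspect, possibly test a merged factor
```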
Relationships within a nomological network
We assessed predictive validity (H1) by using scores from the six DOI characteristics to predict use of image recognition technology. For each model, we used one of relative advantage, compatibility, complexity, observability, trialability, or reinvention (independent variable) to predict the use of facial recognition software to unlock a phone, facial recognition software to unlock a financial account, social media facial filters, or image sensors (dependent variables) in a linear regression. Consistent with our prediction, relative advantage, compatibility, trialability, observability, complexity, and reinvention each significantly and positively predicted the use of image recognition-based software innovations (see Table 9).
[Figure omitted. See PDF.]
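Each of these models reduces to a simple bivariate regression. A sketch with simulated data follows; the variable names and effect size are hypothetical, not the study’s data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
compatibility = rng.normal(3.8, 0.9, n)          # hypothetical DOI scores
use = 0.6 * compatibility + rng.normal(0, 1, n)  # hypothetical use outcome

fit = sm.OLS(use, sm.add_constant(compatibility)).fit()
print(fit.params)   # intercept and slope
print(fit.pvalues)  # significance of the DOI predictor
```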
To assess external discriminant validity (H2), we examined correlations between the six DOI characteristics and algorithm awareness. We ran a correlation analysis to examine the relationships between the facial recognition contexts (i.e., phone unlock, social media filters, and finances) and facial recognition algorithm awareness. Similarly, we analyzed the image sensing context and its correlation with image sensing algorithm awareness.
None of the facial recognition factors were significantly associated with facial recognition algorithm awareness. Only the trialability and observability of image sensing technologies were significantly correlated with image sensing algorithm awareness (see Table 10), and both effects were notably small (i.e., r < 0.30; Cohen, 1988), thus supporting the scale’s external discriminant validity (H2).
[Figure omitted. See PDF.]
Finally, to test H3, we conducted ANOVAs with Tukey HSD post hoc tests to determine if there were significant mean differences between the various software innovations. Each ANOVA tested the mean differences between one DOI attribute and perceptions of four different software innovations: image sensors, finance applications, phone unlocking features, and social media filters.
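The omnibus test plus pairwise comparisons can be sketched as follows; the group means and sample sizes are hypothetical, not the study’s data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
contexts = ["phone_unlock", "finances", "social_media", "image_sensors"]
# Hypothetical relative advantage scores, 250 respondents per context.
scores = {c: rng.normal(m, 0.8, 250) for c, m in zip(contexts, [3.6, 3.7, 3.0, 3.6])}

F, p = f_oneway(*scores.values())
print(f"F = {F:.2f}, p = {p:.4f}")

endog = np.concatenate(list(scores.values()))
groups = np.repeat(contexts, 250)
print(pairwise_tukeyhsd(endog=endog, groups=groups, alpha=0.05))
```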
We observed a significant effect of innovation type on relative advantage (F(3, 3379) = 66.58, p < .001, η2 = .056), with significant differences in perceptions of relative advantage between all innovations: image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and finances (p < .001), phone unlock and image sensors (p < .001), social media and image sensors (p < .001), and social media and phone unlock (p < .001).
We observed a significant effect of innovation type on compatibility (F(3, 3366) = 114.7, p < .001, η2 = .093), with significant differences in perceptions of compatibility between all innovations: image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and finances (p < .001), phone unlock and image sensors (p < .001), social media and image sensors (p < .001), and social media and phone unlock (p < .001).
We observed a significant effect of innovation type on complexity (F(3, 3377) = 18.73, p < .001, η2 = .016), with significant differences in perceptions of complexity between image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and image sensors (p < .001), and social media and phone unlock (p < .01). There were no significant differences between social media and finances (p = .08) or phone unlock and image sensors (p = .73).
We observed a significant effect of innovation type on trialability (F(3, 3382) = 49.08, p < .001, η2 = .042), with significant mean differences between image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and finances (p < .001), and social media and image sensors (p < .05). There were no significant differences between phone unlock and image sensors (p = .46) or social media and phone unlock (p = .60).
We observed a significant effect of innovation type on observability (F(3, 3385) = 166.1, p < .001, η2 = 0.128), with significant mean differences between all pairs of innovations except social media and phone unlock (p = .28): image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and finances (p < .001), phone unlock and image sensors (p < .001), and social media and image sensors (p < .001).
We observed a significant effect of innovation type on reinvention (F(3, 3379) = 3.423, p = .016, η2 = .003), with a significant mean difference between image sensors and finances (p < .05), but not between any of the other pairs: phone unlock and finances (p = .99), social media and finances (p = .25), phone unlock and image sensors (p = .058), social media and image sensors (p = .82), and social media and phone unlock (p = .34).
Study 2 discussion
Study 2 supports the six-factor measure of the perceptual DOI characteristics. The fit statistics of the CFAs meet the standard fit criteria, and the scale generally has good discriminant validity. This is further corroborated by all item loadings being greater than .6 and Cronbach’s alphas exceeding .75 for all six perceptual characteristics. Furthermore, Study 2 offers support for the validity of the scale within a nomological network. All factors significantly predicted the use of software-based innovations (H1) and showed, at most, small correlations with algorithm awareness (H2). Participants also displayed different mean ratings for relative advantage, compatibility, complexity, trialability, observability, and reinvention across the various innovations (H3).
Study 3
Study 3 replicates the analysis from Study 2. Given the significant differences between the populations of our first and second samples, both in sample size and demographic distribution, and because sample 1 was not assessed for discriminant validity or evaluated within the nomological network of variables related to the diffusion of innovations, we collected a third sample to replicate the results of Study 2.
Methods
Participants
We recruited 843 participants for this study in May of 2024. The sample was diverse in terms of gender (48% women, 51% men, 1% non-binary or unspecified), age (M = 36.75, SD = 11.4), and ethnicity (22.1% Asian, 27.8% Black/African American, 18% Latino, 23.2% White, 5.2% mixed White/Latino, 3.7% mixed race). Participants reported a median income between $60,000-$69,999, and the median education level was a bachelor’s degree.
This research was reviewed and approved by the University of California, Santa Barbara Human Subjects Committee (Protocol #4-23-0221). The study was determined to be exempt under Category 2 of 45 CFR 46.104(d), which covers survey procedures involving adult participants. All participants provided written informed consent prior to participation.
Procedure
Study 3 follows the same procedure as Study 2.
Measures
All measures in Study 3 are the same as in Study 2. Table 11 provides Cronbach’s alpha, means, and standard deviations for all measures in Study 3.
[Figure omitted. See PDF.]
Results
CFA
As seen in Table 13, the results of the CFA suggest satisfactory models on all goodness-of-fit statistics, with the SRMR less than .08, the RMSEA below .08, and TLI/CFI values at or above .95 across all models except image sensors, where the CFI and TLI are .945 and .931, respectively. Although these values are slightly below the ideal threshold, values above 0.90 are still considered indicative of acceptable model fit (Brown, 2006), especially when supported by other acceptable fit indices. Fit indices should be interpreted as continuous indicators of model fit rather than as rigid thresholds for accepting or rejecting a model. Refer to Table 12 for factor loadings.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Internal discriminant validity
The scale has good internal discriminant validity, with all factor correlations within each context being less than .85 (see Table 14), with the exception of relative advantage and compatibility in the finances and phone unlock contexts which, as in sample 2, correlated at .945 and .918, respectively.
[Figure omitted. See PDF.]
Relationships within a nomological network
We ran the same analysis in sample 3 as in sample 2 to assess predictive validity. Consistent with our prediction, as seen in Table 15, relative advantage, compatibility, trialability, observability, complexity, and reinvention significantly positively predicted the use of image recognition-based software innovations.
[Figure omitted. See PDF.]
We ran the same analysis in sample 3 as in sample 2 to assess external discriminant validity. As seen in Table 16, reinvention was significantly positively associated with facial recognition algorithm awareness for all innovations, and the observability of image sensors was significantly correlated with image sensing algorithm awareness. No other variables were significantly correlated with either facial recognition algorithm awareness or image sensing algorithm awareness. As with sample 2, all significant correlations had small effect sizes (i.e., r < 0.30; [37]), thus providing support for H2.
[Figure omitted. See PDF.]
We applied the same analysis to test H3 as we did in Study 2.
We observed a significant effect of innovation type on relative advantage (F(3, 3350) = 77.48, p < .001, η2 = .055), with significant differences in perceptions of relative advantage between all innovations except phone unlock and image sensors (p = .98): image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and finances (p < .001), social media and image sensors (p < .001), and social media and phone unlock (p < .001).
We observed a significant effect of innovation type on compatibility (F(3, 3341) = 110.7, p < .001, η2 = .090), with significant differences in perceptions of compatibility between all innovations: image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and finances (p < .001), social media and image sensors (p < .001), social media and phone unlock (p < .001), and phone unlock and image sensors (p < .01).
We observed a significant effect of innovation type on complexity (F(3, 3352) = 17.04, p < .001, η2 = .015), with significant differences in perceptions of complexity between image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and image sensors (p < .001), and social media and phone unlock (p < .001). However, there were no significant differences between social media and finances (p = .99) or phone unlock and image sensors (p = .59).
We observed a significant effect of innovation type on trialability (F(3, 3352) = 32.24, p < .001, η2 = .028), with significant mean differences between image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and finances (p < .001), and social media and phone unlock (p < .05). However, there were no significant differences between phone unlock and image sensors (p = .49) or social media and image sensors (p = .60).
We observed a significant effect of innovation type on observability (F(3, 3356) = 155.8, p < .001, η2 = .12), with significant mean differences between all pairs of innovations except social media and phone unlock (p = .26): image sensors and finances (p < .001), phone unlock and finances (p < .001), social media and finances (p < .001), phone unlock and image sensors (p < .001), and social media and image sensors (p < .001).
We observed a significant effect of innovation type on reinvention (F(3, 3352) = 10.19, p < .001, η2 = .009), with significant mean differences between image sensors and finances (p < .001), social media and finances (p < .05), phone unlock and image sensors (p < .001), and social media and phone unlock (p < .05). However, there were no significant differences between phone unlock and finances (p = .96) or social media and image sensors (p = .30).
Discussion
This paper develops a scale that measures the six perceptual characteristics outlined by DOI theory (perceived relative advantage, compatibility, complexity, trialability, observability, and reinvention) and is adaptable across a variety of innovations, from hardware-based to software-based. Although DOI theory and its related constructs are well established, researchers have struggled to measure DOI consistently and often fail to integrate reinvention. Previous scales measuring these characteristics either focused solely on one innovation, were not designed with all six perceptual characteristics in mind, or were not tested in software contexts. A strength of DOI theory lies in its ability to capture human perceptions of innovations across contexts; yet no scale for measuring DOI has been flexible enough to accommodate the wide range of innovations that we have seen and will continue to see as new technologies are created and old technologies are reformed.
Across three studies, we demonstrate that our six-factor structured questionnaire can be applied across different innovations with strong reliabilities (Cronbach’s alpha > .75 for all six perceptual characteristics across all innovations). The questionnaire was effectively situated within a nomological network. The DOI scale predicted innovation use, was conceptually unrelated to algorithm awareness, and revealed that participants perceived each innovation differently. A final version of the DOI scale can be found in Table 17 and in S1 Table 1, with questions that are ready to be adapted to a variety of contexts, especially those that are software-based.
[Figure omitted. See PDF.]
The final version of the scale contains three to four questions for each DOI characteristic. The questions demonstrate discriminant validity (meaning each factor is distinct from theoretically unrelated constructs) while also showing predictive validity by significantly predicting innovation use. Critically, the items are designed to allow researchers to apply questions across different types of innovations, including those that are software-based. Adapting the scale only requires the researcher to identify the innovation (e.g., facial recognition technology) and the task the innovation is designed to accomplish (e.g., unlocking my phone). Our study develops, tests, and adapts the scale using four types of image recognition software: facial recognition to unlock a phone, facial recognition to unlock a financial account, social media facial filters, and image sensors; however, the scale can also be adapted to other innovations (e.g., “[Facebook] allows me to accomplish tasks, such as [contacting my friends], more efficiently”). Other applications could include assessing the diffusion of new algorithms (e.g., predictive text algorithms) or emerging media (e.g., virtual environments). Our validated scale helps keep DOI theory methodologically equipped for the age of artificial intelligence by providing researchers with a standardized tool to measure all six theoretical constructs across contexts.
This study is not without limitations. While the scale generally exhibited good internal discriminant validity, relative advantage and compatibility had a factor correlation greater than .85 in the finances and phone unlock innovation contexts across both samples. Within the context of diffusion of innovations, compatibility and relative advantage are the most similar perceptual innovation characteristics, so a high correlation is expected. Yet they are conceptually distinct constructs and load well onto separate factors; therefore, we have decided to keep them separate. Future studies should investigate this further by measuring an innovation whose compatibility one would expect to differ from its perceived relative advantage. While our goal was to develop a DOI scale that is flexible to the range of innovations people use in their daily lives, we acknowledge that this scale may require further customization to be compatible with a given innovation.
Ultimately, this research develops a systematic approach for investigating the role of innovation characteristics in the diffusion of innovations that accommodates the study of hardware and software innovations and encompasses all six perceptual characteristics that contribute to innovation adoption (relative advantage, compatibility, complexity, trialability, observability, and reinvention). Across three studies, we constructed a validated and reliable scale to measure the six DOI characteristics, with particular attention to the reinvention construct, an often overlooked but conceptually critical component of diffusion theory. As algorithm-driven technologies such as artificial intelligence become pervasive, we anticipate that the capacity to modify or personalize these innovations will increasingly contribute to adoption decisions. By offering a generalizable tool grounded in psychometric evidence, this paper contributes both methodologically and theoretically to the study of how innovations are reshaped in practice. Future research can use this scale to investigate reinvention across diverse technologies, contexts, and populations.
Supporting information
S1 Table. Final Diffusion of Innovations Scale.
The final validated scale items for measuring all six DOI attributes.
https://doi.org/10.1371/journal.pone.0334616.s001
(DOCX)
S2 File. Initial Items used to make adaptable diffusion of innovations scale.
The list of initial items used to measure all six DOI attributes across all contexts.
https://doi.org/10.1371/journal.pone.0334616.s002
(DOCX)
References
1. Valkenburg PM, Oliver MB. Media effects theories: An overview. In: Media effects: Advances in theory and research. 4th ed. London: Routledge; 2019. p. 16–35.
2. 2. Rogers EM. Diffusion of Innovations. 5th ed. New York: Free Press.
3. 3. Nehme EK, Pérez A, Ranjit N, Amick BC III, Kohl HW III. Behavioral theory and transportation cycling research: Application of Diffusion of Innovations. Journal of Transport & Health. 2016;3(3):346–56.
* View Article
* Google Scholar
4. 4. Shelomi M. Why we still don’t eat insects: Assessing entomophagy promotion through a diffusion of innovations framework. Trends in Food Science & Technology. 2015;45(2):311–8.
* View Article
* Google Scholar
5. 5. Talebian A, Mishra S. Predicting the adoption of connected autonomous vehicles: A new approach based on the theory of diffusion of innovations. Transportation Research Part C: Emerging Technologies. 2018;95:363–80.
* View Article
* Google Scholar
6. 6. Kapoor KK, Dwivedi YK, Williams MD. Rogers’ Innovation Adoption Attributes: A Systematic Review and Synthesis of Existing Research. Information Systems Management. 2014;31(1):74–91.
* View Article
* Google Scholar
7. 7. de Vries H, Tummers L, Bekkers V. The Diffusion and Adoption of Public Sector Innovations: A Meta-Synthesis of the Literature. Perspectives on Public Management and Governance. 2018;1(3):159–76.
* View Article
* Google Scholar
8. 8. Finney Rutten LJ, Nelson DE, Meissner HI. Examination of population-wide trends in barriers to cancer screening from a diffusion of innovation perspective (1987-2000). Prev Med. 2004;38(3):258–68. pmid:14766107
* View Article
* PubMed/NCBI
* Google Scholar
9. 9. Min S, So KKF, Jeong M. Consumer adoption of the Uber mobile application: Insights from diffusion of innovation theory and technology acceptance model. J Travel Tourism Marketing. 2018;36(7):770–83.
* View Article
* Google Scholar
10. 10. Frei-Landau R, Muchnik-Rozanov Y, Avidov-Ungar O. Using Rogers’ diffusion of innovation theory to conceptualize the mobile-learning adoption process in teacher education in the COVID-19 era. Educ Inf Technol (Dordr). 2022;27(9):12811–38. pmid:35702319
* View Article
* PubMed/NCBI
* Google Scholar
11. 11. Yi MY, Fiedler KD, Park JS. Understanding the Role of Individual Innovativeness in the acceptance of IT‐Based Innovations: Comparative Analyses of Models and Measures*. Decision Sciences. 2006;37(3):393–426.
* View Article
* Google Scholar
12. 12. Moore GC, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Res. 1991;2(3):192–222.
* View Article
* Google Scholar
13. 13. Lin H-F. An empirical investigation of mobile banking adoption: The effect of innovation attributes and knowledge-based trust. International Journal of Information Management. 2011;31(3):252–60.
* View Article
* Google Scholar
14. 14. Atkinson NL. Developing a Questionnaire to Measure Perceived Attributes of eHealth Innovations. am j health behav. 2007;31(6):612–21.
* View Article
* Google Scholar
15. 15. Sanson-Fisher RW. Diffusion of innovation theory for clinical change. Med J Aust. 2004;180(S6):S55-6. pmid:15012582
* View Article
* PubMed/NCBI
* Google Scholar
16. 16. Huang L-Y, Hsieh Y-J. Consumer electronics acceptance based on innovation attributes and switching costs: The case of e-book readers. Electronic Commerce Research and Applications. 2012;11(3):218–28.
* View Article
* Google Scholar
17. 17. Lavoie AL, Dentzman K, Wardropper CB. Using diffusion of innovations theory to understand agricultural producer perspectives on cover cropping in the inland Pacific Northwest, USA. Renew Agric Food Syst. 2021;36(4):384–95.
* View Article
* Google Scholar
18. 18. McDaniel B, Rice RE. Managing organizational innovation: The evolution from word processing to office information systems. Columbia University Press. 1987.
19. 19. Rice RE, Rogers EM. Reinvention in the Innovation Process. Knowledge. 1980;1(4):499–514.
* View Article
* Google Scholar
20. 20. Rice RE, Zane T, Hoffmann H. Attention in business press to the diffusion of attention technologies, 1990–2017. 2018;26.
21. 21. Hays SP. Influences on Reinvention During the Diffusion of Innovations. Political Res Quarterly. 1996;49(3):631–50.
* View Article
* Google Scholar
22. 22. Koren H, Kaminer I, Raban DR. Exploring the effect of reinvention on critical mass formation and the diffusion of information in a social network. Soc Netw Anal Min. 2014;4(1).
* View Article
* Google Scholar
23. 23. Tornatzky LG, Klein KJ. Innovation characteristics and innovation adoption-implementation: A meta-analysis of findings. IEEE Trans Eng Manage. 1982;EM-29(1):28–45.
* View Article
* Google Scholar
24. 24. Hahn CL. Attributes and adoption of new social studies materials. Theory & Research in Social Education. 1977;5(1):19–40.
* View Article
* Google Scholar
25. 25. Fedorowicz J, Gogan JL. Reinvention of interorganizational systems: A case analysis of the diffusion of a bio-terror surveillance system. Inf Syst Front. 2010;12(1):81–95. pmid:32214878
* View Article
* PubMed/NCBI
* Google Scholar
26. 26. Boudreau M-C, Robey D. Enacting integrated information technology: a human agency perspective. Organization Science. 2005;16(1):3–18.
* View Article
* Google Scholar
27. 27. Shen L, Sun Y, Jürgens P, Zhou B, Bachl M. Taking communication science and research methodology seriously. Communication Methods and Measures. 2024;18(1):1–6.
* View Article
* Google Scholar
28. 28. Muthén LK, Muthén BO. How to use a monte carlo study to decide on sample size and determine Power. Structural Equation Modeling: A Multidisciplinary J. 2002;9(4):599–620.
* View Article
* Google Scholar
29. 29. Webster CA, Mîndrilă D, Moore C, Stewart G, Orendorff K, Taunton S. Measuring and comparing physical education teachers’ perceived attributes of cspaps: an innovation adoption perspective. J Teaching in Physical Education. 2020;39(1):78–90.
* View Article
* Google Scholar
30. 30. Muthén L, Muthén B. Mplus User’s Guide. Los Angeles: Muthén & Muthén. 1998.
31. 31. Fabrigar LR, Wegener DT, MacCallum RC, Strahan EJ. Evaluating the Use of Exploratory Factor Analysis in Psychological Research. Psychological Methods. 1999;4:272–99.
* View Article
* Google Scholar
32. 32. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary J. 1999;6(1):1–55.
* View Article
* Google Scholar
33. 33. Brown TA. Confirmatory factor analysis for applied research. Guilford Publications. 2015.
34. 34. Cronbach LJ, Meehl PE. Construct validity in psychological tests. Psychol Bull. 1955;52(4):281–302. pmid:13245896
* View Article
* PubMed/NCBI
* Google Scholar
35. 35. MacKenzie, Podsakoff, Podsakoff. Construct Measurement and Validation Procedures in MIS and Behavioral Research: Integrating New and Existing Techniques. MIS Quarterly. 2011;35(2):293.
* View Article
* Google Scholar
36. 36. Douglas BD, Ewell PJ, Brauer M. Data quality in online human-subjects research: Comparisons between MTurk, Prolific, CloudResearch, Qualtrics, and SONA. PLoS One. 2023;18(3):e0279720. pmid:36917576
* View Article
* PubMed/NCBI
* Google Scholar
37. 37. What are the advantages and limitations of an online sample?. https://researcher-help.prolific.com/hc/en-gb/articles/360009501473-What-are-the-advantages-and-limitations-of-an-online-sample- Accessed 2023 October 1.
38. 38. Cotter K. Algorithmic knowledge gaps: A new dimension of (digital) inequality. Int Journal of Communication. 2020;14.
* View Article
* Google Scholar
39. 39. Kline RB. Principles and Practice of Structural Equation Modeling. Guilford Publications. 2023.
Citation: Overbye-Thompson H, Hamilton KA (2025) A diffusion of innovations measurement scale for reinvention, relative advantage, compatibility, complexity, trialability and observability. PLoS One 20(10): e0334616. https://doi.org/10.1371/journal.pone.0334616
About the Authors:
Hannah Overbye-Thompson
Roles: Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Writing – original draft, Writing – review & editing
E-mail: [email protected]
Current address: Department of Communication, 4005 Social Sciences & Media Studies, University of California, Santa Barbara
Affiliation: University of California Santa Barbara, Santa Barbara, United States of America
ORCID: https://orcid.org/0000-0001-5235-9486
Kristy A. Hamilton
Roles: Funding acquisition, Supervision, Writing – review & editing
Affiliation: University of California Santa Barbara, Santa Barbara, United States of America
© 2025 Overbye-Thompson, Hamilton. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.