ABSTRACT
This paper considers a new perspective on the changing landscape of Information Systems (IS) education: the sustainability of IS degree programs. Some IS programs have struggled to balance uneven student enrollments with evolving employer needs. They are often misunderstood or mistaken for other computing degrees, and some are in business colleges whose deans consider them less important than other business degrees. Many IS programs have successfully confronted such challenges, but there are fewer IS programs in business schools now than there were a decade ago. Some have been eliminated, merged with other programs, reconceptualized, or moved to other colleges, and the continuance of others is at risk. Program sustainability frameworks have emerged in other fields to understand why some programs are more durable than others. In this paper, we explore the potential of using a sustainability framework developed for healthcare as a starting point for developing a program sustainability framework for IS education. We show that even modest modifications to the framework's assessment tool can shed light on factors related to IS degree programs' long-term success and that some of the framework's sustainability determinants may apply to IS programs. We describe some of the work needed to develop a framework and assessment tool for IS education and identify ways such a framework and assessment tool might be used by programs and IS education researchers.
Keywords: Program assessment & design, IS education, IS programs, Sustainable development goals
1. INTRODUCTION
Across their histories, information systems (IS) degree programs have had to adapt to a changing discipline landscape. Change and adaptations to IS courses and degree programs have been hallmarks of IS education over the past three decades (Freeman & Taylor, 2019). Program adaptations have included creating new courses; modifying degree program curricula; creating minors, emphasis areas and sub-disciplines; and engaging in impactful research streams. Some programs have leveraged educational partnerships with software toolmakers such as IBM, Microsoft, Oracle, SAP, SAS, Tableau, and Teradata to modify their courses or curricula. Others have forged relationships with employers and other stakeholders to ensure curricular relevance and to enable students and faculty to be connected to the world outside the university classroom.
According to Freeman and Taylor (2019), such changes and adaptations have made IS education and programs very different from what they were three decades ago, and some programs have navigated the changing IS landscape more successfully than others. For some, the changes have been overwhelming, leading to elimination, merger with other programs, or reconceptualization (e.g., as Informatics or Information Schools), while others continue to struggle. Boehler et al. (2020) report that the number of IS programs (majors, concentrations, minors) at AACSB-accredited business schools in the U.S. declined from 286 in 2011 (Bell et al., 2013) to 228 in 2019. They also note a decline in the number of IS programs meeting ACM IS 2010 Curriculum Guidelines. Similar declines have been observed for Australian universities (Richardson et al., 2018).
Zweben et al. (2021) report that from 2017 to 2020, IS degree program enrollments decreased by 8.5% and the number of IS bachelor's degree programs declined by 2.9%. In contrast, enrollments across computing degree programs increased by 9.7% and the total number of computing bachelor's degree programs increased by 5.7%. So, while opportunities for students to complete IS degrees in business schools have decreased, enrollments across computing disciplines have increased (Hol et al., 2024). This has occurred despite strong demand for graduates with IS competencies, and higher starting salaries than those of many other degrees, including most business degrees (Coursera, 2024; Mandviwalla et al., 2023).
Hol et al. (2024) contend that it is important for IS educators to come to grips with why IS program and student numbers are slipping even though job prospects for IS graduates are strong. Answering this question is important, especially for programs whose continuance is at risk. Possible causes of the declines include IS's relatively weak identity among computing disciplines (Babb et al., 2019), limited awareness or clarity about the IS discipline among individuals positioned to influence high school students' major and career choices (Burns et al., 2014), and competition from other computing disciplines, especially IT (Hol et al., 2024). However, these may only partially explain why some IS programs have more success navigating changes in the IS landscape than others. How do successful programs differ from those that struggle or no longer exist?
Lending et al. (2019) are some of the only IS education researchers who have attempted to consider IS education success from a program perspective. They describe an extensive self-examination of the CIS major at their university (James Madison University, JMU) which they consider a high-quality IS major. They identify and describe five contributing factors to the quality and long-term success of their program: (1) An integrated, rigorous curriculum with a strong technical foundation, (2) a strong community of faculty, students, alumni, and friends, (3) pedagogical scholarship, (4) commitment to assessment and continuous improvement, and (5) accreditation.
Lending et al. (2019) suggest that determinants of IS program sustainability may include:
* A quality curriculum that is domain-related and adaptive.
* The creation and maintenance of a positive community for students, faculty, and external program constituents (including alumni, employers, advisory board members, and sources of future students).
* Treating pedagogical research as legitimate, rather than as second-class research (research perceived as less valuable, less rigorous, or lacking in academic recognition), because it can contribute to a student-focused culture.
* A commitment to assessment and using assessment results to improve student learning.
* Accreditation because it provides external validation of program quality. Lending et al. (2019) value ABET accreditation of JMU's CIS program because it requires working with stakeholders to delineate the program's curriculum, desired outcomes, and adequacy of resources.
By focusing on a single IS degree program and university, the Lending et al. (2019) investigation is essentially a case study. Despite rich descriptions of the five ingredients, it is uncertain whether their recipe would work at other universities. However, the identified ingredients may have potential for inclusion as sustainability determinants in an IS program sustainability framework.
Declines in IS programs and student enrollments suggest that programs must work to ensure continued program relevance and success. Programs should remain vigilant and open to opportunities to improve. As Lending et al. (2019) note, it took many years for the CIS program at JMU to become a high-quality program, and the effort to maintain that quality is ongoing. This observation echoes program sustainability mindsets and frameworks that have emerged in other fields and we think that developing an IS program sustainability framework and assessment tool can contribute to better understanding of why some programs are more durable than others.
2. PROGRAM SUSTAINABILITY
Program sustainability has received considerable research attention in other fields, especially public health, where it emerged as a research topic in the late 1990s (e.g., Shediac-Rizkallah & Bone, 1998). It is an offshoot of implementation science that focuses on factors that promote long-term program viability post-implementation (Evashwick & Ory, 2003; Scheirer, 2005; Scheirer & Dearing, 2011).
Today, there is general agreement that program sustainability involves ensuring the continued use of program components, resources, and activities to meet stakeholder and community needs across time. Buck (2015) describes sustainable programs as: "Having the human, financial, technological, and organizational resources to provide services to meet needs and attain results towards mission on an ongoing basis; and acquiring the organizational and programmatic infrastructure to carry out core functions independent of individuals or one-time opportunities" (p. 2). This characterization of program sustainability aligns with the systems thinking espoused by Christensen and Raynor (2003), and Popin (2023) notes that systems thinking provides conceptual grounding for sustainable businesses, projects, programs, and other sustainability initiatives.
Like businesses, healthcare programs and organizations are usually characterized as open systems with identifiable inputs, outputs, and throughputs. They emerge in response to external needs, develop internal subsystems, processes, and operations to produce valued outputs (Katz & Kahn, 1966), and adapt their subsystems, processes, and operations in response to feedback and changing external and internal demands (Shelton et al., 2018). Like businesses and academic organizations, healthcare organizations and programs are affected by labor market conditions, legislation and government regulations, and population characteristics. Failure or inability to adapt to changes in their environments threatens their continued existence.
Schell et al. (2013) unveiled the Sustainability Framework to summarize sustainability determinants of public health programs. Although other sustainability frameworks exist (e.g., Buck, 2015; Office of Public Affairs, 2017), Schell et al.'s (2013) Sustainability Framework is the most widely used in healthcare program sustainability research.
In their discussion of the Sustainability Framework, Schell et al. (2013) reference the CDC Framework for Program Evaluation in Public Health. The CDC framework (Centers for Disease Control and Prevention, 2024) proposes standards for assessing the quality and effectiveness of public health programs. It is based on the third edition of the Program Evaluation Standards published by the Joint Committee on Standards for Educational Evaluation (Yarbrough et al., 2010). This indicates that Schell et al. (2013) developed the Sustainability Framework as a program evaluation tool specifically for public health programs.
Program evaluation is a recognized discipline with its own body of knowledge, methodologies, and professional standards. It focuses on determining the merit, worth, and significance of a program, project, or policy by assessing its efficiency, effectiveness, and impact. Program evaluation is often considered a transdisciplinary field because it develops tools and methodologies that can be applied to disciplines or domains beyond those in which they were initially used.
The Sustainability Framework identifies eight determinants (see Figure 1 and Table 1) that affect a program's ability to deliver valued outcomes across time. These determinants are general enough to have potential applicability to programs in other fields, including IS.
3. APPLYING SUSTAINABILITY THINKING TO IS DEGREE PROGRAMS
We think that adapting the Sustainability Framework could fast-track the development of a customized program sustainability framework for IS education. As Elrod et al. (2022) note, there is a "need for new measures in evaluating MIS education" (p. 368) and we think that considering IS programs from a sustainability perspective can explain why some IS programs thrive while others struggle or die.
If challenged to consider the applicability of the Sustainability Framework to IS programs, we suspect that many IS educators would agree that programs are more likely to experience long-term success when they have institutional and external support; adequate funding, staffing, leadership, and resources; beneficial partnerships; effective program evaluation and planning processes; effective communication with decision-makers and stakeholders; and when they think strategically about how to best adapt to changing stakeholder needs. Many would also agree that developing and maintaining programs with these characteristics requires long-term thinking and commitment.
The value of long-term thinking about IS programs and curricula is supported by IS education researchers, including Lending et al. (2019). Fichman et al. (2014) advocate leveraging fundamental and powerful concepts (FPCs) to guide the evolution of IS curricula over the long haul. Antonucci et al. (2004) also recognize the importance of long-term thinking when modifying IS curricula. This leads us to speculate that some IS educators may be receptive to considering IS programs and curricula from a sustainability perspective.
While we think there may be value in adapting the Sustainability Framework to facilitate the development of an IS program sustainability framework, we recognize that there are numerous issues associated with adapting theories, models, and frameworks from other disciplines for use in IS. It is important to acknowledge that public health and academic programs differ in key aspects:
* Public health programs focus on population health, disease prevention, and community well-being through research, policy, and community interventions. In contrast, academic degree programs emphasize knowledge acquisition and research, often preparing students for specialized careers or further study. They are designed for in-depth study and research in a specific academic discipline and seek to advance knowledge and expertise through scholarly activities and research. Their scope is narrower than public health programs by concentrating on specific areas of knowledge.
* While the public health domain focuses on the health of populations and utilizes various approaches to improve well-being, the IS domain focuses on designing, implementing, managing, and maintaining technology systems.
However, the public health and IS domains are not entirely distinct; health informatics, health information management, and biomedical informatics are examples of where the two domains intersect. These overlaps reflect the generalizability of IS concepts and competencies to different types of organizations (Topi, 2019) and the importance of information systems in public health organizations and programs. While differences in program types and domains add challenges to adapting the Sustainability Framework and PSAT (program sustainability assessment tool) to IS degree programs, they should not overshadow the potential benefits of developing an IS-specific program sustainability framework and assessment tool.
Truex et al. (2006) identify four important considerations for IS researchers who are contemplating theory adaptation: (1) the fit between the theory and the phenomenon of interest, (2) the theory's historical context, (3) its impact on research method choice, and (4) its contribution to cumulative theory. Table 2 summarizes how these apply to Schell et al.'s (2013) Sustainability Framework.
Although several theories (Systems Theory, Absorptive Capacity Theory, Resource Dependence Theory, and Stakeholder Theory) might be used to explain IS degree program durability and success, they have not been directly applied in this way. Hence, an adaptation of the Sustainability Framework has the potential to provide perspective that these theories have not.
An unaltered application of the Sustainability Framework to IS programs may contribute little to cumulative theorizing, but adapting it to create a framework and assessment tool customized to IS programs has the potential to make significant contributions to our understanding of long-term IS degree program success and durability.
While stopping short of concluding that the Sustainability Framework is an acceptable starting point for developing a program sustainability framework for IS education, we thought it had sufficient potential to justify exploring how PSAT items might be adapted for IS education. Our exploration, results, and discussion provide insights into how an adaptation might unfold, what it might include, and how it might be used to better understand factors related to IS program durability and success.
4. THE PROGRAM SUSTAINABILITY ASSESSMENT TOOL (PSAT)
Schell et al.'s (2013) Sustainability Framework has dominated healthcare program sustainability research since the early 2010s, and its Program Sustainability Assessment Tool (PSAT) has been the most widely used instrument for measuring healthcare program sustainability. Luke et al. (2014) adapted and validated a previously developed program sustainability assessment instrument to measure the eight dimensions of the Sustainability Framework, resulting in the PSAT. Hutchinson (2010) notes that the instrument on which the PSAT is based fared well in validation studies and had been used in previous research.
The PSAT's psychometric properties are superior to those of the best program sustainability assessment tools included in the Hutchinson (2010) compilation.
Luke et al.'s (2014) validation study was based on 386 PSAT responses representing 252 different programs. Confirmatory factor analysis indicated that the 40-item PSAT reliably measures the eight sustainability determinants. Internal consistency measures for the determinants are identified in Table 3. Bacon et al. (2022) performed a second validation study using 5,706 individual assessments of 2,892 programs collected between 2014 and 2019. The PSAT items loaded on the same sustainability determinants as in Luke et al.'s (2014) study and demonstrated strong internal consistency (see Table 3).
Because they were developed for program evaluation, both the Sustainability Framework and the PSAT have the potential to be leveraged to evaluate programs in disciplines or domains outside public health. Although validated using community public health program data, the PSAT may have utility as a program evaluation tool in other disciplines and program types. Therefore, its potential for adaptation to evaluate IS and other academic degree programs should not be overlooked.
The relatively general nature of the sustainability determinants in the Sustainability Framework (Figure 1; Table 1) enhances their applicability to programs outside healthcare. Accordingly, for our exploratory adaptation, we made minor wording changes to the PSAT items to better align them to IS programs. In the following sections, we refer to our mild PSAT adaptation as the PSAT-IS. The PSAT-IS items were embedded with other program-focused items in a survey instrument distributed to IS program administrators.
5. RESEARCH QUESTIONS, PSAT ADAPTATION AND METHODOLOGY
Although program sustainability has been a research topic for several decades, especially in healthcare, it has not been a direct focus of IS education research. There has, however, been some interest in the sustainability of information systems (e.g., Bauer et al., 2022). Lending et al.'s (2019) investigation aligns most closely with program sustainability thinking. However, the benefits of having an IS program sustainability framework and assessment tool have not yet been explored.
5.1 Research Questions
The primary purpose of our Sustainability Framework and PSAT adaptation was to explore the potential for developing a comprehensive sustainability framework for IS education. Specifically, the research questions were:
* RQ1. Can the Sustainability Framework (Schell et al., 2013) identify key sustainability determinants for IS degree programs?
* RQ2. Can an existing program sustainability measurement tool (the PSAT) be adapted to assess IS degree program sustainability?
RQ1 is partially addressed in Table 2. Further evidence of the framework's potential to serve as a starting point for identifying sustainability determinants for an IS program sustainability framework is provided in Sections 6, 7, and 8. These sections also include supportive evidence for RQ2.
5.2 PSAT Adaptation and Methodology
Since our primary goal was to explore adaptation of the Sustainability Framework and PSAT for use in IS education research, we reworded the 40 PSAT items to improve alignment with IS programs. No new items or sustainability determinants were added. Hence, the PSAT-IS is a mild adaptation of the PSAT.
To increase response rates and reduce potential frustration levels for respondents, we followed Babakus and Mangold (1992) and chose a 5-point Likert scale response format for the PSAT-IS. The original PSAT uses a 7-point Likert format with a "Not Applicable (NA)" option for each of its 40 items. The NA option was not included in the PSAT-IS items. Investigators often report negligible or statistically nonsignificant differences in internal consistency and reliability measures between 5- and 7-point Likert scale versions of the same instrument (Nunnally, 1967; Preston & Colman, 2000). Because this was an exploratory study, we had no major concerns about using a different response format. PSAT-IS items are identified in Table 4.
The PSAT-IS was embedded in an online survey created and administered with Qualtrics software. The survey also asked respondents to provide information about their institution (e.g., public vs. private; research intensiveness [R1, R2, etc.]), program (e.g., college/division location; accreditation, number of students; enrollment trends; number of graduates; graduation trends; number of faculty, staffing trends), and program impacts (e.g., placement rates; placement quality; research production; research quality; grantsmanship).
Because program administrators were best positioned to provide the desired information, we solicited a single response per program from the individual serving as department chair/head or as program coordinator. With the help of a graduate assistant, we compiled a list of 383 IS program administrators in the U.S. using several sources, including AISNET.org and Campus Explorer. Given the exploratory nature of the study, we determined that expanding the sample size was unnecessary.
In June 2022, email solicitations were sent to the 383 program administrators. Each included a link to the survey, informed consent language, and an overview of the study's rationale and intent. Email recipients were asked to complete the survey within two weeks. Three rounds of email reminders were distributed to those who had not opened or completed the survey during that period.
6. RESULTS
Since we received no "undeliverable" notices for our emails, we believe we had a clean list of IS program administrators. One solicitation reached someone in a different college and program who identified the correct individual to contact. A second respondent reported technical issues when attempting to complete the survey. Another declined to respond because his dean had eliminated the program. We received 13 automatic out-of-office responses, one indicating retirement and another reporting a move to a different university. In total, we removed 16 programs from our list as unreachable, reducing our total to 367. The survey was opened/started by 47 of the reachable administrators, a 12.81% response rate. Seventeen opened the survey but did not provide a response for any item. Thirty respondents provided responses for the PSAT-IS items (an 8.17% response rate) but 2 did not complete the items requesting information about their institutions and programs. In the end, 7.63% of the reachable program administrators completed both parts of the survey.
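The response-rate arithmetic above can be reproduced directly from the figures reported in this section (the variable and function names are ours, introduced only for illustration):

```python
# Response-rate figures from Section 6 of this study.
solicited = 383
unreachable = 16           # bounced, retired, moved, program eliminated, etc.
reachable = solicited - unreachable   # 367 reachable program administrators

opened = 47                # opened/started the survey
psat_complete = 30         # answered the PSAT-IS items
full_complete = 28         # completed both parts of the survey

def rate(n, d):
    """Response rate as a percentage, rounded to two decimal places."""
    return round(100 * n / d, 2)

print(rate(opened, reachable))         # 12.81
print(rate(psat_complete, reachable))  # 8.17
print(rate(full_complete, reachable))  # 7.63
```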
The 30 respondents represented a wide range of universities including well-known land-grant and regional universities from all parts of the U.S. Responses were also received from IS program administrators at several well-known and respected private universities. Some of the participating universities have IS Ph.D. programs, many have both master's and bachelor's degree programs, and some only offer undergraduate IS degrees. Responses arrived from both large and small universities and from programs of varying undergraduate enrollment. Given the diversity and geographic distribution, we consider the sample reasonably representative of IS degree programs in the U.S.
6.1 PSAT-IS and PSAT Psychometric Similarities
Because we were adapting a previously validated program sustainability instrument (the PSAT) rather than developing a new one from scratch, we compared the item loadings and internal consistency of the PSAT-IS to those reported for the PSAT. All PSAT-IS items loaded onto the same eight sustainability determinants as the PSAT items. The average internal consistency of the sustainability determinants measured by the PSAT-IS was 0.86 and ranged from 0.77 to 0.92 (see Table 3). These are comparable to those reported for PSAT validation studies (Bacon et al., 2022; Luke et al., 2014) and are considered good to very good.
PSAT validation studies allowed multiple responses for a single program; our PSAT adaptation (the PSAT-IS) did not. Nevertheless, because the PSAT-IS items are minor revisions of PSAT items, the similar item loadings and internal consistency measures are not surprising. Differences between the Cronbach's alphas for the PSAT-IS and PSAT may be attributable to different program types (IS degree programs vs. public health programs), response formats (5-point vs. 7-point Likert), and sample sizes. Overall, the PSAT-IS demonstrates psychometric properties comparable to those of the PSAT, supporting RQ2 and, by extension, RQ1.
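The internal consistency figures compared above are Cronbach's alphas. For readers unfamiliar with the statistic, a minimal computation looks like the following (the function and data are illustrative only; they are not our survey responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of k lists, each holding one item's scores across
    the same n respondents.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-respondent scale totals across all k items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))

# Two perfectly consistent items yield the maximum alpha.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```

Alphas of 0.77 to 0.92, as observed for the PSAT-IS determinants, fall in the range conventionally described as good to very good.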
Per item means for the 40 PSAT-IS items are summarized in Table 4. These varied from a low of 2.3 for "The program attracts supplemental funding from external sources" to a high of 4.3 for "The program regularly assesses the quality of its degrees and other services" and "The program makes decisions about which program components should be continued and which should not."
As Calhoun et al. (2014) note, a PSAT sustainability determinant score is calculated by averaging the responses to the five items used to measure it and the total program sustainability score is calculated as the average of its sustainability determinant scores. Hence, for the Sustainability Framework and PSAT, each sustainability determinant is viewed as an equal contributor to overall sustainability. We recognize that similar assumptions for sustainability determinant and overall program sustainability calculations may not be feasible for an IS program sustainability assessment instrument, but because our adaptation was exploratory, we used the same approach to calculate the PSAT-IS sustainability determinant and total program sustainability scores. PSAT-IS sustainability determinant and total program sustainability score averages are summarized in Table 5.
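Under the equal-weighting scheme just described, scoring reduces to nested averages. A sketch of the calculation we used for the PSAT-IS (assuming, as in the PSAT, that the 40 items are ordered so consecutive groups of five measure one determinant):

```python
def score_psat_is(responses):
    """Compute determinant scores and the total sustainability score.

    responses: list of 40 item scores (1-5 Likert), ordered so items
    1-5 measure determinant 1, items 6-10 determinant 2, and so on.
    """
    assert len(responses) == 40
    # Each determinant score is the mean of its five items...
    determinant_scores = [sum(responses[i:i + 5]) / 5 for i in range(0, 40, 5)]
    # ...and the total score is the unweighted mean of the eight
    # determinant scores, so each determinant contributes equally.
    total = sum(determinant_scores) / 8
    return determinant_scores, total

dets, total = score_psat_is([3] * 40)
print(total)  # 3.0
```

An alternative IS-specific instrument might weight determinants unequally (e.g., if funding stability matters more for academic programs than for health programs), which is one reason we flag this equal-weighting assumption above.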
The average PSAT-IS total score was 3.56. The Program Adaptation determinant had the highest average score (4.204) across programs, and the Funding Stability determinant had the lowest (3.094). These determinants also had the highest and lowest average scores in the Bacon et al. (2022) PSAT validation study. In both our study and Bacon et al.'s, the Program Evaluation determinant had the second highest average score, and the Strategic Planning determinant had the second lowest. These similarities support a positive response to RQ2.
6.2 PSAT-IS Program Sustainability Score Variability
We were concerned that the PSAT-IS would yield a result with IS degree programs closely bunched around the total program sustainability mean with little variation. Such a result, we thought, might indicate that the PSAT could not be successfully adapted. Figure 2 summarizes the frequency distribution of PSAT-IS total program sustainability scores for the 28 programs that provided responses for both parts of our survey and shows that our fears were unfounded.
Total scores for the 28 programs ranged from 2.4 to 4.9 and the scores were roughly evenly distributed on either side of the mean and modal score (3.6). Twelve (12) programs had scores from 2.4 to 3.5, 11 had scores of 3.7 to 4.9, and four had the mean and modal score (3.6). The distribution of PSAT-IS total scores suggests that IS programs in our sample are not closely bunched around the total program sustainability average and that programs with higher total scores may differ from programs with lower total scores. The findings also suggest that comparing the characteristics of IS degree programs with higher and lower total and sustainability determinant scores may provide interesting insights.
Collectively, the results summarized in Table 3, Table 5, and Figure 2 suggest that the answer to RQ2 is "yes"; the PSAT demonstrates potential to be adapted as a program sustainability assessment tool for IS programs. By extension, these results also provide supportive evidence for a positive response to RQ1. Examining IS program characteristics through the lens of the Sustainability Framework's sustainability determinants provides additional support for RQ1.
7. SUSTAINABILITY DETERMINANTS AND PROGRAM CHARACTERISTICS
Bacon et al. (2022) emphasize the importance of investigating the links between sustainability capacity (measured by the PSAT), program characteristics, and program outcomes. They use PSAT total and sustainability determinant scores to compare various healthcare program characteristics, including program types, sizes, and longevities. Their findings suggest that programs with a specific focus (such as diabetes or obesity) typically have higher total and sustainability determinant scores than programs with more general, less specific missions (such as community health programs that provide a wide range of health services). Other researchers report that programs with greater staffing and longevity usually have higher total and sustainability determinant scores than those with smaller staffs or more recent implementation (start) dates (Bacon et al., 2022; Luke et al., 2014; Tabak et al., 2016). Such studies motivated us to examine how PSAT-IS scores may be related to program characteristics captured by the second part of our survey instrument.
As noted in Section 6, survey responses were received from administrators of IS programs at respected, well-known, and geographically dispersed land-grant universities, and from IS program administrators at regional and private universities in the U.S. These universities and IS programs varied in size and in the types and ranges of IS degrees they confer. Sixty-four percent of our respondents administered IS programs at public universities and 36% were from private institutions. Seventy-five percent of the IS programs were housed in a business college/school; 14% were part of another college/school; and 11% reported that their department or program was in transition within their university's organizational structure or that their undergraduate and graduate degree programs were in different colleges. Seventy-one percent of the programs are accredited; 29% are not. More than half (57%) offer concentrations or emphases (e.g., Analytics, Cybersecurity, ERP).
Other items in the second part of the survey asked program administrators to identify several five-year trends for their programs, including student enrollment, faculty size, and research output trends. Positive five-year trends in these areas are more likely to be indicators of program success than negative trends.
Table 6 identifies the reported trends and suggests that fewer IS programs in our sample added faculty members over the previous five years than remained the same size or became smaller. It also suggests that more programs in our sample experienced stable or declining undergraduate enrollments than increases. A small majority of programs reported stable research productivity; approximately one third of the programs experienced increases, and the rest decreases.
It is arguable whether faculty size, student enrollments, and research output are valid proxies for program success, but they are commonly used in university settings. The following summarizes some of the reasons why:
* Student Enrollment - Declining student enrollments can put a degree program at risk for elimination (Pavlov & Katsamakas, 2020). When fewer students enroll in a program, less tuition is generated, and the program is less capable of covering the costs of faculty salaries, resources, and facilities. Declining enrollments can also jeopardize a program's ability to meet accreditation requirements (Pavlov & Katsamakas, 2020). Welding (2024) notes that universities often prioritize programs with higher enrollments to maximize resource efficiency and sometimes view programs with consistently low enrollments as less viable. Universities may choose to merge or eliminate low-enrollment programs to focus on those with higher demand or strategic importance.
* Faculty Size - Positive interactions between students and faculty significantly impact the academic success of students (e.g., Kim & Sax, 2017), and larger faculty size can potentially make more resources and diverse expertise available to students to enhance the quality of their experiences and research opportunities. Universities with higher faculty-to-student ratios often provide better student support and engagement opportunities, but students at larger universities with many faculty members often report finding it challenging to form close mentoring relationships (Raposa et al., 2021); programs with smaller numbers of faculty members sometimes provide more personalized attention and beneficial mentoring for their students. Hence, while faculty size can influence degree program success, the quality of student-faculty interactions and support systems available to students are also important. Programs that attract increasing numbers of students are more likely to hire additional faculty members to accommodate them, so it is not unreasonable to consider programs with growing faculties as successful programs.
* Research Productivity - Faculty research productivity, typically measured by publications, grants, and other scholarly activities, can impact the reputation and quality of an academic program. At research universities, faculty retention, promotion, and tenure are built upon a rewards system that emphasizes research productivity (Bergeron et al., 2014). Also, faculty members who are active researchers are positioned to provide students with cutting-edge knowledge and involve them in research projects; this can improve student engagement, learning outcomes, and postgraduation success (Smith, 2020). Research-productive faculty may also be well-positioned to offer better mentorship and networking opportunities for students, especially for graduate students, and to help them publish their work or advance their careers. Higher productivity can also enhance the reputation of a university and its programs, which can make it easier to attract students and funding (Hesli & Lee, 2011). So, increasing research productivity may contribute to academic program success.
Luke et al. (2014) observed that a valid program sustainability assessment instrument should produce scores that correlate with program success measures. Table 7 suggests that PSAT-IS total and sustainability determinant scores correlate with reported trends for faculty size and student enrollment, but not research output. Total scores and scores on seven of the eight sustainability determinants are significantly correlated with the faculty size trend measure. Total scores and scores for four of the sustainability determinants are significantly correlated with the enrollment trend measure.
The findings summarized in Table 7 suggest that some sustainability determinants from the Sustainability Framework may be candidates for inclusion in an IS program sustainability framework, and they provide supportive evidence for positive responses to both RQ2 and RQ1.
Figure 3 illustrates how the Sustainability Framework's sustainability determinants can be used to examine program differences. Here, PSAT-IS sustainability determinant scores are used to compare IS programs with increasing or decreasing faculty size, student enrollments, and research productivity. Figure 3 also illustrates how PSAT-IS sustainability determinant scores can be used to compare other program characteristics including university type (public vs. private), accreditation status (accredited vs. not accredited), and degree program concentrations (offered vs. not offered).
Table 8 (a, b, c) demonstrates how further statistical probing of the sustainability determinant patterns illustrated in Figure 3 can yield additional insights into program differences. Here, t-tests are used to compare the survey responses of dichotomous groups, such as accredited vs. non-accredited programs or programs with vs. without concentrations. Programs with increasing student enrollments and IS programs with decreasing enrollments form the two groups; programs with stable enrollments were excluded from the comparison. Because dichotomizing each program characteristic produced small groups, t-tests, which are better suited to small samples, were used instead of F-tests to compare sustainability determinant means. Since our PSAT adaptation was exploratory, we considered more powerful statistical assessments unnecessary.
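The group comparisons described above can be sketched compactly. The function below computes Welch's t statistic and its Satterthwaite degrees of freedom, the usual unequal-variance form of the t-test for small groups of unequal size; the group scores are invented for illustration and are not our survey data.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and approximate (Satterthwaite) degrees of freedom
    for two independent samples with possibly unequal variances and sizes."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical determinant scores for two dichotomous groups (invented data):
increasing = [5.8, 6.1, 5.4, 6.3, 5.9]  # e.g., programs with growing enrollment
decreasing = [4.2, 3.9, 4.6, 4.0]       # e.g., programs with shrinking enrollment

t, df = welch_t(increasing, decreasing)
print(f"t = {t:.2f}, df = {df:.1f}")  # refer t to the t-distribution for a p-value
```

In practice a statistics package would also return the p-value directly; the point of the sketch is that only group means and variances are involved.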
Numerous statistically significant t-test results were found for the dichotomous groups depicted in Figure 3; these are listed in Table 8 (a, b, c). Statistically significant results for the Environmental Support determinant were observed for five of the program characteristics; this suggests that it may be a candidate for inclusion as a determinant in an IS program sustainability framework. A similar conclusion might be reached for the Program Evaluation determinant. Other potential determinants for an IS program sustainability framework include Funding Stability, Communications, and Strategic Planning.
Table 8 (a, b, c) suggests that several sustainability determinants from the Sustainability Framework may be related to IS program accreditation. This may be consistent with Lending et al.'s (2019) identification of accreditation as a contributor to the quality and success of JMU's CIS program. Our findings suggest that accreditation may be a driver or outcome of environmental support, funding stability, and overall program sustainability.
The pattern of results in Table 8 (a, b, c) aligns with Table 7's correlations but illustrates how comparing dichotomous groups for program characteristics provides further insights into IS program differences. Overall, they provide evidence of a positive response to both RQ2 and RQ1.
From Figure 3 and Tables 7 and 8 (a, b, c), Program Adaptation and Organizational Capacity appear to be the least applicable sustainability determinants for IS programs, but this may not be the case. Program Adaptation has one of the highest means for each of the dichotomous groups in Figure 3. It appears to be important to all programs in our sample; similar patterns for Program Adaptation have been reported for healthcare programs (Bacon et al., 2022; Luke et al., 2014). It may also be unreasonable to dismiss Organizational Capacity as a sustainability determinant in an IS program sustainability framework because programs with decreasing faculty sizes and resources (less capacity) may be challenged to remain viable.
The results summarized in Figure 3 and Table 8 (a, b, c) suggest that programs with declining undergraduate enrollments differ from programs with increasing enrollments on most sustainability determinants, especially Environmental Support and Funding Stability. This suggests that IS programs with decreasing student numbers have less support from stakeholders and funding sources than programs with increasing student numbers, which may put them at risk for elimination or merger with other programs. The results in Figure 3 and Table 8 (a, b, c) also suggest that for most or all sustainability determinants, IS degree programs with increasing faculty sizes have higher scores than programs with decreasing faculty sizes. Staff size differences have also been found to contribute to healthcare program sustainability (Bacon et al., 2022; Tabak et al., 2016). In Figure 3 and Table 8 (a, b, c), increasing research productivity appears to be associated with higher levels of Environmental Support but not with other sustainability determinants. This suggests that environmental support is conducive to or results from higher research output; this may also align with Lending et al.'s (2019) contention that pedagogical research can contribute to long-term program quality.
8. LIMITATIONS AND DISCUSSION
The ability to reach definitive conclusions about adapting the Sustainability Framework and PSAT for IS education is constrained by several research limitations, which are discussed below.
8.1 Sample Size
With only 30 IS program overseers providing responses to the PSAT-IS items in our survey and only 28 providing answers to the items focusing on program characteristics, there is barely enough data for calculating basic psychometric properties for the PSAT-IS, such as Cronbach alphas. Using more sophisticated data analytic procedures (such as SEM and CFA) was considered unnecessary for our limited data set.
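For readers unfamiliar with the computation, Cronbach's alpha requires only the item variances and the variance of respondents' total scores, which is why even a small data set supports it. The sketch below uses invented responses, not our survey data.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item
    (aligned by respondent): alpha = k/(k-1) * (1 - sum(item var)/total var)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Invented responses: 3 items rated by 6 respondents (one row per item).
items = [
    [5, 4, 6, 3, 5, 4],
    [5, 3, 6, 2, 5, 4],
    [4, 4, 5, 3, 5, 3],
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

With 30 respondents, estimates like this are computable but carry wide confidence intervals, which is the substance of this limitation.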
8.2 Non-Random Sample
Our sampling method limits the generalizability of our findings. Because we only solicited responses from program administrators (e.g., department heads/chairs, program coordinators), our sample is not random, and it is impossible to dismiss response bias as a limitation. Some program administrators may have provided responses intended to create a favorable impression of their programs or leadership, and since our survey lacked response bias checks, the veracity of the data we analyzed may be uncertain. It is possible that administrators of successful IS programs were more inclined to complete the survey than administrators of struggling programs, but the variability observed in Figure 2 for total program sustainability suggests that our sample includes responses from both more and less successful programs.
8.3 Solicitation Method
Limiting our survey solicitation to one response per program and to program administrators may have been overly constraining and unwise. If we had also solicited survey responses from tenured faculty at the programs we tried to contact, our sample size and the number of programs providing data may have been larger, and the potential for response bias may have been reduced. Multiple responses per program were allowed in the Luke et al. (2014) and Bacon et al. (2022) PSAT validation studies.
8.4 Ability to Adequately Address RQ2
Our ability to definitively answer our research questions is limited. While our findings provide supportive evidence that the PSAT has the potential to be adapted for application to IS programs (RQ2), the items in our adaptation (the PSAT-IS) are mildly reworded versions of the original PSAT items rather than extensively reworded or new items that closely align with the unique characteristics and circumstances of IS degree programs. Also, since no modifications or additions were made to the Sustainability Framework's sustainability determinants, no significant steps were taken to customize the framework for IS programs. So, our best answer to RQ2 is that our findings suggest that PSAT-IS may be a starting point for the development of a program sustainability assessment tool for IS education and that some of the sustainability determinants measured by the PSAT-IS may be candidates for inclusion in an IS program sustainability framework.
8.5 Ability to Fully Address RQ1
While our findings for RQ2 provide supportive evidence for a positive answer to RQ1, RQ1 requires more than empirical data to be adequately addressed. RQ1 essentially asks whether a theory/framework from another field of study can be adapted and used in IS education research. Although the Sustainability Framework and PSAT were developed for program evaluation (a transdisciplinary practice), differences between domains (public health vs. IS) and program types (public health vs. academic degree programs) add complexity and challenges to the adaptation process. These challenges also bring theory adaptation considerations identified by Truex et al. (2006) into play. Table 2 summarizes the application of these considerations to a potential adaptation of the Sustainability Framework to facilitate the development of an IS program sustainability framework. The findings summarized in Figure 3 and Tables 8 (a, b, c) and 9 provide evidence that many of the Sustainability Framework's sustainability determinants are related to IS program characteristics and may be candidates for inclusion in an IS program sustainability framework. However, our answer to RQ1 might be different or more conclusive if we had applied the sustainability determinants to more or all of the program characteristics captured by our survey responses. Although our results demonstrate the potential of a program sustainability framework to provide a rich and different perspective on IS programs, considerable work is needed to fully and successfully adapt the Sustainability Framework.
Despite these limitations, our exploratory adaptation of the Sustainability Framework and PSAT has increased our desire to see an IS program sustainability framework and assessment tool developed. We think these could bring new perspectives and insights into how program attributes contribute to long-term success and the ability of programs to adapt to a changing IS landscape.
9. FUTURE DIRECTIONS
The development of an IS program sustainability framework and assessment tool that fits the unique characteristics of IS programs and their environments will require considerable work. Examples of this work are identification of sustainability determinants, customized assessment tool items, framework and assessment tool validation, and assessment tool delivery. Following is a more detailed discussion of each area of work.
9.1 Identification of Sustainability Determinants
The Sustainability Framework may be a useful starting point for developing an IS program sustainability framework and assessment tool. Our findings suggest that several of its sustainability determinants may be candidates for inclusion in an IS program sustainability framework, but considerable adjustments will be needed to align them with IS program realities. Each should be evaluated for inclusion or omission from an IS program sustainability framework; if retained, the degree of modification needed must be identified. Sustainability determinants from other program sustainability frameworks (e.g., Buck, 2015; Office of Public Affairs, 2017) should be evaluated for potential inclusion in an IS program sustainability framework. While some largely overlap those in the Sustainability Framework, others are sufficiently distinctive for separate consideration. If additional sustainability determinants are identified, they must be modified for application to IS programs and appropriate assessment tool items will have to be identified.
IS education research should also be leveraged to identify potential sustainability determinants for an IS program sustainability framework. If some are identified, appropriate assessment tool items will be needed to measure them.
9.2 Customized Assessment Tool Items
If sustainability determinants in the Sustainability Framework are designated for inclusion in an IS program sustainability framework, assessment tool items to measure them must be identified and validated. Some may be modified versions of PSAT or PSAT-IS items, but new items may also be needed. For example, if the Environmental Support determinant is retained, items used to measure it should account for the importance of support from IS program stakeholders (students, faculty members, college/university administrators, student advisors, alumni, employers, recruiters, accrediting organizations, etc.).
If the Communication determinant is retained, items used to measure it should focus on effective communication and engagement with program stakeholders and sources of new students. They should also measure the effectiveness of how the program markets/advertises its degrees and career opportunities.
If the Program Evaluation sustainability determinant is retained, the work done by Lending et al. (2019) suggests that there should be items that focus on effectiveness of assessment processes, the achievement of student learning outcomes, and accreditation. Topi (2019) implies that Program Evaluation items should also focus on the effectiveness of processes used to develop IS-specific competencies.
If the Organizational Capacity determinant is retained, the assessment tool should include items that address whether the program has the labs, IT infrastructure, and faculty expertise required to deliver current and planned courses.
9.3 Framework and Assessment Tool Validation
If a program sustainability framework is developed for IS education, a psychometrically sound and valid assessment tool will be needed, and the framework itself should also be validated. Established statistical procedures, such as the CFA and SEM analyses noted in Section 8.1, should be used for both.
9.4 Assessment Tool Delivery
If an IS program sustainability framework and assessment tool is developed, it may be expedient to consider having a hosted website where program sustainability assessments can be aggregated and summarized. The PSAT has a website (https://www.sustaintool.org/psat/) to enable benchmarking and determining how an individual program stacks up against other programs. A similar website for an IS program sustainability assessment tool might be hosted by a professional organization (e.g., AIS), a special interest group that focuses on IS education (e.g., AIS SIGED), or an IS program accrediting organization such as ABET.
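A minimal sketch of the benchmarking such a site could provide is a percentile rank of an individual program's total score against the aggregated pool. All scores below are hypothetical and serve only to show the mechanics.

```python
def percentile_rank(score, pool):
    """Percentage of pooled scores at or below the given score."""
    return 100 * sum(s <= score for s in pool) / len(pool)

# Hypothetical pool of aggregated total sustainability scores (1-7 scale).
pool = [3.9, 4.2, 4.8, 5.1, 5.3, 5.6, 5.9, 6.2, 6.4, 6.7]

rank = percentile_rank(5.6, pool)
print(f"A program scoring 5.6 is at the {rank:.0f}th percentile of the pool")
```

A hosted tool would add safeguards this sketch omits, such as anonymization of contributed scores and minimum pool sizes before benchmarks are shown.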
An IS program sustainability framework and assessment tool may be valuable for IS programs that are struggling with student enrollments or are at risk of elimination or being merged with other programs. However, it could also be leveraged by more successful programs to identify opportunities to further improve. An IS program sustainability framework and assessment tool could also be used by IS education researchers to address a wide range of program-level questions such as those identified in Table 9.
A program sustainability framework and assessment tool could also be a vehicle for considering how mainstream IS theories apply to IS degree programs. For example, like the Sustainability Framework, Resource Dependence Theory (RDT) addresses the ongoing need of organizations to acquire resources from their environments (Hillman et al., 2009; Pfeffer & Salancik, 2003). Absorptive Capacity Theory (Roberts et al., 2012) may also be a valuable lens for considering IS degree program viability. It has already been applied to other sustainability initiatives (Cooper & Molla, 2017). The interplay of program sustainability perspectives with these and other mainstream IS theories may foster deeper understanding of IS program durability and lead to new research.
Individual programs could use an IS program sustainability framework and assessment tool for self-examination, comparing themselves to other programs, and developing evidence-based action plans. Table 10 identifies examples of how such tools might be used by individual programs.
Like the Sustainability Framework and PSAT, a sustainability framework and assessment tool for IS is most likely to be used for program evaluation. Individual programs will be able to assess their standing on the framework's sustainability determinants and use their assessment results to develop improvement plans. However, an IS-specific framework and assessment tool may have limited utility in other disciplines or program types due to its targeted customization for IS programs.
Regardless, an IS-specific sustainability framework and assessment tool may have utility as models for evaluating other academic degree programs. Many universities periodically evaluate their academic degree programs and consider program characteristics identified as sustainability determinants in the Sustainability Framework. Hence, it is not unreasonable to consider whether a framework and assessment tool developed for IS degree programs could be adapted to evaluate other academic programs. An IS-specific sustainability framework and assessment tool may also have utility in ABET IS program evaluation.
10. CONCLUSIONS
We think that there is value in considering IS program durability from a program sustainability perspective. After considering the issues associated with adapting a theory developed outside of IS for use in IS and conducting an exploratory adaptation, we think that the Sustainability Framework could serve as a starting point for the development of an IS program sustainability framework. The minor changes we made to PSAT items to align them with IS programs produced insights into the relationships between sustainability determinants, program characteristics, and program success indicators. They also demonstrated the potential for including sustainability determinants from the Sustainability Framework in an IS program sustainability framework.
Considerable work is needed to develop an IS program sustainability framework and assessment tool that captures the realities and challenges of IS programs, but we think this work is warranted because it could be used by IS education researchers and individual programs in a variety of ways to help IS programs successfully navigate a changing discipline landscape.
11. REFERENCES
Antonucci, Y. L., Corbitt, G., Stewart, G., & Harris, A. (2004). Enterprise Systems Education: Where Are We? Where Are We Going? Journal of Information Systems Education, 15(3), 227-234.
Babakus, E., & Mangold, G. (1992). Adapting the SERVQUAL Scale to Hospital Services: An Empirical Investigation. Health Service Research, 26, 767-780.
Babb, J., Waguespack, L., & Abdullat, A. (2019). Invited Paper: Subsumption of Information Systems Education Towards a Discipline of Design. Journal of Information Systems Education, 30(4), 311-320.
Bacon, C., Malone, S., Prewitt, K., Hackett, R., Hastings, M., Dexter, S., & Luke, D. A. (2022). Assessing the Sustainability Capacity of Evidence-Based Programs in Community and Health Settings. Frontiers in Health Services, 2. https://doi.org/10.3389/frhs.2022.1004167
Bauer, E., Diesterhöft, T. O., Greve, M., & Kolbe, L. (2022). Towards Longevity of Public Health Information Systems - The Role of Stakeholder Commitment. PACIS 2022 Proceedings, 340. https://aisel.aisnet.org/pacis2022/340
Bell, C., Mills, R., & Fadel, K. (2013). An Analysis of Undergraduate Information Systems Curricula: Adoption of the IS 2010 Curriculum Guidelines. Communications of the Association for Information Systems, 32, 74-94. https://doi.org/10.17705/1CAIS.03202
Bergeron, D. M., Ostroff, C., Schroeder, T. D., & Block, C. J. (2014). The Dual Effects of Organizational Citizen Behavior: Relationships to Research Productivity and Career Outcomes in Academe. Human Performance, 27(2), 99-128. https://doi.org/10.1080/08959285.2014.882925
Boehler, J. A., Larson, B., & Shehane, B. E. (2020). Evaluation of Information Systems Curricula. Journal of Information Systems Education, 31(3), 232-243.
Buck, K. (2015). The Path to Program Sustainability. Conservation Impact and Nonprofit Impact, 1-10. http://conservationimpactnonprofitimpact.com/PathtoProgramSustainability.pdf
Burns, T., Gao, Y., Vengerov, A., & Klein, S. (2014). Investigating a 21st Century Paradox: As the Demand for Technology Jobs Increases Why Are Fewer Students Majoring in Information Systems. Information Systems Education Journal, 12(4), 4-16.
Calhoun, A., Mainor, A., Moreland-Russell, S., Maier, K. C., Brossart, L., & Luke, D. A. (2014). Using the Program Sustainability Assessment Tool to Assess and Plan for Sustainability. Preventing Chronic Disease, 11, 130185. https://doi.org/10.5888/pcd11.130185
Centers for Disease Control and Prevention. (2024). CDC Program Evaluation Framework, 2024. Morbidity and Mortality Weekly Report, 73(6), 1-37. U.S. Department of Health and Human Services. http://dx.doi.org/10.15585/mmwr.rr7306a1
Christensen, C. M., & Raynor, M. E. (2003). The Innovator's Solution: Creating and Sustaining Successful Growth. Boston, MA: Harvard Business School Press.
Cooper, V., & Molla, A. (2017). Information Systems Absorptive Capacity for Environmentally Driven IS-Enabled Transformation. Information Systems Journal, 27(4), 367-553. https://doi.org/10.1111/isj.12109
Coursera. (2024). Business Degree Salary: 2024 Guide. https://www.coursera.org/articles/business-degree-salary
Elrod, C. C., Stanley, S. M., Cudney, E. A., Hilgers, M. G., & Graham, C. (2022). Management Information Systems Education: A Systematic Review. Journal of Information Systems Education, 33(4), 357-370.
Evashwick, C., & Ory, M. (2003). Organizational Characteristics of Successful Innovative Health Care Programs Sustained Over Time. Family Community Health, 26(3), 177-193. https://doi.org/10.1097/00003727200307000-00003
Fichman, R., Dos Santos, B., & Zheng, Z. (2014). Digital Innovation as a Fundamental and Powerful Concept in the Information Systems Curriculum. MIS Quarterly, 38(2), 329-353. https://doi.org/10.25300/misq/2014/38.2.01
Freeman, L. A., & Taylor, N. (2019). Invited Paper: The Changing Landscape of IS Education: An Introduction to the Special Issue. Journal of Information Systems Education, 30(4), 212-216.
Hesli, V. L., & Lee, J. M. (2011). Faculty Research Productivity: Why Do Some of Our Colleagues Publish More Than Others? Political Science & Politics, 44(2), 393-408. https://doi.org/10.1017/S1049096511000242
Hillman, A. J., Withers, M. C., & Collins, В. J. (2009). Resource Dependence Theory: A Review. Journal of Management, 35(6), 1404-1427. https://doi.org/10.1177/0149206309343469
Hol, A., Richardson, J., Hamilton, M., & McGovern, J. (2024). Strengthening Undergraduate Information Systems Education in an Increasingly Complex Computing Disciplines Landscape. Communications of the Association for Information Systems, 54, 50-65. https://doi.org/10.17705/1CAIS.05403
Hutchinson, K. (2010). Literature Review of Program Sustainability Assessment Tools. Vancouver (BC): The Capture Project, Simon Fraser University.
Katz, D., & Kahn, R. L. (1966). The Social Psychology of Organizations. New York: John Wiley & Sons.
Kim, Y. K., & Sax, L. J. (2017). The Impact of College Students' Interactions With Faculty: A Review of General and Conditional Effects. In Paulsen, M. (Ed.), Higher Education: Handbook of Theory and Research, 32, Springer, Cham. https://doi.org/10.1007/978-3-319-48983-4_3
Lending, D., Mitri, M., & Dillon, T. W. (2019). Invited Paper: Ingredients of a High-Quality Information Systems Program in a Changing IS Landscape. Journal of Information Systems Education, 30(4), 266-286.
Luke, D. A., Calhoun, A., Robichaux, C. B., Elliott, M. B., & Moreland-Russell, S. (2014). The Program Sustainability Assessment Tool: A New Instrument for Public Health Programs. Preventing Chronic Disease, 11, 130184. https://doi.org/10.5888/pcd11.130184
Mandviwalla, M., Dinger, M., & Anderson, B. (2023). Information Systems Job Index 2022. IBIT, Temple University.
Nunnally, J. C. (1967). Psychometric Theory. New York: McGraw-Hill.
Office of Public Affairs. (2017) The OPA Framework for Program Sustainability. https://opa.hhs.gov/sites/default/files/2021-02/frameworkfor-program-sustainability.pdf
Pavlov, O. V., & Katsamakas, E. (2020). Will Colleges Survive the Storm of Declining Enrollments? A Computational Model. PLOS ONE, 15(8), e0236872. https://doi.org/10.1371/journal.pone.0236872
Pfeffer, J., & Salancik, G. R. (2003). The External Control of Organizations: A Resource Dependence Perspective. Stanford University Press.
Popin, M. (2023). Systems Thinking and Sustainability [Blog]. The Futurice Blog. https://www.futurice.com/blog/systems-thinking-and-sustainability
Preston, C. C., & Colman, A. M. (2000). Optimal Number of Response Categories in Rating Scales: Reliability, Validity, Discriminating Power, and Respondent Preferences. Acta Psychologica, 104(1), 1-15. https://doi.org/10.1016/S0001-6918(99)00050-5
Raposa, E. B., Hagler, M., Liu, D., & Rhodes, J. E. (2021). Predictors of Close Faculty-Student Relationships and Mentorship in Higher Education: Findings from the Gallup-Purdue Index. Annals of the New York Academy of Sciences, 1483(1), 36-49. https://doi.org/10.1111/nyas.14342
Richardson, J., Burstein, F., Hol, A., Clarke, K. J., & McGovern, J. (2018). Australian Undergraduate Information Systems Curricula: A Comparative Study. In 27th International Conference on Information Systems Development (ISD2018). https://doi.org/10.1007/978-3-030-22993-1_2
Roberts, N., Galluch, P. S., Dinger, M., & Grover, V. (2012). Absorptive Capacity and Information Systems Research: Review, Synthesis, and Directions for Future Research. MIS Quarterly, 36(2), 625-648. https://doi.org/10.2307/41703470
Scheirer, M. A. (2005). Is Sustainability Possible? A Review and Commentary on Empirical Studies of Program Sustainability. American Journal of Evaluation, 26(3), 320-347. https://doi.org/10.1177/1098214005278752
Scheirer, M. A., & Dearing, J. W. (2011) An Agenda for Research on the Sustainability of Public Health Programs. American Journal of Public Health, 101(11), 2059-2067. https://doi.org/10.2105/ajph.2011.300193
Schell, S., Luke, D., Schooley, M., Elliott, M., Herbers, S., Mueller, N., & Bunger, A. (2013). Public Health Program Capacity for Sustainability: A New Framework. Implementation Science, 8(15), 1-9. https://doi.org/10.1186/1748-5908-8-15
Shediac-Rizkallah, M. C., & Bone, L. R. (1998). Planning for the Sustainability of Community-Based Health Programs: Conceptual Frameworks and Future Directions for Research, Practice and Policy. Health Education Research, 13(1), 87-108. https://doi.org/10.1093/her/13.1.87
Shelton, R. C., Cooper, B. R., & Stirman, S. W. (2018). The Sustainability of Evidence-Based Interventions and Practices in Public Health and Health Care. Annual Review of Public Health, 39, 55-76. https://doi.org/10.1146/annurev-publhealth-040617-014731
Smith, E. (2020). The Intersection of Faculty Success and Student Success. International Journal of Multidisciplinary Perspectives in Higher Education, 5(1), 180-183. https://doi.org/10.32674/jimphe.v5i1.2533
Tabak, R. G., Duggan, K., Smith, C., Aisaka, K., Moreland-Russell, S., & Brownson, R. C. (2016). Assessing Capacity for Sustainability of Effective Programs and Policies in Local Health Departments. Journal of Public Health Management and Practice, 22(2), 129-137. https://doi.org/10.1097/phh.0000000000000254
Topi, H. (2019). Invited Paper: Reflections on the Current State and Future of Information Systems Education. Journal of Information Systems Education, 30(1), 1-9.
Truex, D., Holmstrom, J., & Keil, M. (2006). Theorizing in Information Systems Research: A Reflexive Analysis of the Adaptation of Theory in Information Systems Research. Journal of the Association for Information Systems, 7(12). https://doi.org/10.17705/1jais.00109
Welding, L. (2024). U.S. College Enrollment Decline: Facts and Figures. BestColleges.com. https://www.bestcolleges.com/research/collegeenrollment-decline/
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2010). The Program Evaluation Standards: A Guide for Evaluators and Evaluation Users (3rd ed.). Thousand Oaks, CA: Corwin Press.
Zweben, S. H., Tims, J. L., Tucker, C., & Timanovsky, Y. (2021). ACM-NDC Study of Non-Doctoral-Granting Departments of Computing. ACM Inroads, 12(4), 30-44. https://doi.org/10.1145/3485245
Copyright EDSIG 2025