Abstract
Artificial Intelligence (AI) is significantly reshaping work settings, influencing the context, conditions, and content of various professional roles, making it crucial to assess AI's effect on academic work. This study explores AI's application within teaching and research tasks in academia. Specifically, it pursues two objectives: (1) to identify and describe both current and prospective AI systems in higher education, and (2) to characterize the opportunities and risks of integrating AI into academic environments. Interviews were conducted with 28 participants from Portugal, the Netherlands, and the United States. The questions addressed AI's influence on Ethical Principles and Decent Work Dimensions. Results were analyzed through the Socio-Technical Systems Approach. Interviews were coded, analyzed for sentiment, and clustered into seven participant profiles based on coding similarities: "Optimists," "Moderates," "Dreamers," "Cautious Skeptics," "Expansionists," "Knowledgeable," and "Strategists." Findings emphasize the importance of aligning technology with human needs to achieve successful AI integration. They also point to the value of well-defined guidelines, fair funding, and continuous professional development. By illustrating the spectrum of attitudes and readiness levels among academic stakeholders, this study offers key insights for policymakers, administrators, and educators seeking to embrace AI while preserving Ethical Principles and Decent Work standards.
Introduction
Artificial Intelligence (AI) is the science of building rational systems—whether software-based or integrated into hardware—that can perceive their surroundings, reason about data, and act autonomously or semi-autonomously to achieve goals, often adapting their behavior based on feedback from the environment (High-Level Expert Group on Artificial Intelligence, 2019). AI already influences multiple facets of society (Bhardwaj et al., 2020; Maity, 2019), and technology has been identified as one of the main drivers of change in work environments over the next three years (World Economic Forum, 2023). This underscores the urgency of investigating AI's broader implications within academic contexts, which this paper now turns to explore.
In this study, we seek to acquire contextual knowledge of the current and prospective AI applications used in academia. Furthermore, we aim to characterize in depth the impact of AI on core academic activities, based on the perspectives of three groups. To do this, we consider Ethical Principles as defined by Khan et al. (2022) and the Decent Work (DW) Dimensions as presented by dos Santos (2019). Participants include AI experts (AIE), AI experts who are professors (AIEP), and professors who are not AI experts (NEP). Finally, we interpret our findings through the lens of the Socio-Technical Systems (STS) Approach (Sony & Naik, 2020) to provide a comprehensive understanding of how AI shapes, and will continue to shape, the evolution of teaching and research in academia. Having established the study's objectives, we now examine how AI is influencing future directions in higher education.
AI and the future of higher education
AI is currently being explored for academic work, with some authors (Berens et al., 2023; Calvert et al., 2020; Thomas et al., 2023; Yu & Nazir, 2021) identifying positive outcomes such as personalized education, increased accessibility of the teaching-learning process, enhanced data analysis, and a quicker peer-review process (Pisica et al., 2023). Nevertheless, multiple scholars have highlighted risks such as diminished emotional engagement, isolation, and copyright concerns (Barrios Tao et al., 2019; Bearman et al., 2022; Bjola, 2021; Lund et al., 2023; Toms & Whitworth, 2022; Zawacki-Richter, 2019).
While numerous studies address these issues in general educational settings, there is a growing need to consider how they manifest in university-level teaching and research (Mumtaz et al., 2025). Pisica et al. (2023) already observed that academics' concerns about AI encompass the teaching-learning process, research, skills and competencies, inclusion, costs and efficiency, socio-psychological effects, data security aspects, and staff redundancy. Additionally, recent studies highlight that integrating AI into educational research demands careful attention to transparency, accountability, and authorship practices (Balta, 2023). This underscores the importance of ethical and human considerations, which we now examine in more detail.
The ethical realm in the application of AI
Ethics pertains to the exploration of moral principles, from which guiding norms, policies, and regulations rooted in best practices are conceived (Siau & Wang, 2020). AI will impact social systems, many of which are fragile and incredibly complex, thereby introducing ethical challenges (Pastor-Escuredo, 2021). Given these complexities, ethical considerations must be integrated into the earliest stages of AI design and implementation within academia.
In line with the findings of Khan et al. (2022), four core ethical principles emerge as critical for the responsible design and deployment of AI systems: transparency, privacy, accountability, and fairness. Transparency involves making AI decisions and processes understandable, fostering stakeholder trust, and enabling oversight. Privacy concerns the safeguarding of personal data, ensuring that individuals remain protected from unwarranted surveillance or misuse of their information. Accountability requires that clear lines of responsibility exist for AI-driven decisions, so adverse outcomes can be rectified, and developers, organizations, or users can be held answerable for system errors or biases. Fairness highlights the importance of just treatment across diverse groups, mitigating potential discrimination within algorithmic processes.
Beyond these principles, recent research highlights the socio-technical complexity of AI deployment in academia, pointing to the importance of collaborative governance, rigorous stakeholder engagement, and clear frameworks to guide the responsible use of generative AI in academic work (Bansal & Heath, 2023; Eacersall et al., 2024; Pastor-Escuredo, 2021). This deployment, while offering several gains, also heightens concerns and can provoke resistance among teachers, researchers, and administrators due to copyright risks, reduced originality, and other ethical challenges (Khatri & Karki, 2023; Verboom & Rebelo, 2025). In devising strategic guidelines for AI in academia, studies highlight the importance of aligning institutional autonomy with stakeholder engagement (Li et al., 2024). Chinta et al. (2024) emphasize that effective AI integration requires mapping the technology's uses in higher education and gathering in-depth insights from both expert and non-expert faculty to prevent labor inequities. This aligns with Radanliev et al. (2024), who argue that the development of responsible AI is best achieved through an iterative framework combining technological safeguards with institutional checks. Such an integrated approach to ethical AI connects to broader labor concerns, leading us to consider how DW Dimensions frame our understanding of AI's impact.
Decent work dimensions in the context of AI
The DW framework translates human rights into the labor domain and expresses people's aspirations for working life (dos Santos, 2019). Following the definition by the International Labour Organization, DW includes seven psychological dimensions concerning work content, conditions, and context (Ferraro et al., 2018). As one of the 2030 Sustainable Development Goals, DW is a legitimate lens for research in the Work, Organizational, and Personnel Psychology (WOP-P) field. In this context, DW allows us to assess how AI might impact dignified working conditions.
The DW Dimensions, as presented by dos Santos (2019), each capture critical aspects of promoting dignified and fair employment conditions. Fundamental Principles and Values at Work center on justice, equality, respect, and non-discrimination in labor relations, ensuring the protection of fundamental rights and human dignity. Adequate Working Time and Workload highlights the importance of balanced scheduling and manageable responsibilities, recognizing that excessive or poorly structured tasks jeopardize workers' well-being. Fulfilling and Productive Work emphasizes the extent to which tasks are motivating, meaningful, and conducive to personal and professional growth, thereby driving employee satisfaction and enhancing organizational outcomes. Meaningful Remuneration for the Exercise of Citizenship concerns fair compensation that enables workers to participate actively in society and meet their personal and familial needs. Social Protection involves the provision of adequate safety nets—such as health insurance, unemployment benefits, and pension schemes—to shield workers and their families from economic risks. Opportunities for career advancement, entrepreneurship, and alternative employment expand individuals' freedom of choice and foster professional growth. Lastly, Health and Safety reinforces the importance of secure, hazard-free working environments and the promotion of physical and psychological well-being. By outlining these pillars, we see how each dimension can be directly impacted, either positively or negatively, by AI-driven changes in the workplace.
Although much of the literature on DW predates today's surge in AI, scholars are increasingly examining how AI and digitalization influence DW (Ghosh & Sadeghian, 2024). On one hand, advanced digital tools can improve health and safety, reduce excessive workloads, and democratize information-sharing, potentially enhancing job security and professional development opportunities. On the other hand, AI can give rise to new forms of precarious labor, excessive surveillance, and skill obsolescence, undermining employees' sense of dignity (Ghosh & Sadeghian, 2024). Similarly, the effect of AI on "Fulfilling and Productive Work" can be double-edged: while AI may automate repetitive tasks and allow employees to focus on stimulating assignments, it can also erode specific roles or, if poorly designed, strip employees of independence and exacerbate inequalities. Given AI's transformative power, researchers emphasize the need to further study its influence and to conceive good integration practices (Braganza et al., 2021; Deshpande et al., 2021; Özkiziltan & Hassel, 2021). These concerns align closely with the United Nations' Sustainable Development Goal 8 (SDG 8), which promotes decent work and inclusive economic growth within the framework of the 2030 Agenda (United Nations, 2015). This emphasis on integration strategies directs attention to the STS Approach, which evaluates AI through social and technical lenses.
Socio-technical systems theory and AI applications
The STS framework examines the interplay between social systems and technology. The theory emphasizes joint optimization, meaning that productivity and well-being can only be maximized when the design of the technical subsystem (e.g., automated workflows and digital infrastructures) is balanced with attention to the social subsystem (e.g., skills, job design, culture, and teamwork). Recent contributions identify six interrelated STS dimensions—people, infrastructure, technology, culture, processes, and goals—and highlight boundary management as organizations continuously adapt to external forces such as competition, regulation, and economic volatility (Sony & Naik, 2020). By prioritizing the synergy between people and technology, STS explains why purely technical solutions might overlook crucial human factors in AI adoption.
Currently, the power of STS is particularly visible in the rapid adoption of AI across industries (Makarius et al., 2020). AI and related digitalization efforts often inspire restructured workflows or fully reimagined business models; however, STS principles caution that these transformations demand careful alignment with social elements such as worker involvement, trust, upskilling, and ethical considerations. For example, self-managing or semi-autonomous teams—an enduring STS concept—have been proposed as a robust way to integrate AI while preserving human oversight, collaboration, and creativity (Appelbaum, 1997; Makarius et al., 2020). In this sense, the technology must not be an isolated “black box” but rather an asset co-evolving with social structures such that employees can apply their judgment, handle contextual complexities, and adapt the technology to real-world variances. Thus, any effective AI integration strategy must account for these social intricacies, aligning with ethical and DW considerations to achieve sustainable outcomes.
The STS approach can foster trust between people and AI, a factor that will be crucial for adoption (Herrmann & Pfeiffer, 2022; Andras et al., 2018). In this context, the effective application of STS requires the active involvement of stakeholders (Mumford, 2006). Sustainable benefits come from a holistic approach that bridges technical infrastructures and social practices, supports mutual learning processes, and sustains ongoing dialogue about purpose and ethics (Sony & Naik, 2020). By attending to the nexus of people, processes, technology, and environment, organizations can avert the pitfalls of "purely technical" initiatives and create more adaptive, innovative, and socially attuned futures through AI. This perspective is critical in today's rapidly evolving technological landscape, reminding us that how AI is deployed—and how it harmonizes with human work—is at least as important as the specific functionalities AI can deliver. Building on these socio-technical insights, we now consolidate our research objectives to study AI's transformative role in academic contexts.
Understanding AI's impact in academia: research objectives
Considering the perspectives reviewed in the preceding sections, we address the following overarching research questions: How do various stakeholders perceive the impact of AI in academic tasks of teaching and research, considering Ethical Principles and DW Dimensions? How might these perceptions, viewed through the STS Approach, inform the future evolution of teaching and research? To answer these questions, we conduct in-depth interviews, drawing on the perspectives of stakeholders. These allow us to explore how they experience and anticipate the integration of AI into academic tasks.
Method
We explored perceptions of the impact of AI on the academic tasks of teaching and research by engaging three complementary groups of participants—AIE, AIEP, and NEP—across three national contexts: Portugal, the Netherlands, and the United States. We selected these countries not only for their distinct AI landscapes but also to capture a spectrum of academic norms, resources, and policy environments: Portugal offers a perspective from Southern Europe; the Netherlands provides a robust digital infrastructure and progressive policy environment; and the United States is renowned for its global leadership in scientific output and technological innovation. Our aim was to produce a thematic synthesis across different academic settings rather than a comparative cross-national analysis. By incorporating these contexts, the study captures a broader spectrum of institutional practices and policy environments that shape AI's role in higher education.
Participants
Participants were recruited through a snowball sampling strategy in which existing contacts referred additional suitable interviewees. Eligibility criteria were as follows: AIE were required to have a minimum of a Bachelor’s degree in fields such as Computer Science or Engineering and at least three years of professional engagement with AI. AIEP held a Ph.D. in a related field and also had at least three years of AI work experience in academic settings. NEP had a Ph.D. in any field and a minimum of three years of experience in teaching and research without specialized AI expertise. The aim was to capture a range of viewpoints, from those deeply embedded in AI development and instruction to those who might integrate AI tools into academic practice in a more peripheral way.
Data collection tools
The interview guide and questions were developed by the researchers, considering both the DW Dimensions (dos Santos, 2019) and the Ethical Principles identified by Khan et al. (2022). The team focused on the most pertinent of these Ethical Principles, balancing depth of inquiry against practical constraints such as the overall length of the interview, so as not to impose an excessive burden on participants or compromise participation rates and the richness of the data collected. We developed the materials with the aim of exploring how AI affects teaching and research across key areas such as advantages/disadvantages, adoption factors, and ethical implications (e.g., transparency, privacy, accountability, fairness).
Data collection procedure
Data collection took place in 2024, after approval from the Ethics Committee of the university. Semi-structured interviews were used to gather in-depth perspectives, allowing for both consistency in questioning and flexibility to explore emergent themes. The interview questions were employed uniformly with all three participant groups. Each interview lasted approximately one hour and was conducted either face-to-face or through online communication platforms, depending on participant availability and geographical constraints. Participants received a brief description of the DW Dimensions pertinent to AI integration prior to the interview, enabling them to reflect on these concepts in advance. The interview questions were also shared with participants a few minutes before each session to help them prepare and to stimulate more thoughtful responses.
During each interview, researchers documented participants’ answers in real-time. After each response, the interviewer read back or summarized the key points to the participant, who was then asked to confirm the accuracy of the notes. This immediate verification served as a form of member checking, enhancing the validity of the data. Personal identifiers were removed from the interview notes to protect participants’ identities, and informed consent was obtained from each participant prior to data collection. A total of 28 individuals were interviewed (see Table 1).
The sample comprised AIE (n = 7), AIEP (n = 8), and NEP (n = 13). The AIE group included one female and six males (two from the USA, three from the Netherlands [NL], and two from Portugal [PT]). AIEP comprised one female and seven males (one from the USA, three from NL, and four from PT). Finally, NEP contained nine females and four males (six from the USA, three from NL, and four from PT).
Table 1. Participant demographics
Category | Gender | Country
---|---|---
AIE | Female | USA |
AIE | Male | NL |
AIE | Male | NL |
AIE | Male | NL |
AIE | Male | PT |
AIE | Male | PT |
AIE | Male | USA |
AIEP | Male | NL |
AIEP | Male | NL |
AIEP | Male | NL |
AIEP | Male | PT |
AIEP | Male | PT |
AIEP | Male | PT |
AIEP | Male | PT |
AIEP | Female | USA |
NEP | Female | NL |
NEP | Female | PT |
NEP | Female | PT |
NEP | Female | USA |
NEP | Female | USA |
NEP | Female | USA |
NEP | Female | USA |
NEP | Female | USA |
NEP | Female | USA |
NEP | Male | NL |
NEP | Male | NL |
NEP | Male | PT |
NEP | Male | PT |
Note. Participants by expertise category, gender, and country
Data treatment procedure
Once the interviews were completed, all responses were organized for analysis. The coding process was structured around a hybrid codebook. Some codes, particularly those concerning DW Dimensions, Ethical Principles, the Impact of AI, and temporal references (current, overall, and prospective), had been identified prior to data collection, guided by the original research questions and conceptual framework. Additional codes emerged inductively from the data, capturing unanticipated themes and nuances raised by participants.
The DW Dimensions addressed in the coding process included aspects such as workload, health and safety, meaningful remuneration, opportunities, principles and values at work, and social protection. Ethical Principles in AI usage were examined in terms of accountability, fairness, privacy, regulations, and transparency. To further elucidate how AI impacts academic work, the data were also classified under two main sets of tasks—research tasks and teaching tasks—each examined for advantages, disadvantages, factors influencing adoption, and types of AI employed. References to specific timeframes allowed the research team to distinguish between present-day applications and future projections of AI use in academia.
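To make this structure concrete, the sketch below represents the hybrid codebook as a nested mapping. It is illustrative only: the category names follow the description above, but the subcodes shown are a small, hypothetical selection rather than the study's full 86-code system.

```python
# Illustrative sketch of the hybrid codebook's structure (not the study's
# verbatim code system): four top-level categories with example subcodes.
codebook = {
    "DW Dimensions": [
        "Adequate Working Time and Workload", "Health and Safety",
        "Meaningful Remuneration", "Opportunities",
        "Principles and Values at Work", "Social Protection",
    ],
    "Ethical Principles": [
        "Accountability", "Fairness", "Privacy", "Regulations", "Transparency",
    ],
    "Impact of AI": {
        # Each task set is examined for the same four aspects.
        "Research Tasks": ["Advantages", "Disadvantages",
                           "Adoption Factors", "Types of AI"],
        "Teaching Tasks": ["Advantages", "Disadvantages",
                           "Adoption Factors", "Types of AI"],
    },
    "Time": ["Current", "Overall", "Prospective"],
}
```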
Following the initial coding, the researchers performed sentiment analyses to gauge the general attitudes—positive, negative, or ambivalent—expressed by participants in each country regarding AI's impact on their work and their broader academic environments (McGill, 2023). Interview transcripts were then clustered based on coding similarities, using the Jaccard coefficient as the similarity measure (McGill, 2023). This procedure grouped segments of coded text that shared thematic or conceptual similarities, enabling the researchers to identify patterns and relationships. Finally, the clusters were characterized and interpreted in light of the STS Approach.
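As a hedged illustration of this clustering step, the sketch below computes pairwise Jaccard coefficients, J(A, B) = |A ∩ B| / |A ∪ B|, over the sets of codes assigned to each transcript and feeds the resulting distances into agglomerative clustering. The transcript identifiers and code assignments are invented placeholders; the study itself performed this analysis with qualitative-analysis software rather than with a script of this kind.

```python
# Minimal sketch: cluster transcripts by the Jaccard similarity of their codes.
# Data below are hypothetical placeholders, not the study's actual coding.
from itertools import combinations
from scipy.cluster.hierarchy import fcluster, linkage

coded_transcripts = {
    "Doc_A": {"LLMs", "Usefulness", "Task Mechanization"},
    "Doc_B": {"LLMs", "Usefulness", "Educational Enhancements"},
    "Doc_C": {"Data Handling", "Justice and Equality", "Risk Management"},
    "Doc_D": {"Data Handling", "Employment", "Risk Management"},
}

def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient: |A ∩ B| / |A ∪ B| (0 when both sets are empty)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

ids = list(coded_transcripts)
# Condensed distance vector (1 - similarity) over all transcript pairs,
# in the pair order SciPy expects for hierarchical clustering.
distances = [
    1.0 - jaccard(coded_transcripts[i], coded_transcripts[j])
    for i, j in combinations(ids, 2)
]
tree = linkage(distances, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")  # cut into two clusters
for doc, cluster_id in zip(ids, labels):
    print(doc, "-> cluster", cluster_id)
```

In the study itself, the resulting groupings were then characterized qualitatively, as described in the Results section.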
Results and discussion
Researchers began by establishing a coding system: the main codes were developed by three members of the research team after jointly discussing the content, while embedded codes emerged from the participant interviews. This culminated in four top-level categories—DW Dimensions, Ethical Principles, Impact of AI, and Time—and their respective subthemes, for a total of 86 codes. Global results reveal a recurring focus on work conditions, ethical considerations, functional applications, and temporal implications of AI. Participant data were also examined by gender, and no salient differences in AI perceptions or concerns were noted: both male and female participants voiced similar points about efficiency, ethics, and institutional support. Future work with a more gender-balanced or larger sample might reveal subtler patterns.
Sentiment analysis
Sentiment analysis was performed on all data (Fig. 1). Overall, the results show that while many interviewees remain neutral about AI’s impact on teaching and research tasks, substantial segments voiced either predominantly positive or negative views. Positive sentiments typically centered on the potential for efficiency gains and enhanced learning experiences, whereas negative comments involved concerns over ethical issues and resource inequality. A notable portion of responses fell into a mixed category, reflecting ambivalence or simultaneous optimism and apprehension about AI’s broader consequences for academia.
[See PDF for image]
Fig. 1
Sentiment Analysis
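As a minimal sketch of the tally behind this figure, the snippet below aggregates per-segment sentiment ratings into per-country distributions. The segments and ratings are invented placeholders, and the study relied on qualitative-analysis software for this step.

```python
# Illustrative sentiment tally per country; data are hypothetical placeholders.
from collections import Counter
from typing import NamedTuple

class Segment(NamedTuple):
    country: str
    sentiment: str  # "positive", "negative", "neutral", or "mixed"

segments = [
    Segment("PT", "positive"), Segment("PT", "mixed"),
    Segment("NL", "neutral"), Segment("NL", "negative"),
    Segment("USA", "positive"), Segment("USA", "neutral"),
]

by_country = {}  # country -> Counter of sentiment labels
for seg in segments:
    by_country.setdefault(seg.country, Counter())[seg.sentiment] += 1

for country, counts in by_country.items():
    total = sum(counts.values())
    shares = {label: f"{n / total:.0%}" for label, n in counts.items()}
    print(country, shares)
```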
Code frequency by cluster
Figure 2 shows all clusters, color-coded, together with the participant documents they contain. Figure 3 includes answers from every participant across current and prospective impacts in nearly all clusters; the sole exception is the "Overall" category of Cluster 6.
Exploring the impact of AI on teaching tasks (Fig. 4), we observed that in Cluster 6 all participants emphasized safety and risk management, whereas no participants in other clusters mentioned these issues. Interestingly, the theme of usefulness went unaddressed in Cluster 2 but surfaced consistently across all other clusters.
[See PDF for image]
Fig. 2
Clusters' Categories, Participants, and References. Note. Cluster 1 – Optimists; Cluster 2 – Moderates; Cluster 3 – Dreamers; Cluster 4 – Cautious Skeptics; Cluster 5 – Expansionists; Cluster 6 – Knowledgeable; Cluster 7 – Strategists
[See PDF for image]
Fig. 3
Clusters Overview – Impact of AI Throughout Time. Note. Cluster 1 – Optimists; Cluster 2 – Moderates; Cluster 3 – Dreamers; Cluster 4 – Cautious Skeptics; Cluster 5 – Expansionists; Cluster 6 – Knowledgeable; Cluster 7 – Strategists
[See PDF for image]
Fig. 4
Clusters Overview – Impact of AI on Teaching Tasks. Note. Cluster 1 – Optimists; Cluster 2 – Moderates; Cluster 3 – Dreamers; Cluster 4 – Cautious Skeptics; Cluster 5 – Expansionists; Cluster 6 – Knowledgeable; Cluster 7 – Strategists
Regarding unknown types of AI in research tasks (Fig. 5), most clusters did not discuss them at all. Meanwhile, when it comes to barriers to adoption and innovation, Clusters 1, 2, and 7 did not address these challenges, while Clusters 3, 5, and 6 reported them as a shared concern.
This divergence suggests that some groups are more engaged with the practical obstacles of AI integration than others, potentially influencing how AI is implemented in teaching strategies and curricula.
[See PDF for image]
Fig. 5
Clusters Overview – Impact of AI on Research Tasks. Note. Cluster 1 – Optimists; Cluster 2 – Moderates; Cluster 3 – Dreamers; Cluster 4 – Cautious Skeptics; Cluster 5 – Expansionists; Cluster 6 – Knowledgeable; Cluster 7 – Strategists
Regarding DW (Fig. 6), only in Cluster 2 did all participants raise issues related to workforce aging. The consensus across almost all clusters, however, was task mechanization, which emerged as a highly prevalent theme, indicating that many educators and researchers are aware of AI's potential to automate or transform certain duties within academic and professional settings.
[See PDF for image]
Fig. 6
Clusters Overview – Impact of AI on DW Dimensions. Note. Cluster 1 – Optimists; Cluster 2 – Moderates; Cluster 3 – Dreamers; Cluster 4 – Cautious Skeptics; Cluster 5 – Expansionists; Cluster 6 – Knowledgeable; Cluster 7 – Strategists
Turning to the broader impact of AI on academic tasks, we see similar variability in how different groups perceive and adopt AI tools and processes. This gap is further evident in the discussion of Ethical Principles (Fig. 7): Clusters 1, 2, and 3 did not address half of the ethical principles, with Cluster 3 remaining almost entirely silent on the subject. This absence of ethical discourse underscores the need for greater awareness and guidance on issues such as accountability, fairness, and privacy. From the evidence at hand, it is clear that while AI holds promise for both teaching and research tasks, a wide range of perspectives and experiences exists. This disparity highlights the importance of discussions about safety, usefulness, and ethics, as well as the need to bridge the knowledge gap between early adopters and those who may be slower to recognize the potential benefits and challenges of AI.
[See PDF for image]
Fig. 7
Clusters Overview – Ethical Principles. Note. Cluster 1 – Optimists; Cluster 2 – Moderates; Cluster 3 – Dreamers; Cluster 4 – Cautious Skeptics; Cluster 5 – Expansionists; Cluster 6 – Knowledgeable; Cluster 7 – Strategists
Cluster creation and analysis
We performed a cluster analysis of these codes: a systematic analysis of code co-occurrences, quantified by the Jaccard coefficient, distilled participant responses into distinct thematic groupings reflecting varying perspectives on AI's academic impact. To characterize each cluster, we examined both the frequency and sentiment of coded segments and then synthesized these findings with direct participant excerpts to capture the cluster's overarching attitude. The resulting cluster names were discussed within the research team, allowing us to assign descriptive labels that reflect the dominant perspectives—such as the recurring emphasis on AI's benefits among "The Optimists"—while ensuring consistency between the coded data and the labels. Below, we describe each cluster in turn. Table 2 offers a concise overview of the seven clusters derived from the analysis, highlighting the defining traits that characterize participants' perspectives.
Table 2. Summary of the seven clusters
Cluster | Defining Approach to AI | Key Themes
---|---|---
1. The Optimists | Enthusiastic about immediate benefits and quick efficiencies | Emphasis on large language models for daily tasks, focus on practical gains, slight concern for inequality or overreliance |
2. The Moderates | Balanced stance, weighing AI’s benefits against ethical and regulatory gaps | Forward-looking yet cautious, concerns about data privacy and resource imbalances, pragmatic acceptance of AI’s advantages |
3. The Dreamers | Visionary approach, imagining AI-driven transformations in pedagogy | Deeper influence on creativity, cultural norms, and educational innovation, hopeful that AI can enhance curiosity and equity |
4. The Cautious Skeptics | Practical yet vigilant about AI’s pitfalls, emphasizing human oversight | Apprehension about overreliance, stress on policy voids and trust issues, insistence on critical checks and training |
5. The Expansionists | AI as a catalyst to redefine professional norms, equity, and workplace design | Strong emphasis on ethics, policy, data management, and entrepreneurship; seeing AI’s potential to expand opportunities |
6. The Knowledgeable | Broad AI literacy, referencing many aspects from privacy to institutional reform | Significant focus on data handling, robust training, day-to-day workload improvements, balancing automation with human input |
7. The Strategists | Structural, policy-driven viewpoint on AI’s institutional impact | Call for frameworks that ensure equity, fair pay, and responsible deployment, coupled with recognition of AI’s efficiency gains |
Note. Each cluster's distinct viewpoint on AI's role in academia, summarized by its defining approach and key themes
Cluster 1 (the optimists)
Cluster 1 includes PT2 (Portugal, AIEP, Male) and PT3 (Portugal, AIEP, Male). Characterized as "Optimists," they chiefly focus on immediate, practical uses of AI for daily academic tasks, highlighting quick gains in efficiency and convenience over broader institutional or regulatory concerns.
In terms of Time—encompassing Current, Overall, and Prospective—they emphasize AI’s present value for tasks like translation and grammar checks, see it as gradually accepted in broader culture, and expect it to become an inevitable part of future academic workflows. While they celebrate AI’s growing utility, they also note concerns about resource inequalities or overreliance that could surface down the line.
Impacts on teaching tasks
For Teaching Tasks, participants praise Large Language Models for generating lesson plans and synthesizing content. They also caution that these tools can misinform if used without critical oversight. They describe Usefulness as accelerating searches and bridging knowledge gaps. Under Technological and Operational Factors, they emphasize the importance of solid infrastructure and training for effective AI deployment; lacking these resources undermines engagement and tool reliability. Regarding Cultural and Social Dynamics, participants see AI acceptance as deeply influenced by national attitudes and academic norms. Where supportive cultures exist, AI integration happens faster and with less skepticism. Their Ethical and Social Concerns focus on misinformation risks and responsible classroom usage. Instances of misuse prompt a collective call for vigilant oversight to preserve trust. Finally, Educational Enhancements include better literature reviews and personalized materials, which can elevate teaching quality. However, they insist human judgment and creativity must remain central to truly enrich learning.
Impacts on research tasks
Turning to Research Tasks, participants mention Others for specialized tools that go beyond well-known chatbots, such as AI-driven translators and data query systems. These niche solutions help with grammar checks and database searches but still require user expertise. When using Large Language Models, they highlight gains in advanced text processing for literature reviews or complex queries. At the same time, uneven access in lower-resourced settings poses a concern. General Machine Learning remains part of their toolkit for classification and analytics, regarded as a longstanding approach that predates today's chatbots. They stress these models can still be highly effective when aligned with researchers' domain knowledge. Under Technological and Operational Factors, participants note that robust computing resources and the right skill set are essential; without institutional support or training, AI adoption stalls or yields inconsistent results. Under Cultural and Social Dynamics, they see AI's acceptance rise and fall with societal attitudes and academic trends. Enthusiasm can surge or fade based on broader public perception, even if the tools themselves remain technically sound.
Ethical principles and decent work dimensions
Within DW Dimensions, they discuss Task Mechanization (Adequate Working Time and Workload) as partially automating duties like editing or literature scanning. Although AI lifts repetitive burdens, humans remain indispensable for oversight and nuanced decision-making, ensuring quality control and ethical integrity.
Cluster 2 (the moderates)
Cluster 2 includes USA2 (USA, AIE, Female) and USA3 (USA, AIE, Female). As "Moderates," they balance AI's forward-looking promises (efficiency, creativity) with persistent concerns about ethics, regulation, and data privacy, exhibiting neither extreme enthusiasm nor outright apprehension.
For Time—Current, Overall, and Prospective—they address how AI is already improving classroom or research workflows, reflect on broad societal impacts like data-handling risks, and anticipate more structured oversight in the future.
Impacts on teaching tasks
In Teaching Tasks, participants consistently discuss Large Language Models as powerful but riddled with potential pitfalls. They see these tools boosting productivity and idea generation, yet remain cautious about inaccuracies or ethical misuse. When talking about Socio-Economic Impacts, they highlight disparities in access to advanced AI platforms, worrying that well-funded institutions will thrive while others lag. Still, AI can be a leveling force if distributed and regulated fairly. Their Ethical and Social Concerns revolve around protecting data, minimizing bias, and maintaining a humane teacher–student dynamic. They feel strong oversight can keep AI from overshadowing genuine human interaction. Under Educational and Cognitive Impacts, participants warn about AI reducing critical thinking if used as a shortcut. Yet, they acknowledge how adaptive systems can enhance individualized learning when properly monitored. Regarding Efficiency and Productivity Enhancements, participants agree AI tools streamline lesson planning and content creation. However, they believe teachers must remain vigilant to avoid letting these efficiencies negatively impact genuine pedagogical engagement.
Impacts on research tasks
Concerning Research Tasks, they see Technical and Functional Limitations as substantial: large-scale AI can make mistakes without rigorous checks, and purely automated insights may lack depth. They also note Socio-Economic Impacts, from potential job displacement to inequitable funding for expensive AI tools, emphasizing the need for broader policy attention. Under Efficiency and Automation, they applaud how AI accelerates data analysis and reduces repetitive tasks, freeing time for deeper research inquiries. Still, they question whether current infrastructures adequately protect against inaccuracies or overhype.
Ethical principles and decent work dimensions
For Ethical Principles, Data Handling arises as a central theme, highlighting privacy, potential breaches, and the moral responsibility to protect personal information. On Justice and Equality, participants stress that unregulated AI could replicate existing societal biases or widen resource gaps. Responsibility remains with human users, who must interpret, refine, and validate AI outputs rather than delegating blame to technology. Under Employment (Social Protection), they discuss job security for older faculty or staff, acknowledging new skill requirements in an AI-driven landscape. On Risk Management (Health and Safety), they mention AI’s role in hazard detection but warn that mistakes in critical environments—like healthcare—could be catastrophic without human review. Finally, Work Rehumanization and Task Automation (Fulfilling and Productive Work) highlight how offloading mundane chores can let people focus on more meaningful tasks. Yet, participants emphasize humans must stay central, ensuring that AI acts as an assistant rather than a replacement.
Cluster 3 (the dreamers)
Cluster 3 includes PT4 (Portugal, AIE, Male), NL9 (Netherlands, NEP, Male), and USA4 (USA, AIEP, Female). As "Dreamers," they explore AI's deeper influence on pedagogy, creativity, and cultural norms, highlighting possible paradigm shifts beyond simple task efficiencies.
In discussing Time (Current, Overall, Prospective), they note how AI’s immediate classroom utility foreshadows bigger transformations in how educators envision their roles, from delivering content to orchestrating creative or collaborative learning.
Impacts on teaching tasks
Regarding Teaching Tasks, they give special attention to Large Language Models, which they see not just as productivity tools but as catalysts for reimagining lesson design and critical thought exercises. They do, however, voice concern that students might rely on AI-generated outlines or narratives too heavily, risking intellectual complacency. With respect to Socio-Economic Impacts, participants note potential divides if advanced AI remains costly, but remain hopeful that democratized tools could reduce global educational inequalities. They frame Ethical and Social Concerns around fairness and authenticity, describing scenarios where AI might overshadow genuine human discourse if left unchecked. On Educational and Cognitive Impacts, they perceive AI as transformative for creativity and curiosity, although it must not replace human-driven inquiry and problem-solving. Discussions of Efficiency and Productivity Enhancements highlight time savings on administrative tasks, which they see as freeing educators to experiment and mentor more holistically.
Impacts on research tasks
Shifting to Research Tasks, the code Others captures niche AI tools for brainstorming or literature mapping; the cluster sees such solutions sparking imaginative leaps, even if these tools are overshadowed by bigger chatbots. Innovation and Creativity Enhancement stands out: participants repeatedly mention AI's capacity to provoke bold ideas or reveal novel research angles. Still, they recognize the danger of fixating on AI "surprises" without applying rigorous human analysis. They bring up Ethical and Accountability Concerns, highlighting that true innovation must keep moral obligations front and center—AI can propose unvetted theories, but researchers remain accountable for verifying them. Finally, Barriers to Adoption & Innovation revolve more around cultural attitudes (like fear or hype) than purely technical constraints, suggesting that broad acceptance of AI's creative potential depends on a supportive intellectual climate.
Ethical principles and decent work dimensions
Throughout DW Dimensions, they see Work Rehumanization (Fulfilling and Productive Work) as a genuine possibility when AI takes over mundane tasks, letting educators and researchers explore deeper, more human-centered activities. They also stress that Professional Changes (Adequate Working Time and Workload) must be approached thoughtfully: AI can expand academic roles into realms of mentorship, creativity, and global collaboration, but only if institutions focus on training and open-mindedness.
Cluster 4 (the cautious skeptics)
Cluster 4 comprises USA7 (USA, AIE, Female), NL6 (Netherlands, AIE, Male), and USA8 (USA, AIE, Male). As "Cautious Skeptics," their perspective emphasizes AI's immediate practicality while constantly questioning over-reliance, reminding us that human expertise remains indispensable.
On Time—Current, Overall, Prospective—participants focus heavily on present-day applications like language models and advanced search engines, which are already taking over mundane tasks in teaching. Yet they see their broader impact on academia as uncertain, expecting a slow shift if trust issues and policy voids remain unresolved.
Impacts on teaching tasks
For Teaching Tasks, User Friendliness appears as crucial: AI tools can be helpful but often require prompt engineering or technical know-how that not all educators possess. Participants acknowledge Usefulness in offloading repetitive chores, though they warn that incorrectly set expectations about AI’s capabilities might backfire. When Cultural and Social Dynamics come into play, participants stress that departmental norms and personal attitudes shape whether AI is widely adopted. Efficiency and Productivity Enhancements ring true for tasks like quiz or resource generation, but they repeatedly point out that final oversight must stay with the professor.
Impacts on research tasks
Meanwhile, General Machine Learning and other AI solutions do simplify data analysis, but the discussion hints at Ethical and Accountability Concerns if professors blindly trust outputs. They see partial Innovation and Creativity Enhancement, although caution prevails: AI can spark new approaches but does not replace human insight.
Ethical principles and decent work dimensions
The code Data Handling resonates strongly, as participants highlight the importance of privacy measures when AI deals with sensitive student information. On Equality and Responsibility, they insist that fairness depends on transparent usage guidelines and that ultimate responsibility for final decisions cannot be delegated. Discussions about Employment revolve around the risk that full automation could undercut teaching roles, leading them to advocate for upskilling. Participants interpret Risk Management as building checks into these systems so that errors do not cascade into unjust outcomes. Work Rehumanization and Task Automation come up as dual possibilities: AI could strip away routine tasks, letting educators focus on interpersonal teaching, or it might reduce personal contact if institutions chase cost savings. They see Task Mechanization as the more realistic near-term path, in which AI assists but does not supplant faculty roles, while Professional Changes require careful institutional support to avoid marginalizing those slower to adapt.
Cluster 5 (the expansionists)
Cluster 5 includes NL1 (Netherlands, AIEP, Male), NL3 (Netherlands, NEP, Male), USA6 (USA, NEP, Female), and NL2 (Netherlands, AIE, Male). As "Expansionists," they weave together a series of themes—ethics, policy, data management, entrepreneurship—viewing AI as a catalyst for rethinking professional norms, equity, and workplace arrangements.
On Time—Current, Overall, and Prospective—they discuss how AI is already reshaping routines and foresee far-reaching changes, from job structures to emerging academic standards. Their broader lens also evaluates how AI’s normalization in daily tasks might shift cultural values and spark entrepreneurial initiatives.
Impacts on teaching tasks
Concerning Teaching Tasks, they describe Large Language Models as widely adopted yet still prone to misinformation and bias. Many participants value these models for generating examples, translating content, and tailoring lessons, but they stress that educators must remain vigilant. Under Efficiency and Productivity Enhancements, participants celebrate swift content creation but note it can raise workplace expectations, pressuring professors to do more in less time. Their focus on Educational Enhancements shows a collective hope that AI can enrich learning experiences—yet they reiterate that real learning occurs when human teachers refine AI outputs to match student needs.
Impacts on research tasks
Discussions around Others highlight a range of specialized AI tools in research, from grammar checkers to data-mining bots, each presenting new ways to accelerate scholarship. With General Machine Learning in the mix, they acknowledge older algorithms remain crucial in many labs. Technological and Operational Factors emphasize that staff must be trained and that institutions need supportive policies for effective AI usage, or risk patchy outcomes. Meanwhile, Cultural and Social Dynamics confirm that acceptance is not purely technical: perceptions, traditions, and cross-institution relationships matter greatly.
Ethical principles and decent work dimensions
Throughout DW Dimensions, participants discuss Knowledge Democratization and Entrepreneurship at length, seeing AI as a route to expand access and drive innovation. They highlight Technological Investment as a double-edged sword: underfunded regions may fall behind, while those that invest see swift gains. On Fair Pay and Risk Management, participants debate how cost savings from AI might (or might not) translate into equitable wages or safer work environments. Yet they remain optimistic about Work Rehumanization—the idea that removing mundane tasks can let people engage in more fulfilling work. Task Automation spares humans from repetitious chores, though participants fear a slippery slope toward de-skilling. Ultimately, they see Professional Changes as inevitable, urging institutions to shape these shifts so that AI fosters rather than fragments the academic workforce.
Cluster 6 (the knowledgeable)
Cluster 6 includes PT6 (Portugal, NEP, Female), PT5 (Portugal, NEP, Male), USA9 (USA, NEP, Female), PT9 (Portugal, NEP, Female), PT10 (Portugal, AIEP, Male), PT7 (Portugal, NEP, Male), and PT8 (Portugal, NEP, Male). As "The Knowledgeable," they exhibit broad AI literacy, referencing nearly every aspect of AI's potential impacts, from privacy and teaching innovations to job markets and institutional reforms.
Discussing Time—Current, Overall, Prospective—they see AI as already ubiquitous in academia, from chat-based feedback to data analytics, while anticipating more advanced systems that could reshape entire research fields. They maintain that ongoing evolutions demand vigilant adaptation, not complacency or moral panic.
Impacts on teaching tasks
For Teaching Tasks, they talk about Large Language Models as routine aids for creating prompts or clarifying concepts, pointing out that well-designed prompts can significantly improve output quality. Under Usefulness, they commend AI's capacity to streamline routine tasks and free educators to mentor or develop creative curricula. Technological and Operational Factors surface repeatedly: these participants stress that training and platform integration are vital to success, and that institutions must budget realistically for them. Cultural and Social Dynamics also matter, as open-minded departments adopt AI faster, whereas more traditional ones remain hesitant. On Efficiency and Productivity Enhancements, they see real gains if teachers remain critical of AI's flaws; over-trusting AI, however, can produce unreliable shortcuts.
Impacts on research tasks
Shifting to Research Tasks, they mention Ethical and Legal Concerns extensively, citing the risk of plagiarism, the need for data protection, and questions of appropriate authorship. Meanwhile, Cultural and Social Dynamics persist: these participants believe communities that share knowledge and encourage collaborative experimentation with AI are best positioned to benefit. Technical and Functional Limitations include both interpretability issues and the risk that AI might falter without up-to-date training data. Under Socio-Economic Impacts, they speak of job displacement alongside newly created roles, concluding that robust continuing education programs can mitigate negative effects. Ethical and Accountability Concerns remain central, with participants asserting that researchers must thoroughly assess AI-driven analyses or face retractions and ethical breaches. They also cite Barriers to Adoption & Innovation, such as costs, cultural hang-ups, or confusion over intellectual property, urging institutions to clarify guidelines so curiosity can flourish. Alongside Efficiency & Automation, they note a risk that researchers might skip deeper analysis if AI outputs are simply accepted.
Ethical principles and decent work dimensions
Finally, their outlook on DW Dimensions includes discussing Risk Management to ensure AI’s errors do not harm individuals or institutions, Work Rehumanization where AI offsets mundane labor, and Task Mechanization as a balanced approach that harnesses automation without removing human oversight. They reference Professional Changes that demand upskilling, insisting these transitions are not merely technical but also ethical, emphasizing the importance of preserving human judgment.
Cluster 7 (the strategists)
Cluster 7 includes NL4 (Netherlands, NEP, Male), NL7 (Netherlands, AIE, Male), USA1 (USA, AIEP, Female), NL8 (Netherlands, NEP, Female), USA4 (USA, AIEP, Male), NL5 (Netherlands, AIEP, Male), and PT1 (Portugal, AIEP, Male). As "The Strategists," they emphasize the structural, policy-driven side of AI, highlighting the need for frameworks that promote justice, equality, and responsible deployment while still acknowledging efficiency gains.
They look at Time (Current, Overall, and Prospective) as a continuum in which AI’s adoption expands steadily, although unevenly. Each participant expects near-future developments to intensify debates over equity, pay structures, and job security, believing strategic oversight must keep pace.
Impacts on teaching tasks
Regarding Teaching Tasks, Large Language Models are praised for generating resources and supporting instructors. However, they see potential cultural and institutional barriers, prompting calls for standards that explicitly define safe and ethical usage. Usefulness is undisputed, but these participants want firm policies preventing misuse or deception in assignments and classroom materials. Cultural and Social Dynamics highlight how institutional leadership and shared norms can transform AI from a novelty into a well-integrated teaching tool. Efficiency and Productivity Enhancements remain strong motivators—less busywork means more creative or student-focused approaches—but only if teachers preserve autonomy and set boundaries. They believe an overreliance on AI risks reducing teaching to administrative supervision.
Impacts on research tasks
Under Research Tasks, Technological and Operational Factors loom large: they argue that universities and labs need consistent guidelines to handle data and define best practices. They also mention Technical and Functional Limitations, believing AI's black-box tendencies can erode trust if not addressed by policy. On Efficiency and Automation, these participants encourage harnessing AI for routine analysis to free researchers for conceptual breakthroughs. Yet they also call for rigorous checks to avoid letting speed trump quality.
Ethical principles and decent work dimensions
Data Handling stands out as a pressing concern—large data sets enable advanced AI but raise serious privacy and ethical questions. Within Justice and Equality, they demand equitable access to AI technology and fear that unregulated usage can deepen social or financial disparities. Fair Pay arises too: institutions must ensure cost savings from AI do not undercut academic wages. Task Mechanization serves as the foundation of their strategy: partial automation can enhance productivity but still requires human insight, ensuring that a reimagined workplace remains humane and fair to all.
Comparative insights: distinguishing AIE, AIEP, and NEP perspectives
To illuminate how expertise shapes perceptions of AI, we examined whether participant category—AIE, AIEP, or NEP—influenced cluster membership and emergent themes. Notably, the "Cautious Skeptics" (Cluster 4) and "Moderates" (Cluster 2) were exclusively AIEs, highlighting how some AI experts emphasize caution and regulatory gaps (Cluster 4) while others strike a balance between AI's creative advantages and ethical pitfalls (Cluster 2). By contrast, the "Optimists" (Cluster 1) were entirely AIEPs, primarily highlighting AI's immediate benefits for academic tasks. Clusters with mixed membership—such as "Expansionists" (Cluster 5), "Dreamers" (Cluster 3), and "Strategists" (Cluster 7)—tended to feature discussions of policy, equity, and deeper pedagogical transformations, suggesting that the combination of AI expertise and teaching experience drives a broader, more future-oriented dialogue. Meanwhile, the "Knowledgeable" (Cluster 6) was predominantly NEPs plus one AIEP, reflecting strong awareness of AI's technical possibilities yet foregrounding practical concerns (e.g., data handling, day-to-day workload). Overall, AI experts—whether professors or not—often voiced nuanced ethical and accountability considerations, whereas NEPs consistently stressed resource constraints and the need for institutional guidance. This comparative lens shows that domain expertise conditions both the optimism and skepticism surrounding AI integration, with AIEs more likely to highlight hidden pitfalls and regulatory blind spots, while AIEPs and NEPs concentrate on balancing efficiency gains against pedagogical integrity and institutional realities.
The socio-technical systems approach
By mapping the emergent themes identified through clustering onto established STS principles—such as joint optimization, boundary management, and minimal critical specification—researchers were able to evaluate how each cluster’s perspectives on AI adoption and implementation align with or diverge from core socio-technical constructs. Across all clusters, a strong alignment with the STS Approach was found. Cluster 1 (The Optimists) illustrates key STS principles of variance control and minimal critical specification by leveraging AI for immediate, practical tasks (e.g., grammar checks) while acknowledging that resource inequalities undermine these tools’ benefits. Cluster 2 (The Moderates) echoes joint optimization—they emphasize AI’s value for efficiency and creativity yet highlight ethical, regulatory, and data-privacy challenges, reflecting STS’s need for boundary management and alignment of technology with social norms and policies (Sony & Naik, 2020). Cluster 3 (The Dreamers) evokes STS’s open systems thinking by suggesting deeper transformations in pedagogy and creativity, thus championing designing for human values and environmental embeddedness, ensuring that AI democratizes rather than restricts knowledge. Cluster 4 (The Cautious Skeptics) reveals the importance of support congruence and minimal critical specification: participants warn of overreliance on AI and recommend checks and training so humans still oversee critical decisions. Cluster 5 (The Expansionists) portrays boundary spanning—they propose AI’s catalytic role across ethics, policy, and entrepreneurship while calling for robust infrastructure, training, and regulatory fairness, echoing STS imperatives of knowledge democratization and social-technical alignment (Makarius et al., 2020). Cluster 6 (The Knowledgeable) reflects advanced STS concepts of information flow and multiskilling, advocating for broad AI literacy so employees can handle ethical or interpretability issues while ensuring ongoing learning to maintain joint optimization (Sony & Naik, 2020). Finally, Cluster 7 (The Strategists) highlights boundary management and support congruence, demanding frameworks to promote justice, equality, and responsible AI deployment, all while preserving efficiency gains and work rehumanization, consistent with STS’s continuing emphasis on design for both people and technology (Appelbaum, 1997).
Conclusion
Our study set out to examine (RQ1) how various academic and AI expert communities perceive the impact of AI on teaching and research tasks through the lens of Ethical Principles and DW Dimensions, and (RQ2) how these perceptions, informed by the STS Approach, might shape the future evolution of academic work. By employing semi-structured interviews across three participant groups—AIE, AIEP, and NEP—and systematically analyzing the data through coding, sentiment analysis, and a clustering procedure, we identified seven distinct clusters of attitudes that capture a broad spectrum of perspectives. From enthusiastic "Optimists" and measured "Moderates" to visionary "Dreamers," each cluster articulates both the immediate practical gains and the deeper institutional, ethical, and pedagogical transformations at stake. While some participants highlight the immediate efficiencies of Large Language Models for teaching and research, others raise concerns about fairness, data privacy, and regulatory gaps that could undermine trust. The findings further reveal that contextual elements—ranging from work culture to institutional policy—can profoundly influence how these tools are applied and perceived, and whether they are integrated in a way that supports truly "decent" academic work. Addressing RQ1, the findings confirm that AI's role in academia is both widespread and nuanced: Large Language Models emerge as the most prominent tools, but adoption and perceived impact differ markedly depending on user skill sets, cultural attitudes, and policy frameworks. Responding to RQ2, our application of the STS Approach suggests that the interplay of technical and social subsystems is critical for sustainable AI adoption in higher education. Across all clusters, participants call for thoughtful policy frameworks, institutional guidelines, and equitable resource distribution to avoid exacerbating inequalities or undermining academic values. Aligning with the DW Dimensions, many emphasize that dignity, balanced workloads, and opportunities for professional development are indispensable as routine tasks become increasingly automated.
Beyond confirming the prevalence of Large Language Models, this research shows that AI's impact is neither uniform nor uncontested. Instead, it is shaped by socio-technical factors such as institutional training, user skill sets, funding, and cultural norms, as well as by Ethical Principles like accountability, fairness, and transparency. These insights emphasize the importance of a deliberate, context-sensitive approach to AI integration, one that aligns technology with human values and systemic safeguards. Looking ahead, the prevailing sentiment across clusters is cautiously optimistic: participants generally acknowledge AI's capacity to enhance efficiency and creativity but stress that robust policies, training, ethical guardrails, and equitable access are necessary to ensure just and meaningful outcomes for teachers, researchers, and students alike. Ultimately, this study's results serve as a foundation for future research and practical frameworks that can guide institutions toward successful AI adoption in contexts beyond the studied countries. By exploring the ethical and organizational implications of AI in academia, the study offers insights that support institutional policy development and practice transformation. These contributions align with SDG 8 of the UN 2030 Agenda (United Nations, 2015), promoting decent work, inclusive innovation, and responsible technological integration in higher education.
Building on our findings, several practical steps can guide higher education institutions in responsibly and effectively integrating AI into teaching and research. Universities should create clear AI policies, detailing responsibilities, data protection, and acceptable uses. Structured training programs will help faculty gain critical AI literacy, fostering appropriate adoption. Equitable resource allocation—such as computing infrastructure and licenses—ensures AI benefits all academic staff and students. Regular ethical audits and stakeholder consultations can mitigate risks to workload, authorship, and work quality. Finally, aligning AI practices with core academic values preserves human oversight, reinforces Decent Work standards, and sustains an inclusive learning environment.
This research contributes to a growing body of scholarship at the intersection of higher education, technology, and labor ethics, offering a multi-faceted view of AI's influence on academia. By linking the STS Approach with the DW Dimensions, the study illuminates both the ethical implications and the systemic drivers of AI adoption. Specifically, STS highlights the need to balance technical and social considerations, such as institutional policies, training, and cultural norms, whereas the DW Dimensions spotlight dignity, fairness, and human rights in academic labor. These combined perspectives offer insights that are directly relevant to educational technology specialists, AI ethicists, and policymakers committed to shaping responsible and equitable AI integration in higher education.
Acknowledgements
We would like to express our gratitude to Damion Verboom for his invaluable assistance in creating the data visualizations used in our cluster and sentiment analyses. His expertise enhanced the clarity and impact of our findings, and we are deeply appreciative of his contributions to this research.
Author contributions
ADR collected and analyzed the data, wrote the manuscript, and prepared the figures and tables. LP supervised the research, contributed to the research design and data collection, and critically reviewed the manuscript. FZ contributed to the conceptualization of the research, provided sources, and reviewed the manuscript. FO provided feedback on the research methodology, contributed to data interpretation, and reviewed the manuscript. NRS supported the qualitative data analysis and critically reviewed the final document. All authors read and approved the final manuscript.
Funding
This research was funded by the Tokyo Foundation’s Ryoichi Sasakawa Young Leaders Fellowship Fund (SYLFF).
Data availability
The datasets generated and/or analyzed during the current study are not publicly available due to GDPR data protection regulations and participant confidentiality agreements. Data sharing is not possible as per the research ethics committee approval, which prohibits any form of data distribution outside the research team.
Declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Andras, P., Esterle, L., Guckert, M., Han, T. A., Lewis, P. R., Milanovic, K., Payne, T., Perret, C., Pitt, J., Powers, S. T., Urquhart, N., & Wells, S. (2018). Trusting intelligent machines: Deepening trust within socio-technical systems. IEEE Technology and Society Magazine, 37(4), 76–83. https://doi.org/10.1109/MTS.2018.2876107
Appelbaum, S. H. (1997). Socio-technical systems theory: An intervention strategy for organizational development. Management Decision, 35.
Balta, N. (2023). Ethical considerations in using AI in educational research. Journal of Research in Didactical Sciences, 2.
Bansal, G., & Heath, D. (2023). Ten propositions on codependence of AI and AI ethical framework adoption: View from industry and academia. Journal of Information Technology Case and Application Research, 25.
Barrios Tao, H., Pérez, V. R. D., & Guerra, Y. M. (2019). Artificial intelligence and education: Challenges and disadvantages for the teacher. Arctic Medical Research, 72.
Bearman, M., Ryan, J., & Ajjawi, R. (2022). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education. Advance online publication. https://doi.org/10.1007/s10734-022-00937-2
Berens, P., Cranmer, K., Lawrence, N. D., von Luxburg, U., & Montgomery, J. (2023). AI for science: An emerging agenda. arXiv. https://doi.org/10.48550/ARXIV.2303.04217
Bhardwaj, G., Singh, S. V., & Kumar, V. (2020). An empirical study of artificial intelligence and its impact on human resource functions. 2020 International Conference on Computation, Automation and Knowledge Management (ICCAKM). https://doi.org/10.1109/iccakm46823.2020.9051544
Bjola, C. (2021). AI for development: Implications for theory and practice. Oxford Development Studies, 50.
Braganza, A., Chen, W., Canhoto, A., & Sap, S. (2021). Productive employment and decent work: The impact of AI adoption on psychological contracts, job engagement and employee trust. Journal of Business Research, 131.
Calvert, S., Kennedy, M. L., Lynch, C., & O’Brien, J. (2020). Emerging Technologies for Research and Learning: Interviews with Experts. Association of Research Libraries, Coalition for Networked Information, and EDUCAUSE. https://doi.org/10.29242/report.emergingtech2020.interviews
Chinta, S. V., Wang, Z., Yin, Z., Hoang, N., Gonzalez, M., Le Quy, T., & Zhang, W. (2024). FairAIED: Navigating fairness, bias, and ethics in educational AI applications [arXiv:2407.18745]. arXiv. https://arxiv.org/abs/2407.18745
Deshpande, A., Picken, N., Kunertova, L., De Silva, A., Lanfredi, G., & Hofman, J. (2021). Improving working conditions using Artificial Intelligence. Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament. https://www.europarl.europa.eu/RegData/etudes/STUD/2021/662911/IPOL_STU(2021)662911_EN.pdf
dos Santos, N. R. (2019). Decent work expressing universal values and respecting cultural diversity: Propositions for intervention. Psychologica, 62.
Eacersall, D., Pretorius, L., Smirnov, I., Spray, E., Illingworth, S., Chugh, R., Strydom, S., Stratton-Maher, D., Simmons, J., Jennings, I., Roux, R., Kamrowski, R., Downie, A., Thong, C. L., & Howell, K. A. (2024). Navigating ethical challenges in generative AI-enhanced research: The ETHICAL framework for responsible generative AI use. arXiv.
Ferraro, T., Pais, L., dos Santos, N. R., & Moreira, J. M. (2018). The decent work questionnaire: Development and validation in two samples of knowledge workers. International Labour Review, 157.
Ghosh, K., & Sadeghian, S. (2024). The impact of AI on perceived job decency and meaningfulness: A case study. In Symposium on Human-Computer Interaction for Work (CHIWORK ‘24), June 23–27, 2024, Newcastle-upon-Tyne, UK. ACM. https://doi.org/10.48550/arXiv.2406.14273
Herrmann, T., & Pfeifer, S. (2023). Keeping the organization in the loop: A socio-technical extension of human-centered artificial intelligence. AI & Society, 38, 1523–1542. https://doi.org/10.1007/s00146-022-01391-5
High-Level Expert Group on Artificial Intelligence (2019). A definition of AI: Main capabilities and scientific disciplines. European Commission. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
Khan, A. A., Akbar, M. A., Fahmideh, M., Liang, P., Waseem, M., Ahmad, A., Niazi, M., & Abrahamsson, P. (2022). AI ethics: An empirical study on the views of practitioners and lawmakers [arXiv:2207.01493]. arXiv. https://arxiv.org/abs/2207.01493
Khatri, B. B., & Karki, P. D. (2023). Artificial intelligence (AI) in higher education: Growing academic integrity and ethical concerns. Nepalese Journal of Development and Rural Studies, 20.
Li, M., Xie, Q., Enkhtur, A., Meng, S., Chen, L., Yamamoto, B. A., Cheng, F., & Murakami, M. (2024). A framework for developing university policies on generative AI governance: A cross-national comparative study [Preprint]. Manuscript submitted for publication in Studies in Higher Education.
Lund, B., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74.
Maity, S. (2019). Identifying opportunities for artificial intelligence in the evolution of training and development practices. Journal of Management Development, 38.
Makarius, E. E., Mukherjee, D., Fox, J. D., & Fox, A. K. (2020). Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization. Journal of Business Research, 120, 262–273. https://doi.org/10.1016/j.jbusres.2020.07.045
McGill (2023). NVivo guide. https://libraryguides.mcgill.ca/c.php?g=729302&p=5232394
Mumford, E. (2006). The story of socio-technical design: Reflections on its successes, failures and potential. Information Systems Journal, 16(4), 317–342. https://doi.org/10.1111/j.1365-2575.2006.00221.x
Mumtaz, S., Carmichael, J., Weiss, M., & Nimon-Peters, A. (2025). Ethical use of artificial intelligence-based tools in higher education: Are future business leaders ready? Education and Information Technologies, 30, 7293–7319. https://doi.org/10.1007/s10639-024-13099-8
Özkiziltan, D., & Hassel, A. (2021). Artificial intelligence at work: An overview of the literature. SSRN. https://doi.org/10.2139/ssrn.3796746
Pastor Escuredo, D. (2021). Future of work: Ethics. SSRN. https://doi.org/10.2139/ssrn.3935330
Pisica, A. I., Edu, T., Zaharia, R., & Zaharia, R. (2023). Implementing artificial intelligence in higher education: Pros and cons from the perspectives of academics. Societies, 13.
Radanliev, P., Santos, O., Brandon-Jones, A., & Joinson, A. (2024). Ethics and responsible AI deployment. Frontiers in Artificial Intelligence, 7, Article 1377011. https://doi.org/10.3389/frai.2024.1377011
Siau, K., & Wang, W. (2020). Artificial intelligence (AI) ethics: Ethics of AI and ethical AI. Journal of Database Management, 31.
Sony, M., & Naik, S. (2020). Industry 4.0 integration with socio-technical systems theory: A systematic review and proposed theoretical model. Technology in Society, 61, 101248. https://doi.org/10.1016/j.techsoc.2020.101248
Thomas, R., Bhosale, U., Shukla, K., & Kapadia, A. (2023). Impact and perceived value of the revolutionary advent of artificial intelligence in research and publishing among researchers: A survey-based descriptive study. Science Editing, 10.
Toms, A., & Whitworth, S. (2022). Ethical considerations in the use of machine learning for research and statistics. International Journal of Population Data Science.
United Nations (2015). Transforming our world: The 2030 agenda for sustainable development. https://sdgs.un.org/2030agenda
Verboom, D. E., & Rebelo, A. D. (2025, in press). The consequences of large language models on the development of students' cognitive abilities. In N. R. dos Santos, C. S. Semedo, & J. Viseu (Eds.), Artificial intelligence, higher education and decent work. Universidade de Évora. ISBN 978-972-778-436-3.
World Economic Forum (2023). Future of Jobs Report 2023: Insight Report. https://www.weforum.org/reports/the-future-of-jobs-report-2023/
Yu, H., & Nazir, S. (2021). Role of 5G and artificial intelligence for research and transformation of English situational teaching in higher studies. Mobile Information Systems, 2021.
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0
© The Author(s) 2025. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).