1. Introduction
Generative artificial intelligence (GenAI) is becoming an increasingly popular tool across the globe. GenAI refers to algorithms used to generate content in various forms, such as text, audio, and pictures (Kohnke et al., 2023). It has been adopted across many workplaces to automate a range of tasks quickly and efficiently (Fui-Hoon Nah et al., 2023). A survey of over 1000 university students revealed that approximately one-third were using GenAI for assistance with writing assignments (Cassidy, 2023). Many students are using GenAI to improve their learning experience, such as asking it to explain concepts for enhanced understanding (Chiu et al., 2023). Additionally, some university staff are using GenAI to assist with lesson planning and innovating their teaching practices (Baidoo-Anu and Ansah, 2023). Hence, discussions are now taking place regarding GenAI’s implications for higher education (Sullivan et al., 2023).
ChatGPT is one example of a GenAI tool that can be utilized for executing educational tasks (Baidoo-Anu and Ansah, 2023). Launched by OpenAI in November 2022, ChatGPT can generate detailed responses to text inputs (Wu et al., 2023). Research has underscored the efficacy of GenAI tools like ChatGPT, which sometimes even outperform students. For instance, in one study, ChatGPT outperformed students on a 40-question multiple-choice anatomy test (Talan and Kalınkara, 2023). These findings imply that GenAI tools can significantly alter the landscape of higher education. This has led to some apprehension among university staff, some of whom fear that such technology could eventually replace human educators (Chan and Tsi, 2023). Thus, the role of ChatGPT and other GenAI tools in the future of higher education could be substantial.
Currently, views on the use of GenAI in universities are conflicting. One study reported that 47% of university staff felt either somewhat or extremely comfortable with students using ChatGPT during their studies, implying that the remaining majority did not (Amani et al., 2023). There thus appears to be a divide among university faculties concerning the adoption of GenAI within academic settings. Debates are ongoing about the appropriateness of ChatGPT’s role in universities, with some institutions imposing bans on its use (Dibble, 2023).
Concerns about GenAI from university staff stem from several factors: the implications for plagiarism and academic integrity (Petricini et al., 2023); the possibility that reliance on ChatGPT could hinder the development of critical thinking skills (Stepanechko and Kozub, 2023); privacy concerns linked to the use of GenAI tools (Amani et al., 2023); and issues of equitable access, especially the disparities between paid and free versions (Firat, 2023).
Rather than banning the use of GenAI at university completely, there is interest amongst staff in exploring the advantages and disadvantages of employing GenAI tools such as ChatGPT in higher education and understanding the implications of their integration (Firat, 2023). Some staff recognize that GenAI is likely to remain in use and therefore consider it crucial to understand and embrace the technology (Amani et al., 2023). Other positive staff perspectives on ChatGPT encompass potential benefits for learning, for instance, deconstructing complex topics, helping students who are too anxious to ask questions in person, and prompting a positive reassessment of current teaching practices (Amani et al., 2023; Firat, 2023).
However, the current literature indicates a prevalent lack of knowledge among university staff regarding GenAI and its implications for students. Furthermore, research reveals that many university educators have limited experience in using GenAI, leading to uncertainty and anxiety surrounding its implementation in higher education (Chiu et al., 2023; Petricini et al., 2023). With GenAI tools such as ChatGPT still in their early stages, the long-term consequences of using such platforms are unknown (Budhwar et al., 2023). This uncertainty leaves university faculty in search of guidance on the uses, benefits, and limitations of GenAI (Petricini et al., 2023). Understanding the perspectives and potential concerns of university staff regarding the use of GenAI in assessments can help shape pedagogical practices and influence university policies. Additionally, staff can collaborate with students to guide their use of GenAI in the university context for optimal implementation.
Much of the existing research outlined above has focused on understanding staff perceptions of GenAI through methods such as focus groups, interviews, and surveys, typically featuring smaller sample sizes and specific geographical locations (Dhamija and Dhamija, 2024; Firat, 2023; Wilkinson et al., 2024). In the current research, we employed a qualitative analysis of social media posts retrieved from X (formerly Twitter) to explore higher education staff’s perspectives regarding students’ use of ChatGPT in academic settings. Analysis of social media posts has been used in a variety of contexts (Williams et al., 2013), for example, healthcare (Fu et al., 2023), mental health (Talbot et al., 2023), public health (Sleigh et al., 2021; Diddi, 2015), and education (Hadi Mogavi et al., 2021). Analyzing social media posts provides access to a geographically broader and more diverse range of opinions due to the prevalent use of social media by academics (Jordan and Weller, 2018). Social media, and Twitter/X in particular, provides a dynamic and organic space for academic discussions, making it a valuable data source for exploring staff perspectives on emerging technologies in higher education. Previous research has demonstrated that educators and academics frequently use Twitter/X for professional dialogue, networking, and sharing insights about pedagogical practices (Veletsianos, 2012). By leveraging this rich source of naturally occurring discourse, this study captures real-time reflections and discussions on GenAI, offering insights that may not emerge through traditional qualitative methods. The research questions were as follows:
What are the perspectives of higher education teaching staff on students’ use of ChatGPT in academic settings as expressed on the social media platform, X?
What approaches do higher education teaching staff propose for the appropriate use of ChatGPT by students in academic practices?
2. Methods
2.1. Study Design
A qualitative content analysis was conducted on publicly available posts (formerly known as “tweets”) from the social media site X, spanning from 1 April 2023 to 1 July 2023. University ethical approval was obtained (ref: LRS/DP-22/23-36588).
2.2. Researcher Positionality
There were four researchers: one postgraduate MSc student (MW), one undergraduate student (CH), one research assistant (SP), and their project supervisor (RU). The two student researchers conducted data collection, with all researchers contributing to data analysis and the write-up of the research. The two student researchers and the research assistant received training in qualitative data analysis through their degree courses and additional training from the project supervisor. The project supervisor, a university educator during a period of increasing integration of GenAI in academic settings, provided a unique perspective during the data analysis phase. The student researchers and the research assistant (who had recently graduated), on the other hand, contributed a different perspective grounded in their recent experiences of completing assignments during their degrees, giving them insight into how GenAI can provide academic assistance.
2.3. Eligibility Criteria
For this analysis, only posts written by or targeted towards university staff regarding ChatGPT and higher education were included; those from or targeted towards other groups (e.g., students) were excluded. The included posts could originate from any location globally, provided they were written in English.
2.4. Search Strategy
The search strategy focused on #chatgpt, as ChatGPT was the most widely used and popularized GenAI tool at the time and remains highly prevalent in academia. This provided a focused dataset directly related to the research objectives. Using a broader term such as #ai could have generated many irrelevant social media posts, diluting the specificity and applicability of the findings.
The research team developed a list of keywords to identify relevant posts. The final list comprised 18 search term combinations, each applied individually within X: #highereducation AND #ChatGPT; #highereducation AND ChatGPT; “higher education” AND #ChatGPT; “higher education” AND ChatGPT; #HigherEd AND #ChatGPT; #HigherEd AND ChatGPT; HigherEd AND #ChatGPT; HigherEd AND ChatGPT; #academictwitter AND #ChatGPT; #academictwitter AND ChatGPT; #AcademicChatter AND #ChatGPT; #AcademicChatter AND ChatGPT; #teachertwitter AND #ChatGPT; #teachertwitter AND ChatGPT; #University AND #ChatGPT; #University AND ChatGPT; University AND #ChatGPT; University AND ChatGPT.
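For transparency, the 18 combinations above can also be enumerated programmatically. The following Python sketch is illustrative only: the study itself ran these searches manually in X’s interface, and the snippet simply reconstructs the query strings by pairing each higher-education term with a ChatGPT term.

```python
# Illustrative enumeration of the 18 search term combinations.
# The actual searches were conducted manually in X's interface; this sketch
# only reconstructs the query strings for transparency.

education_terms = [
    "#highereducation", '"higher education"', "#HigherEd", "HigherEd",
    "#academictwitter", "#AcademicChatter", "#teachertwitter",
    "#University", "University",
]
chatgpt_terms = ["#ChatGPT", "ChatGPT"]

queries = [f"{edu} AND {gpt}" for edu in education_terms for gpt in chatgpt_terms]

assert len(queries) == 18  # 9 higher-education terms x 2 ChatGPT terms
for query in queries:
    print(query)
```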
2.5. Selection Process
Two researchers (MW, CH) conducted searches by hand in X and exported posts into Microsoft Excel; duplicate posts were not exported from X. The searches generated 703 independent posts (Figure 1). After conducting each search, both researchers manually screened posts to identify those matching the eligibility criteria. Any uncertainty regarding inclusion was resolved by a third researcher (RU). After the exclusion criteria were applied (n = 502), 201 posts remained.
2.6. Data Collection and Data Items
Following screening, 201 posts met the inclusion criteria. Data, manually extracted into Microsoft Excel by MW and CH, included the search term combination, post content, and links to external articles. After the initial analysis, to further enrich the dataset, SP conducted a detailed manual extraction of additional user activity data on X, such as post type (initial post or reply) and user identifiers (to determine repeat posters). This facilitated an understanding of engagement patterns and distribution within the dataset. Researchers collected only publicly available information in compliance with X’s terms of service, ensuring that no personally identifiable information was retrieved (X, 2023).
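To make the extraction schema concrete, the record structure below is a minimal sketch; the field names are assumptions introduced for illustration, as the exact Excel column headings are not reported.

```python
# Hypothetical layout of the manually extracted dataset; column names are
# assumptions, since the study reports the data items but not the exact
# Excel headings used.
import pandas as pd

posts = pd.DataFrame([
    {
        "search_terms": "#HigherEd AND #ChatGPT",  # search term combination that retrieved the post
        "post_text": "Example post content ...",    # post content
        "external_link": None,                      # link to an external article, if shared
        "post_type": "initial",                     # "initial" post or "reply"
        "user_id": "user_001",                      # identifier used to spot repeat posters
    },
])

# Example of the engagement-pattern summaries described above: repeat posters.
repeat_posters = posts["user_id"].value_counts()
print(repeat_posters[repeat_posters > 1])
```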
Data were retrieved using this manual method rather than via the X platform’s application programming interface (API) because access to the API services required to retrieve the specific data for the present study is paid (X, 2024a). Additionally, the study period coincided with changes to X’s terms and conditions; manual data collection therefore helped prevent any alterations in data structure or availability and maintained the dataset’s consistency and reliability (X, 2024b).
2.7. Data Analysis
Data analysis was conducted using Microsoft Excel. The dataset is available on Figshare (
Interrater reliability was assessed using Cohen’s Kappa to evaluate the agreement between two coders (MW, CH) on the initial set of codes. The Kappa coefficient was 0.88, indicating strong agreement between the coders (McHugh, 2012). Discrepancies were resolved through discussion with a third researcher (RU). All three researchers (MW, CH, RU) reached consensus to verify inclusion criteria and agree on the initial coding and broader categories before proceeding to the next stage.
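For readers unfamiliar with the statistic, the minimal Python sketch below shows how Cohen’s Kappa can be computed between two coders; the category labels and codings are invented for illustration and are not the study data.

```python
# Minimal sketch of computing Cohen's Kappa between two coders.
# The labels below are invented for illustration only.
from sklearn.metrics import cohen_kappa_score

coder_mw = ["opinion", "advice-seeking", "resource-sharing", "opinion", "resource-sharing", "advice-seeking"]
coder_ch = ["opinion", "advice-seeking", "resource-sharing", "opinion", "advice-seeking", "advice-seeking"]

kappa = cohen_kappa_score(coder_mw, coder_ch)
print(f"Cohen's Kappa = {kappa:.2f}")  # values of 0.80-0.90 indicate strong agreement (McHugh, 2012)
```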
Further inductive coding was employed within each of the three broad categories—opinion, advice-seeking, and resource-sharing—to capture more detailed insights. Each researcher was responsible for coding a specific category: MW coded advice-seeking posts to identify specific challenges or concerns; RU refined the coding of opinion posts to explore the particular aspects of ChatGPT that users found favorable or unfavorable; and CH coded resource-sharing posts by exploring the nature and purpose of the shared resources. The research team then reviewed the refined codes across all three categories, resolved any discrepancies in interpretations, and modified codes where necessary.
Themes and subthemes were generated collaboratively by the research team through iterative discussion and review of the refined codes. These overarching themes allowed for a more nuanced and comprehensive understanding of the data, integrating patterns across the three categories. Themes were clearly defined and named, and quotes (i.e., posts) were selected to represent these themes and sub-themes across the dataset.
3. Results
This study explored university staff’s multifaceted perspectives on the role of GenAI in higher education. The first theme, “Perceptions of GenAI impact on higher education and skepticism towards its management”, had three subthemes: “Profession threat” (n = 22); “GenAI’s learning impact” (n = 21); and “Institutional trust and response” (n = 22). The second theme, “GenAI in assessment: prevention and detection approaches”, contained two subthemes: “Reactive assessment approaches” (n = 16) and “Debating GenAI detection” (n = 30). A future-oriented lens was adopted in the final theme, “Future-focused approaches to GenAI-enhanced learning and assessment”, with three subthemes: “Future perspectives” (n = 27); “GenAI integration and learning optimization” (n = 31); and “Guidance and policy development” (n = 25) (Table 1).
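As a quick arithmetic check, the subtheme frequencies reported above (and in Table 1) sum to 194, matching the number of posts reported as analyzed in the abstract; this assumes each post was assigned to exactly one subtheme.

```python
# Consistency check: the subtheme frequencies from Table 1 sum to the 194
# analyzed posts, assuming each post was coded to exactly one subtheme.
subtheme_counts = {
    "Profession threat": 22,
    "GenAI's learning impact": 21,
    "Institutional trust and response": 22,
    "Reactive assessment approaches": 16,
    "Debating GenAI detection": 30,
    "Future perspectives": 27,
    "GenAI integration and learning optimization": 31,
    "Guidance and policy development": 25,
}
print(sum(subtheme_counts.values()))  # 194
```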
3.1. Perceptions of GenAI Impact on Higher Education and Skepticism Towards Its Management
3.1.1. Profession Threat
There were many posts from university staff expressing negative opinions towards the use of GenAI at university. Many users expressed concerns about GenAI posing a threat to the academic profession. Some users had stronger opinions, asserting that GenAI could replace and damage the reputation of university staff and, by extension, the universities themselves. One user painted a more specific picture, envisioning universities evolving into a cycle of AI-generated assignments, submissions, and grading.
“Well ChatGPT just replaced university professor’s, and also replaced university itself. #liberalarts #university #AI #ChatGPT #artificalintelligence”
“Higher education is going to be hilarious. For most part: ChatGPT generated assignments, ChatGPT generated submissions, ChatGPT generated grading. Timepass maximum.”
Conversely, others were more skeptical, contending that GenAI should not have the power to undermine higher education. Further, some staff members communicated their apprehensions regarding an oversimplified view of teaching responsibilities, arguing that several teaching tasks cannot feasibly be outsourced to GenAI.
“Really depressing how many people in higher education are saying things like we need to “work with” stuff like ChatGPT—how f***ing silly and naive can you get. It’s a computer program, not God”
Some users questioned the rationale behind integrating GenAI in higher education, wondering, for instance, why it is advocated despite its limitations, including inaccuracies and ethical and logistical challenges. One user thought that advocating for AI was immoral.
“So if is not accurate, knowledgeable or “intelligent” I don’t understand why so many institutions (universities, research centers, ministries, etc.) and as well individuals (researchers, teachers, etc.) are discussing chatGPT and pedagogy and how to integrate it in education. Why?”
One explanation for the previously mentioned negative reactions to GenAI could be a lack of knowledge and understanding of such technologies, as alluded to by one user:
“Talking to my 11yo about #chatGPT last night and reasons why I didn’t think he should use it. … Wondering whether my concerns come from an academic perspective or just from my lack of understanding. I think I need to increase my knowledge in order to support him with it’s potential.”
3.1.2. GenAI’s Learning Impact
Some conveyed apprehension about GenAI’s ability to write coherent essays or accurately explain conceptual ideas. Some users expressed critiques directed at students who are dependent on GenAI, contending that such reliance fundamentally undermines the essence of university education. These users underscored the imperative for students to engage more profoundly in their learning. Some resources, shared via posts, raised concerns over skill development and missed opportunities to develop critical thinking and problem-solving skills.
“According to this guy’s argument, any chatgpt essay on the topic would be a perfect dissertation.”
“Hot Take. if you need ChatGPT or any other “fake” “A.I.” to write your papers, you need to drop out of University. It’s not for you. Leave higher education to those who actually want to think.”
Extending this perspective on how GenAI could disrupt learning and higher education, one staff member went so far as to advocate for self-learning via GenAI instead of investing money in higher education.
“Take this from someone who has spent almost $100k on ‘higher education’… Unless you want to be a doctor or a lawyer or something that demands a specific degree, you are MUCH better off with self-education. Use #youtube #chatgpt #books #mentors to learn.”
However, other university staff emphasized the role of GenAI in facilitating students’ learning experiences. A few posts underscored the significance of GenAI as a facilitator of student learning, advocating for the use of GenAI in teaching to enhance learning without replacing higher-order thinking tasks, such as critical thinking. Numerous resources shared via posts also illustrate that GenAI can foster student creativity and personalize learning experiences.
“[I]t is crucial to use ChatGPT ethically and responsibly, ensuring that it serves as a facilitator for learning rather than a replacement for critical thinking or a means to simply boost efficiency.” #Academia #AcademicChatter #epitwitter #AcademicTwitter
3.1.3. Institutional Trust and Response
There were some posts in which staff expressed confusion about how their own universities, or universities in general, were handling the rising use of GenAI in higher education. Some suggested universities were saying one thing (i.e., expressing the positives of GenAI) while, in reality, doing another (such as banning its use). This extended to skepticism about whether universities were adequately addressing the challenge of adjusting assessments in response to increased GenAI use.
“[university] President claims that ChatGPT ‘may be one of the best things to happen to Universities in a long time...’ ChatGPT is now banned on [institution name] University wifi.”
“I think universities need to be taking a serious look at their current assessment models because I don’t think they are sustainable in a world with ChatGPT. Unfortunately I don’t see many signs they are doing the hard thinking necessary.”
Other posts conveyed opinions that universities are simply not acting in a timely manner to prepare for a higher education environment with GenAI. Others were more inquisitive, asking what conversations about handling GenAI were taking place at other universities.
“As fast at #AI is evolving, #university can’t just sit and watch”
“Yes, #ChatGPT/generative #AI is impacting universities. But debates are super insular (at least in the US and Germany). How are these conversations taking shape in other parts of the world? Eager for your insights. #HigherEd @OpenAcademics @AcademicChatter @GlobalYAcademy”
Several posts argued that institutions should be proactive in providing guidelines, training, and advice to students and staff so that GenAI can be integrated alongside teaching and assessment. Meanwhile, a different user expressed impatience with universities for not having already implemented guidelines around the incorporation of GenAI in higher education.
“Why are so many universities taking a ‘wait-and-see’ approach with ChatGPT? It’s time already to come up with guidelines. #education #ai #university #students”
3.2. GenAI in Assessment: Prevention and Detection Approaches
3.2.1. Reactive Assessment Approaches
The use of GenAI by students to cheat on assignments has become a concern for many university staff members. Several posts sought guidance and shared resources on preventing students from utilizing GenAI in assignments due to issues with academic integrity. One user posted an article focused on maintaining academic integrity and exploring possible ways to prevent dishonesty. The article concludes by providing several strategies to achieve this through policies and procedures, training, and the use of detection tools.
“I had my first experience with #ChatGPT cheating in my intro stats class. This program is wild and will definitely have huge implications for #HigherEd. Cheating just leveled up in a big big way.”
“As we ease into finals, how is AI (ChatGPT) showing up in student work and how are schools/educators approaching it? I’m having a hard time navigating it as a powerful tool and also what it means for me to evaluate “student” work submitted in this way? #AcademicTwitter”
“AI chatbots and academic ethics is one of the hottest topics in #highered and #edtech. Here’s an article that you might find useful! Chatting and cheating: Ensuring academic integrity in the era of ChatGPT”
Some posts asked which types of assessments are less susceptible to GenAI use. However, the effort required to modify assessments was a concern for many users. Some staff reactions were to revert to invigilated in-person exams, supported by posted resources outlining alternative assessment methods. These alternatives, such as oral examinations, group presentations, and reflections, would prevent students from relying on ChatGPT to complete their work. However, some posts suggested that the issues now being discussed in the context of GenAI use are actually highlighting pre-existing problems within higher education, such as non-inclusive assessments and practices that fail to support deeper learning approaches or higher-order thinking.
“University educators: how are you dealing with Chat GPT in your first year classes? What kinds of assignments are you creating? How are you adjusting your teaching?”
“I don’t understand why Universities don’t just switch to in person tests and use at home assignments as a voluntary exercise. That way, there’d be little incentive to cheat—you cannot use ChatGPT during the exam…”
“Universities are freaking out over the use of chatGPT. In reality, this highlights the fact that most work is “busy work” and doesn’t incentivize critical thought.”
3.2.2. Debating GenAI Detection
Some staff seemed to endorse GenAI detection of students’ work, emphasizing its importance in enabling accurate assessment of students’ work and learning. Advice on detecting AI-generated assignments also emerged frequently. However, other posts highlighted the unreliability and inaccuracies of some AI-detection systems, and some users asked how others were resolving this issue.
“Too right we have access to tools to determine students’ own-words from copied ideas without citations, proper acknowledgements in their written assignments. #HigherEd #plagiarism #ChatGPT #AI”
“You can now bypass the AI detecting tools used by schools and universities. I feel sorry for teachers and lecturers having to face this challenge. #AI #ChatGPT #homework”
“How do you handle situations where you strongly suspect students are using ChatGPT to write their papers? Is there a way to check this, like a
Other staff expressed opposition to GenAI detection. The ethics of GenAI detection were called into question by one user, who, along with another, advocated for a broader discussion on the subject within universities, beyond the mere focus on detection. A post from another staff member emphasized the need to refrain from laying the blame on students and presuming an inherent desire to cheat.
“And of course, when it comes to addressing ChatGPT in higher education, there are other things we can do apart from detection!”
“Another chance to point out that students (collectively) are not out to cheat the system. #HigherEducation #ChatGPT #AIinED @UBC”
3.3. Future-Focused Approaches to GenAI-Enhanced Learning and Assessment
3.3.1. Future Perspectives
One post shared a paper contending that higher education institutions should embrace the challenges that GenAI is bringing and use it to deepen and broaden their knowledge and understanding. One user, reflecting on GenAI’s transformative influence in the workforce, predicted ground-breaking changes in the higher education sector. Supporting this, numerous posts emphasized the importance of embracing GenAI as a tool in higher education. Some users argued that GenAI will continue to exist, much like the internet; therefore, integrating GenAI into higher education is necessary. One user cautioned that universities risk adversely impacting admission rates if they ban such technologies.
“How can the curriculum embrace AI and new technology
“Educators should not give into the moral panic surrounding generative #AI. Instead, they should view #ChatGPT and other models as an opportunity to innovate the field of higher education…”
“#Universities that #ban #ChatGPT may be #hurting their own admissions, according to a #study
There were also many positive opinions shared by university staff about using GenAI at university in the future. Regarding the impact on their work, staff noted how GenAI could expedite various tasks, such as generating teaching materials and facilitating research processes. Additionally, other users outlined how GenAI can resolve issues in higher education, including swiftly addressing student practical/administrative queries and potentially accelerating marking in the future. Numerous resources shared via X demonstrated that GenAI could enhance teaching by aiding in planning course content and assisting with the grading and feedback of assignments.
“Is #ChatGPT here to stay in higher education? College instructors are becoming dependent on ChatGPT, using it to plan lessons and give feedback to students about their work. These instructors perceive ChatGPT, as their students likely do, as a timesaver.”
“Idea for ChatGPT: Train it on all of the syllabus at a university to create a virtual counselor for basic course questions.
- Find relevant courses from queries
- Suggest plans for completing a degree
- Provide information on deadlines
- Help with enrollment process”
“Are educators using ChatGPT to write lesson plans?”
3.3.2. GenAI Integration and Learning Optimization
Several posts suggested that the rise of GenAI should be seized as an opportunity to innovate within the higher education sector, including by supporting higher-order thinking tasks and integrating GenAI into teaching, such as prompting students to critically evaluate its outputs. Another user suggested embedding GenAI literacy into the curriculum.
“It’s really funny to watch schools and universities banning ChatGPT; they rather have kids memorize information that’s a commodity than start people teaching critical thinking of inputting into AI products.”
“to avoid reactionary response, education systems should update digital literacy programs and include AI literacy. #AI #ChatGPT #ArtificialIntelligence #educacion #universities #schools.”
Some staff were seeking advice on how to integrate GenAI into their university teaching, including more creative ways to deliver the university curriculum using GenAI and how to incorporate GenAI into assessments. Staff shared resources, which included tips, strategies, and plans for integrating GenAI into teaching and adapting institutional practices to use GenAI. This included guidance on effective prompts for using GenAI tools, such as ChatGPT, and its applications in teaching.
“#AcademicTwitter #AcademicChatter looking forward to fall semester, I’m wondering if anyone has thought of creative ways to use #ChatGPT for good instead of evil? ie how can we see it as a useful tool in our classrooms instead of something to panic about?”
“Using AI to make teaching easier and more impactful: five strategies and prompts that work By @emollick “Despite decades of hype from VCR classes to Massive Online Courses, technology has not replaced teaching.”
Some staff suggested students could use GenAI to assist with various stages of writing, from editing their work to receiving feedback. Another user suggested integrating GenAI into the curriculum and reducing written assignments in favor of discussions to enhance learning.
“#ChatGPT aside, this approach could be beneficial for #teaching stages of #writing, including drafting, editing, rewriting, etc! Allowing #students to see our own drafts in this way might also demystifying the writing process! #teachertwitter #highereducation #ELT”
“At [institution name] University, we are trying to instill similar understanding about the responsible use of ChatGPT. This may need new teaching and learning philosophy with less write ups and more discussions. Will take sometime for teachers who resist new ideas, but it needs to change.”
However, other posts conveyed significant concerns about the additional time and learning required to assimilate GenAI into current teaching methodologies at the university level.
“I wonder how many hours those of us who teach at universities have sunk into our new roles as ChatGPT Sherlock Holmeses.”
3.3.3. Guidance and Policy Development
Some posts sought to understand how to advise and guide students on correct GenAI use. One post requested guidance on making students aware of the issues and biases accompanying GenAI use and determining appropriate usage times. Another inquired about university staff’s experiences in responding to students’ use of GenAI. Other posts took a broader approach, seeking advice on crafting policies to guide GenAI use, to which students could refer.
“Any colleges/universities writing a policy for how ChatGPT may be used by students? Or is your department doing so? Or are we all going rogue with our expectations? #AcademicChatter”
“Love the “find the biases” suggestion! “How can we guide our students to not only recognize these blind spots but also incorporate a greater multiplicity of viewpoints in their scholarship?” #chatgpt #edutwitter #edchat #educoachchat #educoach #highered”
“Would love to hear accounts from professors, educators, and students about your university’s response or lack thereof to ChatGPT/AI text-generation technology in coursework: how students have been using it already, how you’ve been trying to respond, etc. DMs open!”
A guide from the University of Rhode Island, shared in a post, details using ChatGPT for individualized learning in professional development and higher education. It offers advice on effective prompts and additional resources to help teachers and students explore and utilize ChatGPT.
4. Discussion
This study analyzed posts from the social media platform X to explore higher education staff’s perspectives on students’ use of GenAI in an academic setting. Several critical themes and sentiments were generated from the analysis. The perception of GenAI as a potential threat to, or replacement for, the teaching profession was evident, echoing concerns voiced in recent studies (Chan and Tsi, 2023). Staff expressed apprehensions about the broader impact on student learning and the wider higher education ecosystem, aligning with concerns about reduced human involvement in various professions due to AI advancements (Bessen, 2017).
Posts revealed mixed opinions about GenAI’s effects on learning. Whilst some feared GenAI could weaken skills and critical thinking, similar to past concerns about technology hindering innate learning (Daniel, 2015), others saw it as a tool to enhance creativity and personalize education, echoing prior studies on GenAI’s educational potential (Firat, 2023). Therefore, it is crucial to implement GenAI carefully to maximize its benefits and avoid harmful dependencies.
Our study also highlighted a need for improved GenAI literacy among staff, a concern previously highlighted in the broader context of technology in education (Iqbal et al., 2022). There is a clear call for more extensive training, support, and resources to ensure that university staff are equipped to navigate and utilize AI tools competently (Lo, 2023). However, many posts expressed distrust in their institution’s current response to GenAI.
Yet, amidst the apprehensions, there were optimistic sentiments. GenAI’s integration into higher education could foster profound dialogues about education’s evolution, especially in enhancing higher-order thinking tasks like critical thinking, which are regarded as crucial for the future workforce (Chiu et al., 2023).
Our analysis revealed that the rapid evolution of GenAI necessitates prompt action by universities (Kelly et al., 2023). Establishing guidelines for GenAI’s ethical use in assessments is also important (Rajabi et al., 2023). Furthermore, with many assessments susceptible to AI automation, the push towards potentially non-inclusive practices such as in-person exams mirrors concerns raised in prior research (Smolansky et al., 2023; Upsher et al., 2024). More inclusive alternatives, including embedding GenAI in assessments and critically evaluating its abilities, could be a favorable option (Acar, 2023; Upsher et al., 2024).
Detection of GenAI remains contentious, with prior studies echoing the debate between its value versus the potential for fostering mistrust between staff and students (Dobrin, 2023).
Underfunding in the education sector and the potential benefits of GenAI for administrative tasks have been previously documented (Chiu, 2023; Muranga et al., 2023), reinforcing a potential need to embrace GenAI more comprehensively. However, caution is needed to integrate GenAI into pedagogical practices pragmatically, without inundating staff or diluting academic quality.
4.1. Strengths and Limitations
There is limited prior work on university staff’s viewpoints on GenAI, underscoring the novelty and relevance of this study.
Our analysis enabled us to capture a moderate sample of 194 posts, not limited to a specific geographical area. By focusing on English-language posts, we ensured consistent linguistic analysis, offering a concentrated view of prevailing sentiments.
However, a limitation of this study is the lack of contextual and disciplinary information, such as the country of origin or professional background of the users. This absence limits the ability to assess the representativeness of the dataset and may influence the interpretation of findings, as perspectives may vary across cultural and disciplinary contexts. Additionally, restricting the dataset to English-language posts may have excluded valuable insights from non-English-speaking university staff, whose perspectives on GenAI could differ based on regional policies, institutional norms, and broader sociocultural attitudes toward AI in education.
Another limitation of this study is the short timeframe of data collection (April–July 2023), which, while capturing timely insights into staff perceptions of ChatGPT, may not reflect longer-term or evolving attitudes. Given the rapid advancements in generative AI and its increasing integration into educational contexts, staff perceptions are likely to shift as institutions develop clearer policies, guidance, and practical applications for these tools. This temporal scope, combined with language constraints, may have resulted in an unrepresentative dataset, as discourse during this period could differ from broader staff perspectives.
4.2. Recommendations for Policy and Practice
The findings of this study highlight several actionable steps that universities can take to address the challenges and opportunities associated with the integration of GenAI in higher education. First, institutions should prioritize the development of clear and comprehensive policies that define the ethical use of GenAI. These policies should be co-created with input from staff and students to build trust and reflect the practical challenges faced by educators. Key areas for inclusion include academic integrity, equitable access to GenAI tools, and data privacy. Transparent communication of these policies across the institution is essential to ensure widespread understanding and adherence.
In addition to policy development, universities should implement targeted training programmes to improve GenAI literacy among staff and students. For staff, such training should focus on practical applications of GenAI in teaching and assessment, helping them to integrate these tools effectively while understanding their limitations. For students, training should emphasize the responsible use of GenAI and the critical evaluation of AI-generated outputs, encouraging a balanced approach to its adoption.
Redesigning assessments to reduce dependence on GenAI while maintaining inclusivity and academic integrity is also essential. Innovative assessment methods, such as open-book exams, oral assessments, and reflective essays, can encourage genuine student engagement and the development of higher-order thinking skills. Furthermore, establishing mechanisms to regularly collect feedback from staff and students will provide valuable insights into their experiences with GenAI, allowing institutions to refine policies and practices iteratively.
Finally, universities should invest in robust support structures to ensure that staff feel adequately equipped to navigate the integration of GenAI. This includes providing dedicated resources, ongoing professional development opportunities, and accessible support teams. By adopting these measures, universities can create an environment that maximizes the potential benefits of GenAI while addressing the challenges it presents, ensuring its integration is ethical, effective, and aligned with academic values.
4.3. Future Research
Our analysis has paved the way for deeper explorations into the integration of GenAI in student assessments from the perspective of university staff. To gain more contextual and disciplinary information, future research could employ methods that allow for the inclusion of such metadata, such as surveys or interviews with platform users. Additionally, integrating data from multiple social media platforms, such as LinkedIn or academic forums, could provide a broader and more diverse range of perspectives, as different platforms attract distinct user demographics and professional communities. Combining social media analysis with other qualitative and quantitative methods, such as follow-up surveys or interviews, could provide deeper insights into how perceptions evolve in response to policy shifts, pedagogical developments, and real-world experiences with ChatGPT in educational settings. A multi-method approach incorporating diverse data sources would enhance representativeness and generalizability while offering a more nuanced and comprehensive understanding of the ongoing integration of AI in higher education.
Future research should involve collaboration between students and staff as this is crucial in ensuring that GenAI is integrated in a meaningful and effective manner that addresses both pedagogical goals and student needs.
Expanding the scope of the study to include posts in other languages or from different global regions would offer a richer, cross-cultural understanding of the topic. This could be achieved by incorporating multilingual data collection and analysis, leveraging machine translation tools or involving researchers fluent in other languages to ensure accuracy and cultural sensitivity.
Additionally, a longitudinal approach, tracing posts over a longer duration, can provide insights into how staff attitudes change over time as institutions and educators gain more experience with GenAI.
Lastly, understanding the training and support structures available to staff working with GenAI tools can reveal if they feel adequately prepared and suggest areas for further development. Such extended research can provide a more holistic view and address the multi-faceted implications of incorporating GenAI in educational assessments.
4.4. Conclusions
This study provides novel insights into university staff’s perceptions of ChatGPT in academic settings by analyzing naturally occurring discussions on X (formerly Twitter). Unlike previous research that relies on interviews or surveys, this study captures real-time, unsolicited perspectives, offering a unique window into how higher education professionals discuss, debate, and engage with GenAI in practice.
Our findings contribute to ongoing discussions about the future of AI in education, institutional policy-making and pedagogical adaptation. We identified three key themes: concerns about GenAI’s potential impact on academic integrity, staff roles, and student learning; debates about AI in assessments, including prevention, detection, and rethinking academic tasks; and forward-looking perspectives on integrating GenAI meaningfully into learning and assessment. These insights are timely and critical, given the rapid development and adoption of AI in higher education.
This study also underscores the urgent need for AI literacy training and institutional guidance as many staff members expressed uncertainty about how to navigate GenAI’s challenges and opportunities. Universities must move beyond reactive measures and instead establish clear, evidence-based policies that support both staff and students in the responsible use of AI tools. Ensuring that these policies and practices are inclusive and equitable is particularly important, so that AI integration does not exacerbate existing disparities in access to education or support structures.
As GenAI continues to shape education, universities must engage proactively in these discussions, implement informed strategies, and ensure AI integration aligns with pedagogical goals, ethical standards, and principles of inclusivity. By providing empirical evidence of current staff concerns and aspirations, this study offers a foundation for institutions to develop AI policies and teaching practices that are responsive to educators’ needs. This study lays the groundwork for further interdisciplinary research, policy development, and staff-student collaboration in navigating AI’s role in higher education.
Author Contributions: Conceptualization, M.W., C.H. and R.U.; methodology, M.W., C.H., S.P. and R.U.; formal analysis, M.W., C.H. and R.U.; writing—original draft preparation, M.W., C.H. and R.U.; writing—review and editing, M.W., C.H., S.P. and R.U.; supervision, R.U. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Ethics Committee of King’s College London (ref: LRS/DP-22/23-36588; approved 04 April 2023).
Informed Consent Statement: Participant consent was waived as the study involved the analysis of publicly available posts on Twitter (now X), where users share content in a public domain. No private or personally identifiable data beyond what is publicly accessible was collected, and the study adhered to ethical guidelines for research using social media data.
Data Availability Statement: The dataset is available on Figshare: (
Conflicts of Interest: The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. Flow diagram of screening and inclusion of posts from X in the final analysis.
Table 1. Themes and subthemes from university staff’s perceptions of generative AI.
Themes | Sub-Themes | Frequency (Number of Posts) |
---|---|---|
Perceptions of GenAI’s Impact on Higher Education and Skepticism towards its Management | a. Profession threat | 22 |
 | b. GenAI’s learning impact | 21 |
 | c. Institutional trust and response | 22 |
GenAI in Assessment: Prevention and Detection Approaches | a. Reactive assessment approaches | 16 |
 | b. Debating GenAI detection | 30 |
Future-Focused Approaches to GenAI-Enhanced Learning and Assessment | a. Future perspectives | 27 |
 | b. GenAI integration and learning optimization | 31 |
 | c. Guidance and policy development | 25 |
References
Acar, O. A. Are your students ready for AI? A 4-Step framework to prepare learners for a ChatGPT world; Harvard Business Publishing: 15 June 2023; Available online: https://hbsp.harvard.edu/inspiring-minds/are-your-students-ready-for-ai (accessed on 10 March 2025).
Amani, S.; White, L.; Balart, T.; Arora, L.; Shryock, K. J.; Brumbelow, K.; Watson, K. L. Generative AI perceptions: A survey to measure the perceptions of faculty, staff, and students on generative AI tools in academia. arXiv; 2023; arXiv: 2304.14415
Baidoo-Anu, D.; Ansah, L. O. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN Electronic Journal; 2023; 7,
Bessen, J. E. AI and jobs: The role of demand; National Bureau of Economic Research: 2017; Available online: https://www.nber.org/papers/w24235 (accessed on 10 March 2025).
Budhwar, P.; Chowdhury, S.; Wood, G.; Aguinis, H.; Bamber, G. J.; Beltran, J. R.; Boselie, P.; Lee Cooke, F.; Decker, S.; DeNisi, A.; Dey, P. K.; Guest, D.; Knoblich, A. J.; Malik, A.; Paauwe, J.; Papagiannidis, S.; Patel, C.; Pereira, V.; Ren, S. … Varma, A. Human resource management in the age of generative artificial intelligence: Perspectives and research directions on ChatGPT. Human Resource Management Journal; 2023; 33,
Cassidy, C. Lecturer detects bot use in one-fifth of assessments as concerns mount over AI in exams. The Guardian; 16 January 2023; Available online: https://www.theguardian.com/australia-news/2023/jan/17/lecturer-detects-bot-use-in-one-fifth-of-assessments-as-concerns-mount-over-ai-in-exams (accessed on 10 March 2025).
Chan, C. K. Y.; Tsi, L. H. Y. The AI revolution in education: Will AI replace or assist teachers in higher education?. arXiv; 2023; arXiv: 2305.01185
Chiu, T. K. F. The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments; 2023; 32,
Chiu, T. K. F.; Xia, Q.; Zhou, X.; Chai, C. S.; Cheng, M. Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence; 2023; 4, 100118. [DOI: https://dx.doi.org/10.1016/j.caeai.2022.100118]
Daniel, B. Big Data and analytics in higher education: Opportunities and challenges: The value of big data in higher education. British Journal of Educational Technology: Journal of the Council for Educational Technology; 2015; 46,
Dhamija, A.; Dhamija, D. Understanding teachers’ perspectives on ChatGPT-generated assignments in higher education. Journal of Interdisciplinary Studies in Education; 2024; 14,
Dibble, M. Schools ban ChatGPT amid fears of artificial intelligence-assisted cheating; Voice of American News: 10 February 2023; Available online: https://www.voanews.com/a/schools-ban-chatgpt-amid-fears-of-artificial-intelligence-assisted-cheating-/6958125.html (accessed on 10 March 2025).
Diddi, P. Organizational twitter use: A qualitative analysis of tweets during breast cancer awareness month; Louisiana State University and Agricultural & Mechanical College: 2015; [DOI: https://dx.doi.org/10.31390/gradschool_theses.4171]
Dobrin, S. I. Talking about generative AI: A guide for educators; 1st ed. Broadview Press: 2023; Available online: https://broadviewpress.com/product/talking-generative-ai/#tab-description (accessed on 10 March 2025).
Firat, M. What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied Learning & Teaching; 2023; 6,
Fu, J.; Li, C.; Zhou, C.; Li, W.; Lai, J.; Deng, S.; Zhang, Y.; Guo, Z.; Wu, Y. Methods for analyzing the contents of social media for health care: Scoping review. Journal of Medical Internet Research; 2023; 25, e43349. [DOI: https://dx.doi.org/10.2196/43349]
Fui-Hoon Nah, F.; Zheng, R.; Cai, J.; Siau, K.; Chen, L. Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research; 2023; 25,
Hadi Mogavi, R.; Zhao, Y.; Ul Haq, E.; Hui, P.; Ma, X. Student barriers to active learning in synchronous online classes: Characterization, reflections, and suggestions. Eighth ACM Conference on Learning @ Scale; Virtual Event, Germany, June 22–25; 2021.
Iqbal, N.; Ahmed, H.; Azhar, K. A. Exploring teachers’ attitudes towards using ChatGPT. Global Journal for Management and Administrative Sciences; 2022; 3,
Jordan, K.; Weller, M. Academics and social networking sites: Benefits, problems and tensions in professional engagement with online networking. Journal of Interactive Media in Education; 2018; 2018,
Kelly, A.; Sullivan, M.; Strampel, K. Generative artificial intelligence: University student awareness, experience, and confidence in use across disciplines. Journal of University Teaching & Learning Practice; 2023; 20,
Kohnke, L.; Moorhouse, B. L.; Zou, D. Exploring generative artificial intelligence preparedness among university language instructors: A case study. Computers and Education: Artificial Intelligence; 2023; 5, 100156. [DOI: https://dx.doi.org/10.1016/j.caeai.2023.100156]
Lo, C. K. What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences; 2023; 13,
McHugh, M. L. Interrater reliability: The kappa statistic. Biochemia Medica; 2012; 22,
Muranga, K.; Muse, I. S.; Köroğlu, E. N.; Yildirim, Y. Artificial Intelligence and underfunded education. London Journal of Social Sciences; 2023; 6, pp. 56-68. [DOI: https://dx.doi.org/10.31039/ljss.2023.6.105]
Petricini, T.; Wu, C.; Zipf, S. T. Perceptions about generative AI and ChatGPT use by faculty and college students. Transformative Dialogues: Teaching and Learning Journal; 2023; 17,
Rajabi, P.; Taghipour, P.; Cukierman, D.; Doleck, T. Exploring ChatGPT’s impact on post-secondary education: A qualitative study. 25th Western Canadian Conference on Computing Education; Vancouver, BC, Canada, May 4–5; 2023.
Sleigh, J.; Amann, J.; Schneider, M.; Vayena, E. Qualitative analysis of visual risk communication on twitter during the Covid-19 pandemic. BMC Public Health; 2021; 21,
Smolansky, A.; Cram, A.; Raduescu, C.; Zeivots, S.; Huber, E.; Kizilcec, R. F. Educator and student perspectives on the impact of generative AI on assessments in higher education. Tenth ACM Conference on Learning @ Scale; Copenhagen, Denmark, July 20–22; 2023; [DOI: https://dx.doi.org/10.1145/3573051.3596191]
Stepanechko, O.; Kozub, L. English teachers’ concerns about the ethical use of ChatGPT by university students; Grail of Science: 2023; [DOI: https://dx.doi.org/10.36074/grail-of-science.17.03.2023.051]
Sullivan, M.; Kelly, A.; McLaughlan, P. ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching; 2023; 6,
Talan, T.; Kalınkara, Y. The role of artificial intelligence in higher education: ChatGPT assessment for anatomy course. International Journal of Management Information Systems and Computer Science; 2023; 7,
Talbot, A.; Ford, T.; Ryan, S.; Mahtani, K. R.; Albury, C. #TreatmentResistantDepression: A qualitative content analysis of Tweets about difficult-to-treat depression. Health Expectations; 2023; 26,
Upsher, R.; Heard, C.; Yalcintas, S.; Pearson, J.; Findon, J. L. Beckingham, S.; Lawrence, J.; Powell, S.; Hartley, P. Embracing generative AI in authentic assessment; challenges, ethics and opportunities. Using generative AI effectively in higher education; Routledge Member of the Taylor and Francis Group: 2024.
Veletsianos, G. Higher education scholars’ participation and practices on Twitter: Scholars’ participation and practices on Twitter. Journal of Computer Assisted Learning; 2012; 28,
Wilkinson, C.; Oppert, M.; Owen, M. Investigating academics’ attitudes towards ChatGPT: A qualitative study. Australasian Journal of Educational Technology; 2024; 40,
Williams, S. A.; Terras, M. M.; Warwick, C. What people study when they study Twitter: Classifying Twitter related academic papers. Journal of Documentation; 2013; 69,
Wu, T.; He, S.; Liu, J.; Sun, S.; Liu, K.; Han, Q.-L.; Tang, Y. A brief overview of ChatGPT: The history, status quo and potential future development. IEEE/CAA Journal of Automatica Sinica; 2023; 10,
X. X API pricing; X: 2023; Available online: https://developer.x.com/en/pricingX (accessed on 10 March 2025).
X. Developer agreement and policy; 2024a; Available online: https://developer.x.com/en/developer-terms/agreement-and-policy (accessed on 10 March 2025).
X. X terms of service; X.Com. twitter-com 2024b; Available online: https://x.com/en/tos (accessed on 10 March 2025).
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
This study aimed to understand university staff’s perspectives and approaches regarding students’ use of generative artificial intelligence (GenAI) in an academic setting. Currently, there is a lack of social media analyses exploring this area. For the present study, a qualitative content analysis was conducted on posts about ChatGPT shared via X (formerly Twitter). This enabled a sample of n = 194 perspectives to be captured. Three main themes were generated: (1) perceptions of GenAI’s impact on higher education and skepticism towards its management; (2) GenAI in assessment: prevention and detection approaches; and (3) future-focused approaches to GenAI-enhanced learning and assessment. Some university staff see GenAI as a threat to their profession and have stressed the need for university guidance. Staff discussed both the positive and negative impacts of GenAI on student learning. Some staff want to prevent the use of GenAI in assessments, whilst others embrace the tool. These findings can inform university guidance on the use of GenAI in the future.
1 South London and Maudsley NHS Foundation Trust, London SE5 8AZ, UK;
2 Department of Psychology, Sport and Geography, University of Hertfordshire, Hertfordshire AL10 9AB, UK
3 Department of Psychology, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London WC2R 2LS, UK;