INTRODUCTION
The need for AI literacy in higher education, particularly for using generative AI in research, is increasingly critical as AI technologies become more integrated into academic environments. AI literacy encompasses understanding AI concepts, data literacy, and the ethical implications of AI use, which are essential for effectively leveraging AI tools in research and education (Baek & Shin, 2021; Schüller, 2022). The rapid development of AI technologies, such as ChatGPT and other generative models, has prompted discussions on their potential to disrupt traditional educational practices and the necessity for academic policies to ensure ethical use and maintain academic integrity (Bennett, 2023; Kovachov & Suchikova, 2023). AI literacy is not only about understanding the technical aspects but also involves developing critical thinking and information evaluation skills to navigate the challenges posed by AI, such as the generation of false information and ethical concerns (Kovachov & Suchikova, 2023). In higher education, AI can enhance various operations, including student admissions, library management, and personalized learning, thus necessitating a foundational understanding of AI to optimize these processes (Kwasi & Halil, 2019). Moreover, AI literacy is crucial for students and researchers to effectively use AI-based tools like GANs, which can generate synthetic data for educational research, thereby improving data quality and research outcomes (Bethencourt-Aguilar et al., 2023). The integration of AI in higher education also requires a shift in teaching and learning paradigms, where AI can personalize education, automate administrative tasks, and provide immediate feedback, thus preparing students for future careers in a data-driven world (Slimi, 2023). However, the ethical implications of AI use, such as issues of trust, interpretability, and the potential for fake science, highlight the need for comprehensive AI literacy programs that address these concerns and promote responsible use of AI technologies (Faber, Gasparini, & Grote, 2022). The development of AI literacy should be a transdisciplinary effort, embedded across curricula to ensure that all students, regardless of their major, can engage with AI technologies in a conscious and ethically sound manner (Schüller, 2022). Furthermore, AI literacy can empower students to engage in meaningful partnerships with educators, enhancing their feedback literacy and overall learning experience (Tubino & Adachi, 2022).
As AI continues to evolve, higher education institutions must adapt by integrating AI literacy into their programs, ensuring that graduates are equipped with the necessary skills to navigate the complexities of AI in their professional and academic pursuits (Kim, Kim, & Ahn, 2023; Slimi, 2023). This integration should also involve collaboration between academic institutions and AI developers to create safer and more responsible AI models, fostering an environment where AI can be used effectively and ethically in research and education (Kovachov & Suchikova, 2023). The ETHICAL protocol addresses generative AI tools, defined as systems that create text, code, or other outputs mimicking human-generated content. This distinguishes it from discriminative AI (e.g., predictive models), which falls outside this protocol's scope. The acronym ETHICAL succinctly encapsulates the essential steps (Establish, Thoroughly, Harness, Inspect, Cite, Acknowledge, Look-over) guiding researchers toward ethically sound and transparent use of generative AI tools. This uppercase acronym usage reinforces the protocol's distinctive identity and practical utility. AI literacy is not merely a background concept but a foundational competence for implementing the ETHICAL protocol. Each step of the protocol—from establishing research purposes to verifying AI outputs—depends on researchers' ability to critically evaluate AI tools, understand their limitations, and apply ethical reasoning in context. Thus, AI literacy enables users to operationalize the protocol's guidance effectively and responsibly. The ETHICAL protocol also builds upon long-standing traditions of research ethics and responsible innovation, including frameworks addressing researcher accountability, technology governance, and data stewardship (Floridi & Cowls, 2022; Resnik, 1998). These foundational perspectives provide the ethical backbone upon which contemporary generative AI-specific policies expand, positioning the protocol as both historically grounded and forward-looking.
The lack of clear protocols for disclosing the use of generative AI in research is a multifaceted issue that intersects with transparency, ethical considerations, and intellectual property rights. The CONSORT-AI extension highlights the necessity for transparency in AI interventions, recommending detailed descriptions of AI components in clinical trials to ensure clarity and reduce bias (Liu et al., 2020; Liu et al., 2020). However, the broader AI research community faces challenges in transparency and reproducibility, as seen in the case of McKinney et al., where the absence of detailed methods and code diminished the scientific value of their AI research (Haibe-Kains et al., 2020). This issue is compounded by the opaqueness of AI systems, where users and even developers often do not fully understand how algorithms make decisions, underscoring the need for transparency at algorithmic, interaction, and social levels (Haresamudram, Larsson, & Heintz, 2023; Rubin, 2020). Furthermore, the ethical implications of AI, such as bias, privacy, and accountability, demand careful consideration, especially as AI systems become more integrated into society (Gaud, 2023). The ethical use of AI also involves informed consent, which is complicated by the need for users to understand the technology sufficiently to make informed decisions, a challenge exacerbated during public health emergencies where AI technologies are rapidly deployed (Pickering, 2021). In the context of generative AI, which can produce novel works by imitating existing human creations, there are significant concerns about authorship and the unauthorized use of existing works, challenging traditional intellectual property rights (Smits & Borghuis, 2022). The potential for digital discrimination further complicates the landscape, as AI decision-making can lead to unfair treatment based on personal data, necessitating transparency to address these ethical and legal concerns (van Nuenen et al., 2020). The investigation of randomized controlled trials involving AI has shown that reporting and methodological quality need improvement, with many trials failing to report key items such as funding and implementation, which are crucial for transparency (Wang et al., 2021). Against this backdrop, this paper aims to introduce a proposed protocol for the responsible use of generative AI for research purposes in higher education.
Previous research has emphasized that ethical dilemmas associated with generative AI arise most acutely when its technical functions intersect with institutional academic arrangements, highlighting the inadequacy of universal checklists and the necessity for institutionally grounded ethics principles (Jeon, Kim, & Park, 2025). Similarly, Sabbaghan and Eaton (2025) highlighted the evolving notions of authorship and originality, underscoring the importance of human oversight to guide generative AI use ethically. In alignment with these perspectives, our proposed ETHICAL protocol extends beyond technical guidelines, incorporating educational imperatives, institutional governance, and broader societal impacts (Eldakar, Shehata, & Ammar, 2025; Xiao et al., 2025).
Recent literature underscores evolving ethical concerns and pedagogical implications associated with generative AI, highlighting the necessity for updated, comprehensive frameworks. For example, Nguyen outlines crucial ethical and pedagogical principles for AI integration into education (Nguyen, 2025), emphasizing transparency and user accountability. Similarly, Dabis and Csáki specifically analysed policy responses from higher education institutions to generative AI (Dabis & Csáki, 2024), emphasizing the urgency for robust, institution-specific guidelines. These contemporary perspectives further substantiate the need for the ETHICAL protocol, addressing real-world concerns effectively and responsively (Laine, Minkkinen, & Mäntymäki, 2025; Xiao et al., 2025).
Despite increasing use of generative AI in higher education research, a clear and comprehensive ethical protocol remains missing, particularly one that addresses both the technical functionalities of generative AI and broader ethical considerations such as intellectual property, transparency, and reproducibility (Jeon, Kim, & Park, 2025; Sabbaghan & Eaton, 2025). Thus, the present study aims to address this gap by developing a comprehensive protocol—the ETHICAL protocol—to guide researchers in responsibly using generative AI tools in higher education settings.
DEVELOPMENT OF ETHICAL PROTOCOL
The ETHICAL protocol—an acronym for the steps Establish your purpose, Thoroughly explore options, Harness the appropriate tool, Inspect and verify output, Cite and reference accurately, Acknowledge AI usage transparently, and Look over publisher's guidelines—is the culmination of a project supported by the Qatar National Research Fund (QNRF), administered by Qatar University (see Acknowledgments). The development of the protocol proceeded through several methodical stages.
First, the research team conducted a scientometric review to identify existing trends in the responsible use of generative AI for research purposes (Qadhi et al., 2024a). Second, a systematic review employing qualitative synthesis was undertaken to explore researchers' experiences concerning the ethical and responsible use of generative AI in research (Qadhi et al., 2024b). Third, a systematic review of textual evidence was performed based on policy analysis of 74 documents sourced from authorities, universities, publishers, and publication manuals. Selection of the 74 documents followed purposive sampling to ensure representation from each stakeholder category—authorities, universities, publishers, and style manuals—rather than privileging prestige alone. While top-ranked universities and major publishers were prioritized for their policy leadership and transparency, documents from regional and open-access institutions were also included to reflect diverse academic contexts. Nevertheless, we acknowledge that future work should further expand sampling to include smaller institutions and developing regions to enhance representativeness. See reported details in Alduais et al., 2025. This third stage was particularly critical in formulating the ETHICAL protocol, as it provided insights into existing policies and guidelines recommended by stakeholders. These findings were instrumental in guiding researchers and regulating the use of generative AI for research purposes.
Our policy analysis followed a systematic four-phase process. First, we identified 74 documents through purposive sampling of: (1) governmental bodies with AI policy mandates (n = 10), (2) top-50 QS-ranked universities with published guidelines (n = 40), (3) publishers representing >80% (n = 18), and (4) academic style manuals (n = 6). Documents were coded using a hybrid deductive-inductive framework, with deductive codes based on established AI ethics principles (e.g., transparency) and emergent codes from iterative review (e.g., “attribution granularity”). Dual independent coding achieved high reliability (κ = 0.82), with discrepancies resolved through consensus panels. Member checking with five policy authors ensured interpretive validity.
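For illustration only, the short sketch below shows how an inter-coder agreement statistic of the kind reported above (Cohen's κ) can be computed; the category labels and the ten example codes are hypothetical placeholders rather than excerpts from our actual coding records.

```python
# Illustrative sketch: computing Cohen's kappa for two coders' categorical codes.
# The categories and example labels below are hypothetical, not the study data.
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Example: codes assigned by two independent coders to ten policy excerpts.
coder_1 = ["transparency", "accountability", "attribution", "transparency",
           "privacy", "accountability", "transparency", "attribution",
           "privacy", "transparency"]
coder_2 = ["transparency", "accountability", "attribution", "accountability",
           "privacy", "accountability", "transparency", "attribution",
           "privacy", "transparency"]
print(f"kappa = {cohen_kappa(coder_1, coder_2):.2f}")  # prints: kappa = 0.86
```

Kappa values above roughly 0.80 are commonly interpreted as strong agreement, the threshold that the reported value of 0.82 exceeds.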
SCOPE OF ETHICAL
The ETHICAL protocol is designed to inform researchers in the higher education sector about acceptable practices for the ethical and responsible use of generative AI. This protocol is the outcome of an inductive analysis of 74 policy documents sourced from 10 authorities (Chinese National Information Security Standardization Technical Committee, 2023; European Commission, 2024; Government of Canada, 2024; Lodge, 2024; Organisation for Economic Co-operation & Development, 2024; PROEDUCA Group, 2024; Saudi Data & AI Authority, 2024; The Russell Group of Universities, 2023; The United Nations Educational Scientific & Cultural Organization, 2023; US National Science Foundation, 2023), 40 universities (Ain Shams University, 2024; Cairo University, 2023; Charles Sturt University, 2024; Charles University Prague, 2024; Chulalongkorn University, 2023; Copenhagen Business School, 2023; Erasmus University Rotterdam, 2024; Harvard University, 2024; Hong Kong Polytechnic University, 2023; Imperial College London, 2024; Massachusetts Institute of Technology, 2024; Michigan State University, 2024; National Taiwan University, 2024; National Tsing Hua University, 2024; National University of Colombia, 2024; Okan University, 2023; Purdue University, 2024; Stanford University, 2023; Swiss Federal Institute of Technology Zurich, 2024; The Open University of Cyprus, 2024; The University of Melbourne, 2024; The University of Tokyo, 2023; Universiti Malaysia Pahang, 2023; University College Cork, 2024; University College Dublin, 2024; University College London, 2024; University of Cambridge, 2024; University of Cape Town, 2023; University of Geneva, 2024; University of Gent, 2024; University of Glasgow, 2024; University of Groningen, 2024; University of Helsinki, 2024; University of Hong Kong, 2024; University of Oslo, 2024; University of Oxford, 2024; University of Queensland, 2024; University of Tartu, 2024; University of Toronto, 2024; Vilnius University, 2024), 18 publishers (American Association for the Advancement of Science, 2024; Cambridge University Press, 2024; Elsevier, 2024; Emerald Publishing, 2024; Frontiers Media, 2024; Journal of the American Medical Association, 2024; Koller et al., 2024; Multidisciplinary Digital Publishing Institute, 2023; Nature Portfolio, 2024; Oxford University Press, 2024; Proceedings of the National Academy of Sciences, 2024; Public Library of Science, 2024; Sage, 2024; Springer Nature, 2024; Taylor & Francis, 2024; The Association for Computing Machinery, 2023; The Institute of Electrical & Electronics Engineers, 2024; Wiley, 2024), and 6 publication manuals (American Psychological Association, 2024; Modern Language Association of America, 2024; The Chicago Manual of Style 18th edition text, 2024; The Queensland University of Technology, 2024; The University of Western Australia, 2024; Victoria University, 2024). Its goal is to guide researchers in using generative AI responsibly, delineating what is considered acceptable and unacceptable based on the perspectives of the involved stakeholders—namely, authorities, universities, publishers, and publication manuals.
Academic researchers engage in proposing solutions to real-world problems (applied sciences) or in generating new knowledge (basic sciences). In both instances, it is imperative to ensure the accuracy and quality of the scientific work, uphold academic integrity, and avoid research misconduct. The ETHICAL protocol (Figure 1) provides researchers with guidance on the responsible use of generative AI, elucidating the requirements to avoid research misconduct and advising on how to optimally leverage existing generative AI tools and platforms.
[IMAGE OMITTED. SEE PDF]
While research disciplines vary in their methodologies, the core challenges of using generative AI—including transparency, accountability, and output validation—transcend field boundaries. The ETHICAL protocol provides foundational principles applicable across disciplines, while allowing flexibility for field-specific adaptations in implementation. This universal approach ensures consistency in addressing ethical concerns while recognizing that individual research teams may supplement these guidelines with domain-specific best practices. It should be noted that the pilot workshops served as exploratory validation, aiming to test feasibility and clarity rather than to generalize statistically. To build on this foundation, future research will implement large-scale, cross-disciplinary evaluations—including multiple universities and diverse academic levels—to measure protocol reliability, usability, and long-term adoption outcomes. In addition, a further ethical dimension concerns data privacy and the protection of proprietary or sensitive research data when interacting with third-party AI platforms. The ETHICAL protocol explicitly advises that researchers verify the data-handling policies of any AI service, avoid uploading identifiable or confidential datasets, and comply with institutional and legal data-protection requirements such as the GDPR or equivalent national frameworks.
HOW TO USE THIS PAPER
The ETHICAL protocol comprises seven headings and a total of nine items (See Figure 2). Each heading is accompanied by at least one item. The structure of the protocol begins with a concise instruction that outlines the main purpose of each item, followed by relevant examples, and concludes with an elaborated explanation for proper utilization. Notably, the protocol integrates AI literacy, reflecting our belief that awareness is crucial for the responsible use of generative AI. This integration is followed by guidance on exploring existing generative AI tools and platforms, including an analysis of their advantages and disadvantages. The protocol subsequently provides practical instruction on practicing and training in the use of these tools. It culminates with guidance on utilizing generative AI for research purposes, encompassing appropriate practices for disclosure and publication.
[IMAGE OMITTED. SEE PDF]
THE ETHICAL CHECKLIST
Establish your purpose
It is crucial to begin by consulting the guidelines on using generative AI for research purposes provided by your workplace, such as a higher education institution or research center. This step will determine whether you should proceed with utilizing generative AI in your research. Adhering to your workplace guidelines is essential to remain committed to the ethics and policies of your organization, which must always be prioritized.
Item 1: Purpose. Identify your purpose(s) for using generative AI for research purposes.
Examples. Research tasks vary among literature review, data collection, data analysis, data interpretation, and data reporting. Examples include:
Identifying relevant literature.
Identifying a research gap.
Formulating a research question.
Constructing tables for systematic reviews.
Providing descriptions of tables or figures.
Preparing data for a section and converting it into a narrative using a generative AI tool.
Performing qualitative or quantitative data analysis.
Explanation. The first item of the ETHICAL protocol is critical. Researchers need to be aware of what is permissible and impermissible across the different research stages, starting from formulating the research question to finalizing the research product (e.g., original article, review). The main sources for this guidance are the stakeholders involved in the process, including authorities, higher education institutions, publishers, and publication manuals. Publishers have varying policies regarding acceptable research conduct, including editorial work and the peer-reviewing process. For instance, Elsevier warns editors and peer reviewers against using generative AI to determine whether a paper was produced by a human or AI, or to generate peer reviews; it also grants authors the right to use generative AI as a research assistant, provided they disclose this usage transparently (Elsevier, 2024). Overall, publishers generally accept the use of generative AI at any stage of the research process, provided it is reported clearly in the methods section (American Association for the Advancement of Science, 2024; Cambridge University Press, 2024; Frontiers Media, 2024; Journal of the American Medical Association, 2024; Koller et al., 2024; Multidisciplinary Digital Publishing Institute, 2023; Nature Portfolio, 2024; Proceedings of the National Academy of Sciences, 2024; Public Library of Science, 2024; Sage, 2024; Springer Nature, 2024; Taylor & Francis, 2024; The Association for Computing Machinery, 2023; The Institute of Electrical & Electronics Engineers, 2024; Wiley, 2024). However, a few publishers may be stricter, requiring prior permission from the editor (e.g., Oxford University Press, 2024) or rejecting all types of generative AI usage except for language improvement (e.g., Emerald Publishing, 2024).
Thoroughly explore options
Item 2: Explore. Study carefully the different types of existing generative AI platforms and their offered services relevant to your purpose.
Examples. “Explored several tools, including Elicit, Semantic Scholar, and Connected Papers. Selected Elicit due to its focus on summarizing research findings and its ability to identify key themes across multiple papers.” Generative AI platforms can be classified into two types (See Figure 3). The first type includes those that allow open interaction with the tool and can perform any function; these pose a higher risk of facilitating research misconduct. The second type comprises specialized platforms that use AI to enhance the efficiency of specific services. The first type includes platforms like ChatGPT and POE, which host several large language models (LLMs) such as ChatGPT, GPT, Gemini, Claude, o1-preview, Llama, etc. The second type includes platforms more suited for research purposes, such as Elicit, Consensus, SciSpace, MYRA, and Numerous.ai, among others.
[IMAGE OMITTED. SEE PDF]
Explanation. Since the public emergence of LLM-based chatbots in 2022, generative AI platforms have proliferated (Belcak, Lanzendörfer, & Wattenhofer, 2023). The main challenge now is cultivating sufficient awareness about their advantages and disadvantages (Alaqlobi et al., 2024a; Alaqlobi et al., 2024b; Dabis & Csáki, 2024; Kurtz et al., 2024; Thiga, 2023). We believe this awareness step is critical for the responsible use of AI and can be achieved through promoting AI literacy within higher education institutions. The 40 higher education institutions included in our policy analysis all agree on promoting AI literacy for research purposes (e.g., University of Hong Kong, 2024), as do the 10 authorities that were also included (e.g., The United Nations Educational Scientific & Cultural Organization, 2023).
Item 3: Select. Choose the generative AI tool or platform that best matches your identified purpose from among the various available options.
Examples. In Table 1, we provide a list—though not exhaustive and open to additions and modifications—of what we consider the most suitable generative AI tools and platforms based on our experiences.
TABLE 1 A list of generative AI tools and platforms for research purposes.
| No. | GenAI tool/platform | Website | Relevant examples of research purposes |
| --- | --- | --- | --- |
| 1 | SciSpace | | Literature review; data extraction; conceptualization; writing assistant with AI and citations; paraphrasing; systematic reviews |
| 2 | Elicit | | Extract data; conceptualization; systematic reviews |
| 3 | Consensus | | Literature review; quotations and evidence |
| 4 | MyRA | | Conduct qualitative analysis |
| 5 | Scite | | Literature review |
| 6 | ORKG | | Literature review; extract data |
| 7 | Avidnote and Kahubi | | Write with AI; analyse papers; transcribe data |
| 8 | Numerous.ai | | Sentiment analysis; quantification of data |
| 9 | Sider | | Chat with several GenAI tools; language editing; data extraction; translation |
| 10 | Poe | | Chat with several GenAI tools |
| 11 | ChatGPT, DeepSeek | | Chat with several GenAI tools; quantitative data analysis and visualisation (Data Analyst, Data Analysis) |
| 12 | Akkio | | Sentiment analysis; quantification of data |
| 13 | Chat with any PDF | | Data extraction; quotation extraction and synthesis; summarize data |
| 14 | Perplexity | | Pre-research reading; visualisation of data; data analysis |
| 15 | Scopus AI, Web of Science | | Literature review (general and deep research) |
| 16 | ScienceDirect AI | | Literature review (general and deep research); systematic reviews and meta-analyses |
Explanation. This item emphasizes the importance of awareness, specifically AI literacy, for researchers in higher education institutions. Being aware of the available generative AI tools and platforms is a crucial first step toward their responsible use. A major concern at this stage is to avoid subscribing blindly to all these tools and platforms, especially since none are absolutely free, often restricting high-quality output to paid subscriptions. Researchers should not be negatively influenced by the substantial number of emerging tools and platforms or rush to subscribe without understanding what they offer and the quality of their output. Choosing responsibly after wise exploration is critical for establishing the responsible use of generative AI for research purposes. Universities documented in our policy analysis all mentioned that they are launching free subscriptions for staff and students, limited to a carefully selected set of tools and platforms considered more reliable by the university (Hong Kong Polytechnic University, 2023; Stanford University, 2023; University College London, 2024; University of Cambridge, 2024; University of Hong Kong, 2024).
Harness the appropriate tool
Item 4: Use the Tool Responsibly. Practice applying your intended tasks with the tool first, rather than incorporating its output into your research product without due consideration.
Examples. “Provided Elicit with a list of 50 PubMed IDs related to social media and adolescent mental health”. Used the ‘Summarize’ function with the prompt: ‘What are the main findings regarding the impact of social media use on anxiety and depression in adolescents?’ Some tools and platforms are easily used by simply interacting with them and prompting them to perform certain research tasks. These are usually the riskiest when it comes to generating unreliable science. Platforms like Numerous.ai require basic Excel skills, while others like Elicit require some systematic review skills to obtain the best data and formulate your questions effectively. Being adept at using the selected tool or platform ensures you get the best from it, including avoiding research misconduct.
Explanation. Awareness of existing generative AI tools and platforms is insufficient on its own; this awareness should be followed by practicing their use. After identifying your purpose and selecting an appropriate tool, you should test it. Many of the tools and platforms mentioned in Table 1 offer partial free access (with lower quality or limited tasks), trial subscriptions for a few days, or credits to try their services. Some platforms provide demonstrative videos for their services, which can be sufficient to understand what they offer without the need for a direct trial. Trying out the tools helps ensure you are competent in their use and can maximize their benefits while minimizing risks. See Figure 4, which outlines these steps and supports preparation for responsible use.
[IMAGE OMITTED. SEE PDF]
Inspect and verify output
Item 5: Verify and Review the Output. Copying and pasting content from any generative AI tool or platform without scrutiny is the essence of irresponsible use.
Examples. “Manually checked the accuracy of the summaries generated by Elicit against the original research articles. Compared the synthesized findings with other literature reviews on the topic to ensure completeness. Used a plagiarism checker to verify originality.” Generating tables for systematic reviews from platforms like Elicit or SciSpace can be beneficial, but you need to verify the output for accuracy and fill in any blanks when the platform fails to identify some information that is reported in the papers. Extracting citations from a platform like Chat with any PDF is also advantageous and time-saving, as it can even synthesize the information using your required referencing style. However, before the synthesis step, click on every given quotation and verify that it indeed appears on the specified page in the provided output.
Explanation. Hallucinations—instances where AI generates incorrect or fabricated information—are a major problem threatening the quality of science, not because of the emergence of generative AI tools and platforms, but due to their irresponsible and unethical use (Alkaissi & McFarlane, 2023). We mentioned earlier two categories of these tools and argued that the more open the tool or platform is, the higher the risk of hallucinations. For instance, platforms like POE, ChatGPT, and Sider host several LLMs that you can use freely, and the output is entirely based on your prompts (Boyko et al., 2023; Chang et al., 2024; Franceschelli & Musolesi, 2023; Piller, 2023; Zhou et al., 2025). The risk here is relative and depends heavily on the quality of the input (i.e., prompts). An advantage of these platforms is the ability to compare outputs across different tools (ChatGPT, GPT, Gemini, Claude, o1-preview, etc.). A skilled researcher should leverage this advantage and thoroughly review everything before copying and pasting, avoiding the temptation to expedite the completion of a research product.
Each of these tools has strengths and weaknesses, which are reflected in the type of output generated from the prompts used (See Figure 5). Generating tables or quotations can be risky if multiple attempts are not made to ensure high-quality data extraction, especially when compared to platforms like Elicit and SciSpace, which are more tailored for such tasks and exhibit lower rates of hallucinations, particularly when the output is based on uploaded papers rather than a general database. Some platforms also offer tools for data analysis that are programmed to analyse data quantitatively or qualitatively, such as the Data Analyst and Data Analysis tools in the ChatGPT platform. These are reliable tools for data analysis and visualization but should be used only if you have at least a basic background in statistics to verify every output provided.
[IMAGE OMITTED. SEE PDF]
Cite and reference accurately
Item 6: Document and Reference Properly. Citing and referencing all your sources after verifying them is fundamental to avoiding plagiarism and research misconduct.
Examples. “Elicit. (2024). Elicit: The AI Research Assistant. Retrieved from .” In another context, using SciSpace to locate literature allows you to export the references provided in a narrative answer or as bullet points from the top 10 studies.
Explanation. Another critical issue with generative AI tools and platforms, in addition to hallucinations, is the generation of invalid references or long narratives without proper attribution (‘ChatGPT Is Not Capable of Serving as an Author: Ethical Concerns & Challenges of Large Language Models in Education’, 2023; Perkins, 2023; Qureshi et al., 2023; Yan et al., 2024). For this reason, we promote the use of platforms like Elicit, SciSpace, and Consensus over others like GPT, ChatGPT, or Claude. Suppose you are conducting research on the misuse of generative AI and decide to use SciSpace. The platform will assist you by providing answers to your questions, along with results in the form of narratives or bullet points from the top 10 studies. You can rewrite this content and reinsert the citations after exporting the references. While there is a bias toward incorporating only the top ten papers, this is acceptable for a literature review, though it would not suffice for a systematic review. Copying and pasting the narrative as-is will undoubtedly include mistakes in citations and references or even content inaccuracies. Therefore, verifying all citations and references and re-entering them manually using software like Mendeley or EndNote is the ideal step to avoid these issues.
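As a purely illustrative supplement (not a step prescribed by the protocol), the sketch below checks whether exported DOIs resolve against the public Crossref REST API before the manual verification described above; the helper function and the DOI list are hypothetical examples, and a DOI that resolves still requires manual confirmation that the cited work actually supports the claim attributed to it.

```python
# Illustrative sketch: flagging DOIs that do not resolve in Crossref.
# A non-resolving DOI is a strong signal of a hallucinated reference; a
# resolving DOI still requires the manual verification described above.
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref has a record for the DOI, False otherwise."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        response = requests.get(url, timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        return False  # network errors are treated as "unverified"

# Hypothetical list of DOIs exported from an AI platform's reference output.
exported_dois = [
    "10.1038/s41591-020-1037-7",   # a genuine DOI (SPIRIT-AI, Nature Medicine)
    "10.9999/made-up-identifier",  # deliberately invalid, for illustration
]
for doi in exported_dois:
    status = "resolves" if doi_resolves(doi) else "NOT FOUND - check manually"
    print(f"{doi}: {status}")
```

Any reference flagged in this way should be removed or replaced before the verified citations are re-entered manually in Mendeley or EndNote.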
Acknowledge AI usage transparently
Item 7: Acknowledge the Use of Generative AI. Disclose the use of any generative AI tools and platforms.
Examples. “I acknowledge the use of Microsoft Copilot (version GPT-4, Microsoft, ) to summarize my initial notes and to proofread my final draft.” (University College London, 2024)
Explanation. While authorities and universities have advised disclosing the use of generative AI tools and platforms, publishers and publication manuals have provided clear guidance and samples for such acknowledgments (American Psychological Association, 2024; Modern Language Association of America, 2024; The Chicago Manual of Style 18th edition text, 2024; The Queensland University of Technology, 2024; The University of Western Australia, 2024; Victoria University, 2024). They further suggest detailing this usage in the methods section, especially when it includes steps like data collection, data analysis, or data interpretation, as these are intricately related to the reliability and validity of the presented materials—the quality of the science. In such cases, a researcher should integrate this information into the usual sections based on the context. For instance, if you are conducting a qualitative study and decide to use MYRA for inductive or deductive analysis, you will need to mention this in the design, procedure (including data analysis and interpretation), and even address credibility issues.
Look over publisher's guidelines
Item 8: Check Publisher's Guidelines. Save your time and theirs by reviewing their specific requirements.
Examples. “Confirmed compliance with the Nature Portfolio guidelines on the use of generative AI in research publications.” All publishers require adding a section to disclose the use of generative AI tools and platforms. Some also require detailing this in the methods section. Others may require requesting permission before submitting your research product and having it confirmed by the editor.
Explanation. Currently, all publishers caution editors and their teams against using generative AI tools and platforms to detect plagiarism (Cambridge University Press, 2024; Elsevier, 2024; Frontiers Media, 2024; Multidisciplinary Digital Publishing Institute, 2023; Nature Portfolio, 2024; Oxford University Press, 2024; Sage, 2024; Springer Nature, 2024; Taylor & Francis, 2024; The Institute of Electrical & Electronics Engineers, 2024; Wiley, 2024). Additionally, existing AI detection tools still have low accuracy in distinguishing between human and AI-generated research. These policies include both overt and covert implications. Overtly, publishers warn editors not to reject papers simply based on suspicions or reports from current AI-specific plagiarism detectors. They cannot reject a paper unless the similarity exceeds the permissible rate allowed in their journal, which pertains to plagiarism rather than AI-generated content.
Covertly, however, many editors and peer reviewers across various publishers may reject papers by levying accusations against authors, actions which are sometimes approved by editors instead of being prevented. Publishers often have limited authority over editors because they do not wish to lose them, especially since their work is voluntary. Publishers benefit from this arrangement as they are not required to share significant revenues with editors or peer reviewers. The power granted to them is effectively the compensation for their unpaid work.
Item 9: Finalize Your Research Paper. While proofreading prior to the advent of generative AI was the last step before submitting your paper, now this stage should include checking that you have not included any parts copied and pasted without disclosure or necessity.
Examples. Platforms like Claude or GPT often conclude their output with statements indicating what they have done, such as “This is the list you asked me to create,” or “Above are the paragraphs you requested.” Including these in your submission is embarrassing and serves as an obvious indicator for editors and peer reviewers to reject your paper for violating research ethics.
Explanation. Whether you are using generative AI platforms like Elicit for data extraction or others like Claude, it is essential to review your work thoroughly before submitting it to a journal. Having peers review your work, or doing so yourself with patience and care, can help you avoid desk rejections or accusations from editors and peer reviewers. At the very least, you will be prepared to defend your work if such accusations arise, with the editor more likely to support you if you have taken appropriate precautions. Remember, in the end, just as publishers strive to keep their volunteer editors satisfied, editors likewise seek to satisfy their peer reviewers—who also work voluntarily—by granting them the power of rejection and acknowledging it even when it is not necessary.
PILOTING THE ETHICAL PROTOCOL
The validation process for the ETHICAL protocol involved two structured workshops conducted at Qatar University in October and November 2024. These sessions employed a mixed-methods evaluation framework to systematically assess both the practical implementation and ethical decision-making outcomes of the protocol. The first workshop engaged 15 faculty members from diverse disciplines, while the second involved 15 graduate students from engineering programs, ensuring representation across academic roles and research domains. The piloting process was reviewed and approved by the Institutional Review Board of Qatar University (QU-IRB 019/2024-E), ensuring that all research activities conformed to the principles outlined in the Declaration of Helsinki. Ethical safeguards were implemented throughout the workshops, including voluntary participation, confidentiality assurances, and anonymization of all collected data.
Both workshops followed an identical three-phase evaluation structure. In the initial phase, participants engaged in case-based simulations where they applied the protocol to controlled research scenarios, including an AI-assisted literature review that required the detection of hallucinated content and a quantitative analysis that demanded output verification. These simulations were meticulously designed to test specific components of the protocol under realistic conditions and lasted for 90 min each, with researcher observations supplemented by screen recordings and field notes that documented both successful implementations and areas of confusion.
The evaluation methodology integrated multiple measurement approaches to ensure a comprehensive assessment. Pre- and post-workshop questionnaires were administered to all participants to measure protocol comprehension (via a 12-item scale with a Cronbach's alpha of .79), confidence in identifying ethical risks using a 7-point Likert scale, and behavioural intentions regarding the future use of AI in research. During the workshops, faculty evaluations focused on the fidelity of the protocol's implementation, with careful documentation of the completeness of checklist applications and the duration of deliberation over ethical grey areas, which averaged twenty-three minutes per case. In contrast, the evaluation of graduate students centred on practical usability, with particular attention paid to the correct identification of high-risk scenarios—achieving an accuracy rate of 83% following the intervention—and the appropriateness of tool selection decisions.
Quantitative analysis of the data revealed significant improvements in key competency areas. For instance, there was a 41% increase in the recognition of required disclosure elements, with mean pre-test scores of 2.9 rising to 4.1 in the post-test, a difference that reached statistical significance (p < .01). Moreover, 78% of the participants were able to correctly classify AI-use scenarios by risk level after the workshop intervention, compared to just 32% during the preliminary assessments. These statistical outcomes were further reinforced by qualitative insights derived from documented discussion sessions. These sessions were systematically analysed using thematic coding, which revealed that faculty members were particularly concerned with the need for discipline-specific adaptations—such as for qualitative data analysis—while graduate students focused on the necessity for clearer institutional policies governing AI-assisted thesis research.
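For readers who wish to reproduce this style of analysis, the minimal sketch below shows how a pre/post comparison of questionnaire scores can be run as a paired t-test; the fifteen participant scores are hypothetical placeholders chosen only to mirror the reported means, and the workshops' actual statistical procedure is not specified here beyond the summary figures above.

```python
# Illustrative sketch of a pre/post comparison using a paired t-test.
# The scores below are hypothetical placeholders, not the workshop data.
from statistics import mean
from scipy import stats

pre_scores  = [2.5, 3.0, 2.8, 3.1, 2.7, 3.2, 2.9, 2.6, 3.0, 2.8, 3.1, 2.9, 2.7, 3.3, 2.9]
post_scores = [3.9, 4.2, 4.0, 4.3, 3.8, 4.4, 4.1, 3.9, 4.2, 4.0, 4.3, 4.1, 3.9, 4.5, 4.1]

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)  # paired samples
print(f"pre mean = {mean(pre_scores):.1f}, post mean = {mean(post_scores):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```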
The workshop designs were intentionally crafted to preserve ecological validity by incorporating realistic constraints and decision points that reflect the challenges faced by researchers. For example, one simulation required participants to evaluate AI-generated literature summaries under time constraints similar to those of grant proposal deadlines, while another presented conflicting guidelines from institutions and publishers to assess the protocol's flexibility. This realistic approach provided practical insights that extended beyond theoretical compliance, illustrating how researchers navigate ethical dilemmas in real-world contexts.
All participants provided informed written consent for the inclusion of their data in accordance with Qatar University's ethical review procedures, and all datasets were anonymized. The outcomes from the workshops directly informed three key refinements to the protocol: the incorporation of expanded discipline-specific examples, enhanced guidance on disclosing the use of AI in research methods, and the creation of a quick-reference decision flowchart. These empirical validations not only address the challenges of translating ethical principles into actionable research practices but also ensure that the protocol remains adaptable across varied academic contexts.
While these pilot workshops offered initial practical validation and valuable insights, their limited scope (15 participants each) and exploratory design constrain generalizability. Therefore, we acknowledge their preliminary, informal nature. Future research should employ larger-scale, formal empirical validation across multiple disciplines and institutional settings to strengthen and generalize the protocol's effectiveness and applicability.
DISCUSSION
Using an approach grounded in policy analysis evidence, we developed the ETHICAL protocol. The primary aim of this protocol is to bridge the gap between the prohibition of generative AI tools and platforms and their irresponsible and unethical use, which can result in research misconduct. We propose this initiative as an open invitation to all academics and stakeholders to collaboratively develop a more plausible and standardized framework. This framework aims to aid researchers in improving AI literacy while promoting the responsible use of emerging AI technologies. Unlike existing frameworks such as CONSORT-AI (Liu et al., 2020) or TRIPOD-AI (Collins et al., 2024), which focus on clinical or predictive modelling contexts, the ETHICAL protocol is designed as a cross-disciplinary, step-by-step guide for all stages of research using generative AI. It uniquely integrates ethical reasoning, AI literacy, and institutional policy alignment into a unified structure applicable beyond healthcare or computational domains. This holistic orientation fills a critical gap between technical validation checklists and comprehensive ethical governance.
Existing protocols and checklists primarily focus on the technical aspects of incorporating AI in research, often within specific domains like clinical trials or prediction modelling. For instance, SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence) (Cruz Rivera et al., 2020; Cruz Rivera et al., 2020) and CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) (Liu et al., 2020; Liu et al., 2020) offer reporting guidelines for AI interventions in clinical trials, emphasizing technical specifications and human-AI interaction. Similarly, TRIPOD-AI (Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis–Artificial Intelligence) (Collins et al., 2024) addresses the reporting of AI-based prediction models, while PROBAST-AI (Prediction model Risk of Bias Assessment Tool–Artificial Intelligence) (Collins et al., 2021) focuses on assessing bias in such models. Even PRISMA-AI (Preferred Reporting Items for Systematic Reviews and Meta-Analyses–Artificial Intelligence) (Cacciamani et al., 2023) aims to improve the reporting of systematic reviews involving AI, primarily concentrating on reproducibility and methodological transparency. These checklists, while valuable for ensuring technical rigor, lack a comprehensive framework for the responsible use of generative AI across diverse research contexts. The ETHICAL protocol, in contrast, addresses this gap by providing a holistic approach that encompasses not only technical considerations but also ethical implications, emphasizing responsible practices throughout the research lifecycle, from purpose identification to transparent acknowledgment of AI usage. This broader scope differentiates ETHICAL from existing checklists, positioning it as a more comprehensive guide for researchers navigating the complexities of generative AI in higher education.
Recent discussions have also highlighted the need for specific guidelines regarding the disclosure and documentation of AI usage in research. Hosseini et al. (2025) propose detailed disclosures encompassing who used the AI, when, with what prompts, and on which sections of the paper, along with submitting AI-generated text as supplementary material. Cotton et al. suggest preventative measures for educators, including educating students about AI and plagiarism, requiring drafts, and using plagiarism detection tools (Cotton, Cotton, & Shipway, 2024). Furthermore, Farrokhnia et al. utilize a SWOT analysis to explore the opportunities and threats of ChatGPT in education, highlighting both its potential benefits and risks (Farrokhnia et al., 2024). These contributions underscore the growing awareness of the multifaceted implications of AI in research and the ongoing efforts to establish responsible practices. Finally, the limitations of current AI tools in conducting reliable literature reviews have also been documented (Haman & Školník, 2023), emphasizing the need for careful evaluation and human oversight in research processes. The need for AI-specific quality assessment tools, like a proposed extension to QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies 2) (Jayakumar et al., 2022), further emphasizes the evolving landscape of AI in research and the continuous development of frameworks for its responsible application.
We acknowledge that our proposed protocol was not entirely based on a fully systematic review involving specialized committees and formal meetings for its formalization. However, as detailed in the introduction, our efforts to synthesize existing data on this issue—including the analysis of 74 policy documents—have played a vital role in establishing our foundational understanding and in developing this final protocol. It is our hope that researchers will be encouraged to use generative AI tools and platforms with full awareness and the requisite soft skills to enhance their research capabilities and scientific contributions, rather than feeling ashamed or afraid of utilizing these technologies or resorting to their covert use.
We also advocate for publishers to be more transparent about their policies regarding the publication of research products that employ generative AI tools and platforms. Specifically, we suggest that editors should exercise caution in their decision-making and avoid rejecting papers solely because the researchers have reported the use of generative AI tools and platforms. Peer reviewers should not be unduly influenced by editors to raise issues, such as claiming that a paper was generated by a generative AI without substantive evidence. They should be prompted to demonstrate a sense of responsibility comparable to that expected of authors. Authors, in turn, should approach their work responsibly, ensuring that when they use generative AI, it is done ethically and transparently to assist and support their research, rather than allowing these tools to function as ghost authors. The protocol's interdisciplinary design reflects the universal nature of core research ethics principles. While individual disciplines may develop supplementary guidelines for specialized applications (e.g., clinical trial reporting with CONSORT-AI), the ETHICAL framework establishes baseline standards for all generative AI use in academic research. Although our inductive, document-based methodology provided robust foundational insights, we recognize that the absence of formal expert committees or broader stakeholder consensus during the initial protocol development may affect its perceived comprehensiveness. Future iterations could benefit from integrating systematic expert review panels and broader stakeholder involvement, enhancing the protocol's robustness, relevance, and acceptability.
The development of the ETHICAL protocol carries significant policy implications for higher education institutions, publishers, and authorities. Specifically, institutions should adopt transparent policies regarding generative AI usage in research, including mandating disclosures and providing training to foster AI literacy among researchers and students (Al-Emran et al., 2025; Keith et al., 2025). Publishers should establish clear guidelines regarding AI-generated content to maintain academic integrity and support transparent reporting. Finally, governmental and educational authorities are encouraged to implement policies promoting AI literacy and ethical compliance within academic research contexts (Laine, Minkkinen, & Mäntymäki, 2025).
Despite comprehensive policy analysis and empirical validation, this study has some limitations. First, the ETHICAL protocol was developed based on an analysis of documents and experiences primarily from top-ranked universities and publishers; thus, it might not fully capture unique ethical concerns in smaller or specialized institutions. Second, although workshops validated the protocol, broader empirical testing across diverse contexts and disciplines is necessary to further assess its generalizability and practical utility. Future studies should aim to empirically evaluate protocol effectiveness longitudinally across different academic environments.
CONCLUSION
The ETHICAL protocol provides a comprehensive, practical guide for responsibly integrating generative AI in higher education research, addressing critical gaps identified in existing frameworks. Its core contributions include clear guidelines for AI literacy, selection and verification of generative AI outputs, transparent acknowledgment, and adherence to publisher guidelines. By piloting the protocol in diverse academic contexts, this study demonstrates its practical utility and adaptability. We invite further collaboration from academic communities, policymakers, and publishers to refine and universally adopt this protocol, fostering ethically robust and scientifically credible research practices.
ACKNOWLEDGMENTS
The authors would like to acknowledge the financial support from Qatar National Research Fund (QNRF), administered by Qatar University, Doha, Qatar. This research was funded by the Qatar National Research Fund (QNRF), Academic Research Grant (ARG), granted to the College of Education, Qatar University, under the research project ARG01-0516-230177.
CONFLICT OF INTEREST STATEMENT
The authors declare that they have no competing interests.
DECLARATIONS
All contributing authors possess extensive experience—each with over 15 years—in a wide range of higher education institutions worldwide. Despite originating from diverse academic disciplines, their collective expertise converges on research conduct ethics, advanced research skills in higher education, and the ethical and responsible application of artificial intelligence within higher education contexts.
ETHICAL APPROVAL AND CONSENT TO PARTICIPATE
Institutional Review Board Statement: All participants consented for the inclusion of their data. The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Qatar University (protocol code QU-IRB 019/2024-E on January 23, 2024).
CONSENT FOR PUBLICATION
Not applicable.
GENERATIVE AI DISCLOSURE STATEMENT
The authors acknowledge the use of the SCISPACE platform () and the Consensus platform () for the identification and retrieval of relevant scholarly literature, utilizing the prompt: ‘Policies and protocols for the responsible use of generative AI for research purposes.’ Additionally, the authors employed the POE platform (), which hosts various LLMs such as ChatGPT, Claude, Gemini, GPT, and Mistral-Large, among others, for language improvement, including refinement of the abstract and conclusion sections, using the prompt: “Check readability and language accuracy for the following paragraphs.” These platforms and tools were utilized during the period from October 1, 2024, to October 10, 2024, to facilitate the research process.
APPENDIX
Appendix 1 ETHICAL Checklist

REFERENCES
Ain Shams University. 2024. Artificial Intelligence Instructions. Faculty of Engineering: Ain Shams University. https://eng.asu.edu.eg/education/undergraduates/bylaws/education/1663709
Alaqlobi, O., A. Alduais, F. Qasem, and M. Alasmari. 2024a. “A SWOT analysis of generative AI in applied linguistics: Leveraging strengths, addressing weaknesses, seizing opportunities, and mitigating threats.” F1000Research 13: 1040. https://doi.org/10.12688/f1000research.155378.1
Alaqlobi, O., A. Alduais, F. Qasem, and M. Alasmari. 2024b. “Artificial intelligence in applied (linguistics): A content analysis and future prospects.” Cogent Arts and Humanities 11(1). https://doi.org/10.1080/23311983.2024.2382422
Alduais, A., S. Qadhi, Y. Chaaban, and M. Khraisheh. 2025. “Utilizing Generative AI Responsibly and Ethically for Research Purposes in Higher Education: A Policy Analysis.” Serials Review 51(34): 120–170, https://doi.org/10.1080/00987913.2025.2581429
Al‐Emran, M., M. A. Al‐Sharafi, B. Foroughi, N. Al‐Qaysi, D. Mansoor, A. Beheshti, and N. Ali. 2025. “Evaluating the Influence of Generative AI on Students' Academic Performance Through the Lenses of TPB and TTF Using a Hybrid SEM‐ANN Approach.” Education And Information Technologies 30: 17557–17587, https://doi.org/10.1007/s10639‐025‐13485‐w
Alkaissi, H., and S. I. McFarlane. 2023. “Artificial Hallucinations in ChatGPT: Implications in Scientific Writing.” Cureus Journal of Medical Science 15(2): e35179. https://doi.org/10.7759/cureus.35179
American Association for the Advancement of Science. 2024. Editorial Policies: Image and Text Integrity. Science Journals. https://www.science.org/content/page/science‐journals‐editorial‐policies#image‐text
American Psychological Association. 2024. How to cite ChatGPT. American Psychological Association. https://apastyle.apa.org/blog/how‐to‐cite‐chatgpt
Baek, S.‐J., and Y.‐H. Shin. 2021. “Artificial Intelligence(AI) Fundamental Education Design for Non‐major Humanities.” Journal of Digital Convergence 19(5): 285–293. https://doi.org/10.14400/JDC.2021.19.5.285
Belcak, P., L. A. Lanzendörfer, and R. Wattenhofer. 2023. Examining the Emergence of Deductive Reasoning in Generative Language Models. arXiv preprint arXiv:2306.01009, https://doi.org/10.48550/arXiv.2306.01009
Bennett, L. 2023. “Optimising the Interface between Artificial Intelligence and Human Intelligence in Higher Education.” International Journal of Teaching, Learning and Education 2(3): 12–25. https://doi.org/10.22161/ijtle.2.3.3
Bethencourt‐Aguilar, A., D. Castellanos‐Nieves, J. J. Sosa‐Alonso, and M. Area‐Moreira. 2023. “Use of Generative Adversarial Networks (GANs) in Educational Technology Research.” Journal of New Approaches in Educational Research 12(1): 153–170. https://doi.org/10.7821/naer.2023.1.1231
Boyko, J., J. Cohen, N. Fox, M. H. Veiga, J. I.‐H. Li, J. Liu, B. Modenesi, et al. 2023. An Interdisciplinary Outlook on Large Language Models for Scientific Research. arXiv preprint arXiv:2311.04929, https://doi.org/10.48550/arXiv.2311.04929
Cacciamani, G. E., T. N. Chu, D. I. Sanford, A. Abreu, V. Duddalwar, A. Oberai, C.‐C. J. Kuo, et al. 2023. “PRISMA AI reporting guidelines for systematic reviews and meta‐analyses on AI in healthcare.” Nature Medicine 29(1): 14–15. https://doi.org/10.1038/s41591‐022‐02139‐w
Cairo University. 2023. FCAI Policy and Guidelines for use of Generative AI in Postgraduate Studies and Research. Cairo University. https://fcai.cu.edu.eg/PG/wp‐content/uploads/2023/09/FCAI‐GAI‐Use‐Guidelines‐v1.1‐fnl.pdf
Cambridge University Press. 2024. Authorship and contributorship for journals: AI Contributions to Research Content. Cambridge University Press. https://www.cambridge.org/core/services/publishing‐ethics/authorship‐and‐contributorship‐journals#ai‐contributions‐to‐research‐content
Chang, Y., X. Wang, J. Wang, Y. Wu, L. Yang, K. Zhu, H. Chen, et al. 2024. “A survey on evaluation of large language models.” ACM Transactions on Intelligent Systems and Technology 15: 1–45. https://doi.org/10.1145/3641289
Charles Sturt University. 2024. Publication & AI. Charles Sturt University. https://opentext.csu.edu.au/usingai/chapter/publication‐and‐ai/
Charles University Prague. 2024. Recommendations Regarding The Use of Generative Artificial Intelligence For Research And Researchers. Charles University Prague. https://ai.cuni.cz/AIEN‐14‐version1‐ai_3_en.pdf
"ChatGPT is not capable of serving as an author: Ethical concerns and challenges of large language models in education." 2023. International Research Journal of Modernization in Engineering Technology and Science. https://doi.org/10.56726/irjmets45212
Chinese National Information Security Standardization Technical Committee. 2023. Basic Safety Requirements for Generative Artificial Intelligence Services (Draft for Feedback). Chinese National Information Security Standardization Technical Committee. https://cset.georgetown.edu/publication/china‐safety‐requirements‐for‐generative‐ai/
Chulalongkorn University. 2023. Chulalongkorn University Principles and Guidelines for using AI Tools. Chulalongkorn University. https://www.chula.ac.th/en/news/125190/
Collins, G. S., P. Dhiman, C. L. Andaur Navarro, J. Ma, L. Hooft, J. B. Reitsma, P. Logullo, et al. 2021. “Protocol for development of a reporting guideline (TRIPOD‐AI) and risk of bias tool (PROBAST‐AI) for diagnostic and prognostic prediction model studies based on artificial intelligence.” BMJ Open 11(7): e048008. https://doi.org/10.1136/bmjopen‐2020‐048008
Collins, G. S., K. G. M. Moons, P. Dhiman, R. D. Riley, A. L. Beam, B. Van Calster, M. Ghassemi, et al. 2024. "TRIPOD+AI statement: Updated guidance for reporting clinical prediction models that use regression or machine learning methods." BMJ 385: e078378. https://doi.org/10.1136/bmj-2023-078378
Copenhagen Business School. 2023. A Guide to Working with Integrity as a CBS Student: Generative Artificial Intelligence. Copenhagen Business School. https://libguides.cbs.dk/c.php?g=684990&p=5136839
Cotton, D. R. E., P. A. Cotton, and J. R. Shipway. 2024. “Chatting and cheating: Ensuring academic integrity in the era of ChatGPT.” Innovations in Education and Teaching International, 61(2): 228–239, https://doi.org/10.1080/14703297.2023.2190148.
Cruz Rivera, S., X. Liu, A.‐W. Chan, A. K. Denniston, M. J. Calvert, H. Ashrafian, A. L. Beam, et al. 2020. “Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT‐AI extension.” Lancet Digital Health 2(10): e549–e560. https://doi.org/10.1016/S2589‐7500(20)30219‐3
Cruz Rivera, S., X. Liu, A.‐W. Chan, A. K. Denniston, M. J. Calvert, A. Darzi, C. Holmes, et al. 2020. “Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT‐AI extension.” Nature Medicine 26(9): 1351–1363. https://doi.org/10.1038/s41591‐020‐1037‐7
Dabis, A., and C. Csáki. 2024. “AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI.” Humanities & Social Sciences Communications 11(1): 1006. https://doi.org/10.1057/s41599‐024‐03526‐z
Eldakar, M. A. M., A. M. K. Shehata, and A. S. A. Ammar. 2025. “What motivates academics in Egypt toward generative AI tools? An integrated model of TAM, SCT, UTAUT2, perceived ethics, and academic integrity.” Information Development 41(3): 747–765. https://doi.org/10.1177/02666669251314859
Elsevier. 2024. Publishing ethics. Elsevier. https://www.elsevier.com/about/policies‐and‐standards/publishing‐ethics
Emerald Publishing. 2024. Emerald Publishing's stance on AI tools and authorship. Emerald Publishing. https://www.emeraldgrouppublishing.com/news‐and‐press‐releases/emerald‐publishings‐stance‐ai‐tools‐and‐authorship
Erasmus University Rotterdam. 2024. Generative AI Usage Guidelines. Erasmus University Rotterdam. https://www.eur.nl/en/media/2024‐07‐policyontheuseofgenaiandthephd‐trajectory
European Commission. 2024. Living guidelines on the responsible use of generative AI in research. European Commission. https://research‐and‐innovation.ec.europa.eu/research‐area/industrial‐research‐and‐innovation/artificial‐intelligence‐ai‐science_en
Faber, H. C., A. Gasparini, and M. G. Grote. 2022. “Artificial Intelligence‐based tools in the context of Open Science: PhD on Track as a resource.” Septentrio Conference Series, 1. https://doi.org/10.7557/5.6636
Farrokhnia, M., S. K. Banihashem, O. Noroozi, and A. Wals. 2024. “A SWOT analysis of ChatGPT: Implications for educational practice and research.” Innovations in Education and Teaching International 61(3): 460–474. https://doi.org/10.1080/14703297.2023.2195846
Floridi, L., and J. Cowls. 2022. "A Unified Framework of Five Principles for AI in Society."
Franceschelli, G., and M. Musolesi. 2023. On the Creativity of Large Language Models. arXiv preprint.
Frontiers Media. 2024. Author guidelines: Artificial intelligence. Frontiers Media. https://www.frontiersin.org/journals/artificial‐intelligence/for‐authors/author‐guidelines
Gaud, D. 2023. “Ethical Considerations for the Use of AI Language Model.” International Journal for Research in Applied Science and Engineering Technology 11(7): 6–14. https://doi.org/10.22214/ijraset.2023.54513
Government of Canada. 2024. Guide on the use of generative artificial intelligence. Government of Canada. https://www.canada.ca/en/government/system/digital‐government/digital‐government‐innovations/responsible‐use‐ai/guide‐use‐generative‐ai.html
Smits, J., and T. Borghuis. 2022. "Generative AI and Intellectual Property Rights." In Law and Artificial Intelligence, edited by B. Custers and E. Fosch-Villaronga, Information Technology and Law Series, vol. 35. The Hague: T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-523-2_17
Haibe‐Kains, B., G. A. Adam, A. Hosny, F. Khodakarami, T. Shraddha, R. Kusko, S.‐A. Sansone, et al. 2020. “Transparency and reproducibility in artificial intelligence.” Nature 586(7829): E14–E16. https://doi.org/10.1038/s41586‐020‐2766‐y
Haman, M., and M. Školník. 2024. “Using ChatGPT to conduct a literature review.” Accountability in Research 31(8): 1244–1246, https://doi.org/10.1080/08989621.2023.2185514
Haresamudram, K., S. Larsson, and F. Heintz. 2023. “Three levels of AI transparency.” Computer 56(2): 93–100. https://doi.org/10.1109/MC.2022.3213181
Harvard University. 2024. Initial guidelines for the use of Generative AI tools at Harvard. Harvard University. https://huit.harvard.edu/ai/guidelines
Hong Kong Polytechnic University. 2023. Guidelines for Students on the Use of Generative Artificial Intelligence (GenAI). Hong Kong Polytechnic University. https://www.polyu.edu.hk/en/ar/students‐in‐taught‐programmes/use‐of‐genai/
Hosseini, M., B. Gordijn, G. E. Kaebnick, and K. Holmes. 2025. "Disclosing Generative AI Use for Writing Assistance Should Be Voluntary." Research Ethics, advance online publication. https://doi.org/10.1177/17470161251345499
Imperial College London. 2024. Generative AI guidance. Imperial College London. https://www.imperial.ac.uk/admin‐services/library/learning‐support/generative‐ai‐guidance/
Jayakumar, S., V. Sounderajah, P. Normahani, L. Harling, S. R. Markar, H. Ashrafian, and A. Darzi. 2022. “Quality assessment standards in artificial intelligence diagnostic accuracy systematic reviews: A meta‐research study.” Npj Digital Medicine 5(1): 11. https://doi.org/10.1038/s41746‐021‐00544‐y
Jeon, J., L. Kim, and J. Park. 2025. “The ethics of generative AI in social science research: A qualitative approach for institutionally grounded AI research ethics.” Technology in Society 81: 102836. https://doi.org/10.1016/j.techsoc.2025.102836
Journal of the American Medical Association. 2024. Editorial Policies for Authors. American Medical Association. https://jamanetwork.com/journals/jama/pages/instructions‐for‐authors#SecAuthorshipCriteriaandContributions
Keith, M., E. Keiller, C. Windows‐Yule, I. Kings, and P. Robbins. 2025. “Harnessing generative AI in chemical engineering education: Implementation and evaluation of the large language model ChatGPT v3.5.” Education for Chemical Engineers 51: 20–33. https://doi.org/10.1016/j.ece.2025.01.002
Kim, Y., J.‐H. Kim, and H. Ahn. 2023. “A study on AI literacy of university students in language and language education.” Journal of the Korea Contents Association 23(1): 165–174. https://doi.org/10.5392/jkca.2023.23.01.165
Koller, D., A. Beam, A. Manrai, E. Ashley, X. Liu, J. Gichoya, C. Holmes, et al. 2024. “Why we support and encourage the use of large language models in NEJM AI submissions.” NEJM AI 1(1): AIe2300128, https://doi.org/10.1056/AIe2300128
Kovachоv, S., and Y. Suchikova. 2023. “Talk to me: a dialogue with artificial intelligence about its use in education and research.” Scientific Papers of Berdiansk State Pedagogical University Series Pedagogical Sciences 1(1): 43–55. https://doi.org/10.31494/2412‐9208‐2023‐1‐1‐43‐55
Kurtz, G., M. Amzalag, N. A. Shaked, Y. Zaguri, D. Kohen‐Vacs, E. Gal, G. Zailer, and E. Barak‐Medina. 2024. “Strategies for Integrating Generative AI into Higher Education: Navigating Challenges and Leveraging Opportunities.” Education Sciences 14(5): 503, https://doi.org/10.3390/educsci14050503
Kwasi, D., and A. Halil. 2019. “Artificial intelligence modules for higher educational institutions.” International Journal of Computer Applications 178(34): 17–21. https://doi.org/10.5120/ijca2019919205
Laine, J., M. Minkkinen, and M. Mäntymäki. 2025. “Understanding the ethics of generative AI: established and new ethical principles.” Communications of the Association for Information Systems 56: 1–25. https://doi.org/10.17705/1CAIS.05601
Liu, X., S. Cruz Rivera, D. Moher, M. J. Calvert, A. K. Denniston, H. Ashrafian, A. L. Beam, et al. 2020. “Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT‐AI extension.” Lancet Digital Health 2(10): e537–e548. https://doi.org/10.1016/S2589‐7500(20)30218‐1
Liu, X., S. Cruz Rivera, D. Moher, M. J. Calvert, A. K. Denniston, A.‐W. Chan, A. Darzi, et al. 2020. “Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT‐AI extension.” Nature Medicine 26(9): 1364–1374. https://doi.org/10.1038/s41591‐020‐1034‐x
Lodge, J. M. 2024. The evolving risk to academic integrity posed by generative artificial intelligence: Options for immediate action. Australian Government Tertiary Education Quality and Standards Agency. https://www.teqsa.gov.au/about‐us/news‐and‐events/latest‐news/addressing‐risk‐genai‐award‐integrity
Massachusetts Institute of Technology. 2024. Initial guidance for use of Generative AI tools. Massachusetts Institute of Technology. https://ist.mit.edu/ai‐guidance
Michigan State University. 2024. Generative AI in Research. Michigan State University. https://research.msu.edu/generative‐ai
Slimi, Z. 2023. "The Impact of Artificial Intelligence on Higher Education: An Empirical Study." European Journal of Educational Sciences 10(1): 17–33. https://doi.org/10.19044/ejes.v10no1a24
Modern Language Association of America. 2024. How do I cite generative AI in MLA style? Modern Language Association of America. https://style.mla.org/citing‐generative‐ai/
Multidisciplinary Digital Publishing Institute. 2023. MDPI's Updated Guidelines on Artificial Intelligence and Authorship. MDPI. https://www.mdpi.com/about/announcements/5687
National Taiwan University. 2024. Guidance for Use of Generative AI Tools for Teaching and Learning. National Taiwan University. https://www.dlc.ntu.edu.tw/en/ai‐tools‐en/
National Tsing Hua University. 2024. Integrating Ethical Guidelines for Generative AI into NTHU Course Syllabi. National Tsing Hua University. https://curricul.site.nthu.edu.tw/p/406‐1208‐248378,r10285.php?Lang=zh‐tw.
Columbia University. 2024. Generative AI Policy. Columbia University. https://provost.columbia.edu/content/office-senior-vice-provost/ai-policy
Nature Portfolio. 2024. Artificial Intelligence (AI). Springer Nature. https://www.nature.com/nature‐portfolio/editorial‐policies/ai#ai‐authorship
Nguyen, K. V. 2025. "The use of generative AI tools in higher education: ethical and pedagogical principles." Journal of Academic Ethics 23(3): 1435–1455. https://doi.org/10.1007/s10805-025-09607-1
Okan University. 2023. Guide to the Use of Generative Artificial Intelligence (Gen AI) Tools. Okan University. https://www.okan.edu.tr/uploads/pages/kilavuz/genaieng‐27022023‐rev.pdf
Organisation for Economic Co‐operation and Development. 2024. AI, data governance and privacy: Synergies and areas of international co‐operation. Organisation for Economic Co‐operation and Development. https://www.oecd‐ilibrary.org/science‐and‐technology/ai‐data‐governance‐and‐privacy_2476b1a4‐en
Oxford University Press. 2024. Author use of generative Artificial Intelligence (AI). Oxford University Press. https://academic.oup.com/pages/authoring/books/author‐use‐of‐artificial‐intelligence
Perkins, M. 2023. “Academic integrity considerations of AI large language models in the post‐pandemic era: ChatGPT and beyond.” Journal of University Teaching and Learning Practice 20(2): 1–24, https://doi.org/10.53761/1.20.02.07
Pickering, B. 2021. “Trust, but verify: informed consent, ai technologies, and public health emergencies.” Future Internet 13(5): 132. https://doi.org/10.3390/fi13050132
Piller, E. 2023. “The ethics of (non)disclosure: large language models in professional, nonacademic writing contexts.” Rupkatha Journal on Interdisciplinary Studies in Humanities 15(4): 1–27, https://doi.org/10.21659/rupkatha.v15n4.02
Proceedings of the National Academy of Sciences. 2024. Authorship and contributions. National Academy of Sciences. https://www.pnas.org/author‐center/editorial‐and‐journal‐policies#editorial‐policies
PROEDUCA Group. 2024. Guide for the Responsible Use of Generative AI in Research Tasks. PROEDUCA Group. https://www.unir.net/wp‐content/uploads/2024/08/Proeduca‐Guide‐for‐the‐Responsible‐Use‐of‐Generative‐AI‐in‐Research‐Tasks‐EN.pdf
Public Library of Science. 2024. Ethical Publishing Practice. PLoS. https://journals.plos.org/plosone/s/ethical‐publishing‐practice#loc‐artificial‐intelligence‐tools‐and‐technologies
Purdue University. 2024. Artificial Intelligence (AI). Purdue University. https://guides.lib.purdue.edu/c.php?g=1371380&p=10135065
Qadhi, S., A. M. S. Alduais, Y. Chaaban, and M. Khraisheh. 2024a. Experiences of Academics, Graduates, and Undergraduates in Using Generative AI in Research (Un)ethically and (Ir)responsibly: A Title Registration of Systematic Review of Qualitative Synthesis. Retrieved from osf.io/n76m5.
Qadhi, S. M., A. Alduais, Y. Chaaban, and M. Khraisheh. 2024b. "Generative AI, Research Ethics, and Higher Education Research: Insights from a Scientometric Analysis." Information 15(6): 325. https://doi.org/10.3390/info15060325
Qureshi, R., D. Shaughnessy, K. A. R. Gill, K. A. Robinson, T. Li, and E. Agai. 2023. “Are ChatGPT and large language models ‘the answer’ to bringing us closer to systematic review automation?.” Systematic Reviews 12(1): 72. https://doi.org/10.1186/s13643‐023‐02243‐z
Resnik, D. B. 1998. The Ethics of Science: An Introduction. 1st ed. Routledge.
Rubin, V. 2020. “AI Opaqueness: What Makes AI Systems More Transparent?.” Proceedings of the Annual Conference of CAIS /Actes Du Congrès Annuel de l'ACSI. https://doi.org/10.29173/cais1139
Sabbaghan, S., and S. E. Eaton. 2025. “Navigating the ethical frontier: graduate students' experiences with generative AI‐mediated scholarship.” International Journal of Artificial Intelligence in Education 35: 1860–1886, https://doi.org/10.1007/s40593‐024‐00454‐6
Sage. 2024. Using AI in peer review and publishing. Sage Publications. https://us.sagepub.com/en‐us/nam/using‐ai‐in‐peer‐review‐and‐publishing
Saudi Data and AI Authority. 2024. Generative AI Principles for governmental sectors. Saudi Data and AI Authority. https://sdaia.gov.sa/ar/SDAIA/about/Files/GenAIGuidelinesForGovernmentARCompressed.pdf
Schüller, K. 2022. “Data and AI literacy for everyone.” Statistical Journal of the IAOS 1–14. https://doi.org/10.3233/sji‐220941
Springer Nature. 2024. Artificial Intelligence (AI). Springer Nature. https://www.nature.com/nature‐portfolio/editorial‐policies/ai
Stanford University. 2023. Generative AI Policy Guidance. Stanford University. https://communitystandards.stanford.edu/generative‐ai‐policy‐guidance
Swiss Federal Institute of Technology Zurich. 2024. Plagiarism and generative Artificial Intelligence (genAI). Swiss Federal Institute of Technology Zurich. https://library.ethz.ch/en/researching‐and‐publishing/scientific‐writing‐at‐eth‐zurich/plagiat‐und‐kuenstliche‐intelligenz‐ki.html
Taylor & Francis. 2024. Taylor & Francis Editorial Policies on Authorship. Taylor & Francis. https://authorservices.taylorandfrancis.com/editorial‐policies/defining‐authorship‐research‐paper/
The Association for Computing Machinery. 2023. ACM Policy on Authorship. The Association for Computing Machinery. https://www.acm.org/publications/policies/new‐acm‐policy‐on‐authorship
The Chicago Manual of Style, 18th ed. 2024. Citation, Documentation of Sources. The University of Chicago. https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html
The Institute of Electrical and Electronics Engineers. 2024. Submission Policies. The Institute of Electrical and Electronics Engineers. https://conferences.ieeeauthorcenter.ieee.org/author‐ethics/guidelines‐and‐policies/submission‐policies/
The Open University of Cyprus. 2024. Guidelines For Research Staff Based On Internal Policy On Generative Artificial Intelligence. The Open University of Cyprus. https://www.ouc.ac.cy/index.php/en/university/legislation‐regulations/72‐internal‐policy‐on‐generative‐artificial‐intelligence‐research‐staff/viewdocument/72
The Queensland University of Technology. 2024. Harvard Examples—Internet sources—Generative AI (e.g. ChatGPT). The Queensland University of Technology. https://www.citewrite.qut.edu.au/cite/examples/harvard/harvard_internet_ai.html
The Russell Group of Universities. 2023. New principles on use of AI in education. The Russell Group of Universities. https://russellgroup.ac.uk/news/new‐principles‐on‐use‐of‐ai‐in‐education/
The United Nations Educational Scientific and Cultural Organization. 2023. Guidance for generative AI in education and research. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000386693
The University of Melbourne. 2024. GenAI and student academic integrity: Guidance for teaching staff on students’ use of genAI. Melbourne School of Design, The University of Melbourne. https://msd.unimelb.edu.au/belt/quality/genai#genai‐and‐student‐academic‐integrity
The University of Tokyo. 2023. Policy on the use of AI tools in classes. The University of Tokyo. https://utelecon.adm.u‐tokyo.ac.jp/en/docs/ai‐tools‐in‐classes
The University of Western Australia. 2024. Referencing style—Vancouver (based on Citing Medicine): Generative Artificial Intelligence (AI). The University of Western Australia. https://guides.library.uwa.edu.au/vancouver/Gen_AI
Thiga, M. M. 2023. “Large language models in academic publishing.” In Facilitating Global Collaboration and Knowledge Sharing in Higher Education With Generative AI. https://doi.org/10.4018/9798369304877.ch009
Tubino, L., and C. Adachi. 2022. “Developing feedback literacy capabilities through an AI automated feedback tool.” ASCILITE Publications e22039. https://doi.org/10.14742/apubs.2022.39
Universiti Malaysia Pahang. 2023. Guidelines For Using Generative Artificial Intelligence At Universiti Malaysia Pahang Al‐Sultan Abdullah. Universiti Malaysia Pahang. https://caic.umpsa.edu.my/media/attachments/2024/05/21/ai‐bi‐garis‐panduan‐kecerdasan‐buatan‐generatif_translation‐bi.pdf
University College Cork. 2024. Academic Integrity. University College Cork. https://www.ucc.ie/en/ethical‐use‐of‐generative‐ai‐toolkit/academic‐integrity/
University College Dublin. 2024. Generative AI: FAQs. University College Dublin. https://www.ucd.ie/artshumanities/study/aifutures/generativeaifaqs/
University College London. 2024. Generative AI as a source of information. University College London.
University of Cambridge. 2024. How we use generative AI tools. University of Cambridge. https://www.communications.cam.ac.uk/generative‐ai‐tool‐guidelines
University of Cape Town. 2023. Senate Ethics in Research Committee (EiRC) Guidelines and recommendations for the use of generative artificial intelligence (AI) tools in research. University of Cape Town. https://uct.ac.za/sites/default/files/media/documents/uct_ac_za/87/EiRC_GenerativeAI_guideline_Oct2023_final.pdf
University of Geneva. 2024. Statement on artificial intelligence. University of Geneva. https://www.unige.ch/en/university/politique‐generale/statement‐ai/
University of Gent. 2024. Generative AI in Ghent University Education: Impact and Approach. University of Gent. https://onderwijstips.prd.rad.ugent.be/en/tips/chatgpt‐een‐generatief‐ai‐systeem‐met‐impact‐op‐he/
University of Glasgow. 2024. Generative AI Guidance for Researchers. University of Glasgow. https://www.gla.ac.uk/research/strategy/ourpolicies/ai‐for‐researchers/#researchandacademicintegrity,generativeaitools%3Arisksandlimitations,usingaitoolstocheckyourwriting,howtocite%2Facknowledgeusageofgenerativeaitoolsinyourwork
University of Groningen. 2024. AI and copyright. University of Groningen. https://edusupport.rug.nl/2444623900
University of Helsinki. 2024. Generative AI at the University. University of Helsinki. https://helpdesk.it.helsinki.fi/en/help/20064
University of Hong Kong. 2024. AI Literacy: Home. University of Hong Kong Libraries. https://libguides.lib.hku.hk/AI‐literacy/Home
University of Oslo. 2024. Guidelines for AI‐generated content on UiO's digital channels. University of Oslo. https://www.uio.no/english/for‐employees/support/profile/ai/
University of Oxford. 2024. Guidelines on the use of generative AI. University of Oxford. https://communications.admin.ox.ac.uk/communications‐resources/ai‐guidance#collapse4654416
University of Queensland. 2024. GenAI in research. University of Queensland. https://libguides.library.qut.edu.au/c.php?g=958007&p=7065056
University of Tartu. 2024. University of Tartu guidelines for using AI chatbots for teaching and studies. University of Tartu. https://ut.ee/en/node/151731
University of Toronto. 2024. Generative AI tools and Copyright Considerations. University of Toronto. https://onesearch.library.utoronto.ca/copyright/generative‐ai‐tools‐and‐copyright‐considerations
US National Science Foundation. 2023. Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process. US National Science Foundation. https://new.nsf.gov/news/notice‐to‐the‐research‐community‐on‐ai?utm_medium=email&utm_source=govdelivery
van Nuenen, T., X. Ferrer, J. M. Such, and M. Cote. 2020. “Transparency for Whom? Assessing Discriminatory Artificial Intelligence.” Computer 53(11): 36–44. https://doi.org/10.1109/MC.2020.3002181
Victoria University. 2024. IEEE Referencing: Generative AI. Victoria University. https://libraryguides.vu.edu.au/ieeereferencing/generativeAI#s‐lg‐box‐wrapper‐26255475
Vilnius University. 2024. The Guidelines on Artificial Intelligence Usage at Vilnius University. Vilnius University. https://www.vu.lt/site_files/Vertimai/EN_Translation_Dirbtinio_intelekto_naudojimo_Vilniaus_universitete_gair%C4%97s.pdf
Wang, J., S. Wu, Q. Guo, H. Lan, E. Janne, L. Wang, J. Zhang, et al. 2021. “Investigation and evaluation of randomized controlled trials for interventions involving artificial intelligence.” Intelligent Medicine 1(2): 61–69. https://doi.org/10.1016/j.imed.2021.04.006
Wiley. 2024. Best Practice Guidelines on Research Integrity and Publishing Ethics. Wiley. https://authorservices.wiley.com/ethics‐guidelines/index.html
Xiao, J. H., A. Bozkurt, M. Nichols, A. Pazurek, C. M. Stracke, J. Y. H. Bai, R. Farrow, et al. 2025. "Venturing into the unknown: critical insights into grey areas and pioneering future directions in educational generative AI research." TechTrends 69: 582–597. https://doi.org/10.1007/s11528-025-01060-6
Yan, L., L. Sha, L. Zhao, Y. Li, R. Martinez‐Maldonado, G. Chen, X. Li, Y. Jin, and D. Gašević. 2024. “Practical and ethical challenges of large language models in education: A systematic scoping review.” British Journal of Educational Technology 55(1): 90–112. https://doi.org/10.1111/bjet.13370
Zhou, Z., X. Ji, J. Zhang, Z. Zhao, X. Hei, and K. K. Raymond Choo. 2025. "Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment." In Security and Privacy in Cyber-Physical Systems and Smart Vehicles, edited by X. Hei, L. Garcia, T. Kim, and K. Kim, SmartSP 2024, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 622. Cham: Springer. https://doi.org/10.1007/978-3-031-93354-7_15
© 2025. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
Abstract
Generative AI's growing use in higher education research requires robust protocols for responsible use. This need arises from the potential for misuse and from continuing uncertainty around ethical concerns and intellectual property. The lack of clear rules on transparency in AI use, together with the "black box" nature of many AI systems, raises concerns about reproducibility and the possibility of biased or fabricated results. This paper focuses specifically on generative AI tools (e.g., LLMs such as ChatGPT and research-specific platforms such as Elicit and SciSpace). It presents the ETHICAL protocol (Establish your purpose, Thoroughly explore options, Harness the appropriate tool, Inspect and verify output, Cite and reference accurately, Acknowledge AI usage transparently, and Look over publisher's guidelines), a detailed guide for the ethical and responsible integration of generative AI into research. The protocol was created through a multi-step process: a scientometric review of current trends, a systematic review of researcher experiences, and a policy analysis of 74 documents from various stakeholders (authorities, universities, publishers, and publication manuals). This analysis shaped a seven-heading, nine-item checklist covering key aspects of responsible AI use, from setting clear research goals to verifying outputs and openly acknowledging AI assistance. The ETHICAL protocol gives practical examples and detailed explanations for each item, highlighting the importance of AI literacy and the careful selection of suitable tools. It also stresses the need to verify AI-generated content in order to reduce the risk of errors and fabricated information ("hallucinations"). The resulting protocol offers a practical, easy-to-use guide for researchers, encouraging responsible AI practices and upholding academic integrity. It provides a helpful tool for navigating the complex landscape of AI in research, ultimately supporting more transparent, reliable, and ethically sound scholarly work. Its broad adoption could substantially improve the responsible use of AI in higher education, building trust and furthering the growth of knowledge.
Details
1 Department of Educational Sciences, College of Education, Qatar University, Doha, Qatar, Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
2 Department of Educational Sciences, College of Education, Qatar University, Doha, Qatar
3 Educational Research Centre, College of Education, Qatar University, Doha, Qatar
4 Department of Chemical Engineering, Qatar University, Doha, Qatar





