Abstract
This study analysed the acknowledgment of ChatGPT in 1,759 academic publications indexed in Scopus and Web of Science up to August 2024. Around 80% of acknowledgments were related to text editing and proofreading, while only 5.3% mentioned ChatGPT for non-editorial research support, such as data analysis or programming. A small portion (3.5%) of researchers acknowledged ChatGPT for drafting sections of manuscripts. About two-thirds of corresponding authors who acknowledged ChatGPT were from non-English-speaking countries, and 75% of the publications with ChatGPT acknowledgments were published between January and August 2024. These findings suggest that ChatGPT was primarily acknowledged for language enhancement rather than more complex research applications, although some researchers may not have considered it necessary to mention its use, highlighting the need for transparency requirements from journals and publishers.
Introduction
Large Language Models (LLMs) like ChatGPT can help with scientific writing and manuscript production, such as editing and proofreading (Lechien et al., 2024), writing abstracts (Gao et al., 2023; Hwang et al., 2024), literature reviews (Kacena et al., 2024; Margetts et al., 2024), coding (Bucaioni et al., 2024; Coello et al., 2024), statistical analysis (Huang et al., 2024), creating graphics (Wu et al., 2023; Zheng et al., 2023), and generating research ideas, such as hypotheses (Park et al., 2024) or research topics (Rashidov, 2024). Nevertheless, the use of LLMs in academic publishing raises ethical concerns (e.g., Guleria et al., 2023; Jarrah et al., 2023; Lund et al., 2023), because current LLMs may produce incorrect information (e.g., Altmäe et al., 2023; Buholayka et al., 2023; Kim, 2024; Walters & Wilder, 2023) or flawed AI-generated experimental images (Zhu et al., 2024). Authors should therefore take full responsibility for any AI use in academic publishing. Moreover, peer reviewing AI-generated content can be challenging: one study found that reviewers detected only 39% of AI-generated content (Casal & Kessler, 2023). Hence, many academic publishers now require authors to clarify the use of AI in their publications (Lin, 2024). For instance, a study of 300 academic journals found that 59% had policies about AI authorship and 96.6% allowed AI tools like ChatGPT to improve manuscript quality (Lund & Naheem, 2024).
There is also evidence that many researchers are using LLMs in their research. Elsevier’s survey of 2,284 researchers found that about a third (31%) used generative AI for research activities and most (93%) believed that LLMs helped them with writing and reviewing (Elsevier, 2024). Another survey of 1,600 researchers showed that 47% found AI “very useful” for academic activities and 55% believed AI “saves scientists time and money” (Van Noorden & Perkel, 2023). Of 456 urologists questioned, 82% used ChatGPT for brainstorming and 58% for writing (Eppler et al., 2024), and of 229 medical journal authors, a quarter (24%) used LLMs for rephrasing, proofreading, or translation (Salvagno et al., 2024). However, only 22% of 271 academics believed that grammar correction with ChatGPT should be reported, while 52% thought rewriting should be mentioned in papers (Chemaya & Martin, 2024).
Several studies have estimated LLM usage in academic publications by analysing specific keywords linked to AI-generated content. An analysis of papers published in 2023 estimated that over 1% (about 60,000) probably included LLM-assisted writing, based on the term “meticulously” (Gray, 2024). An analysis of abstracts for 950,965 papers (2020–2024) from arXiv, bioRxiv, and Nature journals also estimated that 17.5% in Computer Science, 4.9% in Mathematics, and 6.3% in Nature journals were AI-modified, based on the four words most disproportionately used by LLMs compared to human authors: realm, intricate, showcasing, and pivotal (Liang et al., 2024b). In dental research, AI-assisted writing increased from 47.1 to 224.2 per 10,000 papers after the release of ChatGPT (Uribe & Maldupa, 2024). In medical science, a study of 14 million PubMed abstracts (2010–2024) estimated that 10% of 2024 abstracts used LLMs, based on terms such as delves, showcasing, and underscores, with increases of around 30% in some biomedical subfields (Kobak et al., 2024). A study of 987 Dimensions-indexed publications with ChatGPT, OpenAI, or Generative AI related terms (2022–2023) found that about 20% acknowledged ChatGPT for content generation (e.g., a portion of the publication) and 33% for assistance (e.g., editing, literature review, proofreading, revising, or data analysis) (Raman, 2023).
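The keyword-based estimation used in these studies can be illustrated with a short sketch. The marker words below are taken from the cited studies; the function and sample abstracts are hypothetical illustrations, not the actual method of any of the studies:

```python
import re

# Words reported as disproportionately frequent in LLM-edited text
# (Liang et al., 2024b; Kobak et al., 2024; Gray, 2024). A hit is only
# a weak signal of LLM assistance, not proof.
MARKER_WORDS = {"realm", "intricate", "showcasing", "pivotal",
                "delves", "underscores", "meticulously"}

def marker_hits(abstract):
    """Return the marker words that appear in an abstract."""
    tokens = set(re.findall(r"[a-z]+", abstract.lower()))
    return tokens & MARKER_WORDS

abstracts = [  # made-up examples
    "This paper delves into the intricate realm of citation analysis.",
    "We measure funding acknowledgment completeness in two databases.",
]
flagged = [a for a in abstracts if marker_hits(a)]
print(f"{len(flagged)} of {len(abstracts)} abstracts contain marker words")
```

In the cited studies, such word frequencies are compared against a pre-ChatGPT baseline corpus to estimate the excess attributable to LLMs, rather than flagging individual papers.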
LLMs are also being used in the peer review process. One study estimated that AI use in peer reviews rose to 10.6% for ICLR 2024 and 16.9% for EMNLP 2023 conference papers (Liang et al., 2024a). However, there has been debate about the reliability and biases of LLMs like ChatGPT as substitutes for human reviewers (e.g., Hosseini & Horbach, 2023; Tyser et al., 2024). For instance, one study showed that 78.5% of 720 comments from human reviewers had no match in ChatGPT’s reviews, suggesting that current LLMs have a limited ability to replicate human peer review of scientific research (Suleiman et al., 2024).
Despite the increasing use of ChatGPT in academic research and manuscript production, it is not fully known how it is formally acknowledged in academic publications. No previous studies have classified or analysed the content of ChatGPT acknowledgments across academic publications. This study fills this gap to provide a clearer understanding of the role of ChatGPT in academic publishing.
Methods
This study used both Scopus and Web of Science (WoS) to find ChatGPT acknowledgments in academic papers. Both databases were used to increase coverage of acknowledgments (Alvarez-Bornstein et al., 2017; Paul-Hus et al., 2016). For this, searches were conducted on 20 August 2024 to capture ChatGPT related terms in the funding texts of publications indexed in Scopus and Web of Science (WoS):
WoS query: FT = (ChatGPT OR "Chat-GPT*" OR "Chat GPT*" OR GPT-4* OR GPT-3* OR GPT3* OR GPT4*)
Scopus query: FUND-ALL (ChatGPT OR "Chat-GPT*" OR "Chat GPT*" OR GPT-4* OR GPT-3* OR GPT3* OR GPT4*)
A total of 1,759 publications with ChatGPT-related terms were identified and reviewed. Of these, 31% (538) appeared exclusively in Web of Science (WoS) and 25% (442) exclusively in Scopus, while 44% (779) were in both databases. This confirms the importance of using both databases for a more comprehensive search for acknowledgements in academic publications.
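The overlap figures above can be reproduced with simple set operations over unique record identifiers. The sketch below is illustrative (the DOIs are made up), not the study’s actual code:

```python
# Illustrative overlap calculation between two databases, keyed by DOI.
wos = {"10.1000/a", "10.1000/b", "10.1000/c", "10.1000/d"}  # made-up DOIs
scopus = {"10.1000/c", "10.1000/d", "10.1000/e"}

union = wos | scopus            # all unique publications retrieved
only_wos = wos - scopus         # found exclusively in Web of Science
only_scopus = scopus - wos      # found exclusively in Scopus
both = wos & scopus             # indexed in both databases

for label, subset in [("WoS only", only_wos),
                      ("Scopus only", only_scopus),
                      ("Both", both)]:
    print(f"{label}: {len(subset)} ({len(subset) / len(union):.0%})")
```

In practice, matching records across Scopus and WoS may also require title or author matching, since not every record carries a DOI.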
Classification of ChatGPT acknowledgements
Each publication was manually reviewed to classify the reasons for mentioning or acknowledging ChatGPT in the funding texts into four main categories and related subcategories. These categories cover the various ways ChatGPT has been acknowledged, ranging from editing and improving the readability of manuscripts to supporting the research process and other contributions to the research.
Manuscript drafting and editing: This category includes acknowledgements where ChatGPT played a direct role in editing or writing the manuscript.
Text editing or proofreading: enhancing the manuscript’s quality through language correction, grammar or spelling checks, improving readability, text shortening, or suggesting titles or subheadings. Examples:
The authors used OpenAI ChatGPT to check grammar, spelling, and improve readability and language.
ChatGPT was used in parts of this manuscript to shorten the text.
Thanks to ChatGPT for polishing the language and grammar of the article.
ChatGPT was used to help develop the title of this article.
Text drafting or writing: drafting or writing specific section(s) of the manuscript rather than, or in addition to, editing or proofreading, such as the abstract, introduction, discussion, conclusion, or other content (e.g., a glossary or definition):
The authors acknowledge that this article was partially generated by ChatGPT.
We used ChatGPT […] for assistance in generating some of the written content in the abstract and conclusion of this paper, which we copyedited for correctness.
The glossary definitions were generated by GPT-4 in May 2023 and edited by the authors.
The conclusion was created by GPT-4 on the basis of a summary also created by GPT-4.
Visual or graphical creation: creating visual or graphical content in the manuscript, such as graphical abstracts, figures, diagrams, table of contents (TOC) images, or other graphical content:
The cover art was created using GPT-4 and DALL·E.
Some elements (the crystals growing in the beaker, the crystals, and the droplets) of the TOC graphic were generated using DALL·E 3, OpenAI’s text to image model.
ChatGPT-4 with DALL·E were used to create Fig. 1.
The image of a thrombus in a blood vessel in the graphical abstract was created with the assistance of DALL-E 2 Open AI.
Language translation: translating texts between languages, such as translating abstracts or manuscript texts:
We acknowledge the use of ChatGPT to initially translate our English abstract into Spanish and French.
ChatGPT-4 was used to assist with the translation and revision of text from Thai to English.
The ChatGPT tool aided in translating the original manuscript from Chinese to English.
We express our gratitude to OpenAI for providing ChatGPT-4, which was used to translate concepts from Romanian into English.
Research process support: This category includes acknowledgements where ChatGPT supported research processes not directly related to manuscript writing, editing, or production.
Brainstorming or idea generation: using ChatGPT as a creative assistant to generate ideas or outline the research context:
During the preparation of this work, I used ChatGPT 3.5 to assist me with brainstorming, outlining, and proofreading.
GPT-3 was used to produce examples, brainstorm ideas, poems, and motivational speeches throughout the writing process.
ChatGPT was used to generate ideas for the outline of this article.
[…] ChatGPT and Gemini for help with brainstorming during our research work.
Programming or coding: assisting with programming, coding, or debugging tasks such as writing, refining, or troubleshooting code across various programming languages.
ChatGPT provided assistance with debugging Python code for data analysis.
We acknowledge ChatGPT for helping write MATLAB scripts used in computational modelling.
ChatGPT improved the R code used for analysis.
We acknowledge the use of ChatGPT to generate code snippets and help debug code issues.
Content generation or categorisation: generating research-related content (such as survey or interview questions/answers) or categorising information to support the research process:
OpenAI (ChatGPT 3.5) to transcribe and summarize audio-recordings from the workshops.
We used the generative artificial intelligence tool ChatGPT […] to draft the chatbot responses for the research survey.
ChatGPT was used for automatic coding and checking the accuracy of the human coding process.
We used ChatGPT to draft a fabricated article and review.
Access and technical support: ChatGPT was not explicitly credited for manuscript production or the research process, but acknowledgements recognised the provision of access to ChatGPT (e.g., paid versions or models), collaborations with experts, or technical support:
Z.Z. would like to extend special gratitude to […] from OpenAI for inspiring discussions on harnessing the potential of GPT‐4.
The subscriptions to ChatGPT was supported by the Department of Pharmacology […].
The research staff at OpenAI for providing access to and technical support for fine-tuning GPT.
We thank […] for valuable discussions and support in running GPT-3 computations.
Non-use and other mentions: Author declarations that ChatGPT had not been used in the research, or mentions where its role in the research was unclear.
Non-AI (ChatGPT) use in research: The content or analysis for the manuscript was produced without AI assistance.
I certify that ChatGPT was not utilised to produce any technical content, and I accept full responsibility for the contents of the paper.
This article was written without the assistance of ChatGPT or other large language models.
The authors did not use the generative AI tool ChatGPT in any portion of the manuscript.
We did not use ChatGPT or other Large Language Models to generate our paper.
General or unclear mentions: ChatGPT mentioned in a general context without specifying its exact role or application in the research process or manuscript production.
We acknowledge the contribution of ChatGPT 4.0 and Bard (Google).
At present, there is a growing interest among users in exploring ChatGPT’s applications.
Systematic investigation of different parameters associated with GPT-3 has not been done in this study.
ChatGPT is a tool developed by OpenAI, a for-profit research organisation.
Results
Most ChatGPT-related acknowledgments in the academic publications examined (87%) were related to Manuscript Drafting and Editing, such as proofreading, drafting, text shortening, title suggestions, creating visuals, and translation. In contrast, only 5.3% were associated with Research Process Support, such as programming, brainstorming, content generation, or data analysis. Moreover, 3.2% of acknowledgments mentioned Access and Technical Support, while 3.9% were categorised as Non-use and Other Mentions.
Figure 1 provides more detail on the specific classifications of ChatGPT acknowledgments. While most (80%) acknowledgments were for Text Editing or Proofreading, 3.5% of researchers acknowledged ChatGPT for Text Drafting or Writing of manuscript section(s), such as abstracts, introductions, discussions, conclusions, or other content, with many authors clarifying that they had reviewed the AI-generated text. Moreover, 1.6% of publications acknowledged ChatGPT for generating visual content, such as graphical abstracts and figures, with relatively higher use in Chemistry articles for TOC images (see Zheng et al., 2023). Despite the potential of large language models for programming, coding, or debugging (Bucaioni et al., 2024; Coello et al., 2024), only 2% of acknowledgments noted its application in these areas. ChatGPT was also acknowledged for content generation (2.4%) supporting the research process (Fig. 1).
Fig. 1 (see PDF for image): Types of ChatGPT acknowledgments in academic publications (n = 1,759)
Most acknowledgments were found in research articles (80%), followed by reviews (12%), conference papers (2.5%), and other editorial materials (5.5%). Moreover, only about 10% of publications with ChatGPT acknowledgements had ChatGPT-related terms in their titles, abstracts, or keywords, suggesting that most publications used ChatGPT in research beyond LLM and AI topics (e.g., “Chemical multiscale robotics for bacterial biofilm treatment”). About three-quarters (74.6%) of the publications with ChatGPT acknowledgments or mentions in the funding text were published in 2024, although the first version of ChatGPT (3.5) was made public in November 2022. This suggests a rapid emergence of publications acknowledging or using ChatGPT within a relatively short time. About 50% and 77% of the publications with ChatGPT acknowledgments (excluding non-use and unclear mentions) were from 10 WoS and OECD1 subject categories, respectively (Fig. 2).
Fig. 2 (see PDF for image): The 10 WoS (left) and OECD (right) subjects with the most ChatGPT manuscript acknowledgments
Of the 1,476 English-language publications with identified ChatGPT acknowledgments for manuscript drafting and editing, about 75% had corresponding authors affiliated with non-English-speaking countries. Presumably the main use is to help enhance the clarity and language of academic writing in English. Figure 3 shows the 15 countries with the most ChatGPT acknowledgments for manuscript editing and drafting.
The megajournal Scientific Reports had the highest number of identified acknowledgments of ChatGPT (26), followed by iScience (23) and Science of the Total Environment (19) (Fig. 4). Other than their size, one reason for finding acknowledgments in these journals could be their explicit policies on documenting Large Language Model (LLM) usage (for an analysis of publishers’ AI policies, see Lin, 2024). For instance, Scientific Reports2 requires authors to clarify the use of LLMs like ChatGPT. Similarly, iScience3 and Cell Reports4 from Cell Press also request authors to declare the use of generative AI in scientific writing. Elsevier journals, including Science of the Total Environment, likewise have policies requiring authors to disclose the use of generative AI technologies. Springer Nature’s5 policies cover the use of LLMs like ChatGPT in the methods section, editing, reviewing, and image generation (e.g., “AI-generated images and videos remain broadly unresolved, Springer Nature journals are unable to permit its use for publication”).
Fig. 3 (see PDF for image): Country affiliation of the corresponding authors for the 15 countries with the most ChatGPT acknowledgments for manuscript drafting and editing
Fig. 4 (see PDF for image): Top 10 journals with the most identified acknowledgments or mentions of ChatGPT
Conclusion and discussion
This study found that 80% of ChatGPT acknowledgments were related to editing and proofreading, consistent with a previous survey in which most academics (93%) believed that LLMs helped them with writing and reviewing (Elsevier, 2024), but much higher than the 33% found in a study of 987 Dimensions-indexed publications (Raman, 2023), presumably because the latter study used earlier data, with ChatGPT use growing subsequently. Moreover, the 1,759 acknowledgments of ChatGPT found here are far fewer than the 60,000 estimated using AI-related keywords for 2023 publications (Gray, 2024). In this study, only 8% of acknowledgments (143 out of 1,759) were related to the Computer Science or Artificial Intelligence WoS subjects, which is much lower than the 17.5% of LLM-modified content found in Computer Science papers from arXiv, bioRxiv, and Nature journals (Liang et al., 2024b). In the current study, about 26% (458) of the acknowledgments of ChatGPT were related to medical or health sciences, whereas another study estimated that 10% of 2024 abstracts from PubMed used LLMs (Kobak et al., 2024). Similarly, only three publications with ChatGPT acknowledgments were found in Dentistry, Oral Surgery & Medicine, far lower than the estimated AI-assisted writing in dental publications (Uribe & Maldupa, 2024). These results suggest that many researchers using LLMs like ChatGPT for manuscript editing or drafting are not acknowledging their use. One reason could be that some authors feel that tasks like editing, language polishing, or proofreading do not need to be reported (Chemaya & Martin, 2024). However, academics are likely to report more AI usage in future papers, given that about three-quarters (74.6%) of the ChatGPT acknowledgments found here came from papers published in 2024 alone.
The finding that 75% of corresponding authors who acknowledged ChatGPT for manuscript editing were from non-English-speaking countries suggests that some of these researchers use LLM tools like ChatGPT to help overcome language barriers, since there is evidence that non-native English speakers may need more time and effort for reading and writing papers than native English speakers (Amano et al., 2023). This could be further investigated to understand the extent to which LLMs are being used to reduce language-related barriers and how this affects the overall quality, efficiency, and speed of academic publishing across different subjects and countries.
This study found a substantial rise in the acknowledgment of ChatGPT in academic publications between January and August 2024 (74.6% of all acknowledgments). Hence, more acknowledgments of LLMs are expected in future publications, especially as many authors may not yet be fully aware of journal policies regarding the disclosure of AI use in their research, and some journals are still developing editorial policies on AI use in publications. The majority (80%) of these acknowledgments were for text editing and proofreading, while only 5.3% of researchers acknowledged using ChatGPT for Research Process Support, such as programming, brainstorming, content generation, or data analysis. These findings suggest that although LLMs like ChatGPT are becoming widespread in academic writing, their use remains primarily focused on language improvement rather than the more complex tasks that generative AI can perform.
Finally, some researchers might not consider it necessary to disclose their use of LLMs in their publications (Chemaya & Martin, 2024) or may not be aware of recent journal policies regarding AI use. Some acknowledgments might have also been missed in Scopus or WoS funding texts (Alvarez-Bornstein et al., 2017), which could lead to an underrepresentation of ChatGPT use in this study. Hence, the actual extent of ChatGPT use in academic research may be higher than reported here, but the findings still provide relevant observations for journal editors and publishers in shaping editorial policies regarding LLM use in academic publications.
Funding
No funding was provided for this study.
Data availability
The shared data provide the author’s categorisation of ChatGPT acknowledgments in 1,759 academic publications indexed in Scopus and Web of Science up to 20 August 2024. The data are available via https://doi.org/10.6084/m9.figshare.27242670.v1.
Declarations
Competing interests
The author is a member of the Distinguished Reviewers Board of Scientometrics.
1 https://incites.zendesk.com/hc/en-gb/articles/22516984338321-OECD-Category-Schema
2 https://www.nature.com/srep/author-instructions/submission-guidelines
3 https://www.cell.com/iscience/authors
4 https://www.cell.com/cell-reports/authors
5 https://www.springer.com/gp/editorial-policies/artificial-intelligence--ai-/25428500
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Altmäe, S; Sola-Leyva, A; Salumets, A. Artificial intelligence in scientific writing: A friend or a foe?. Reproductive BioMedicine Online; 2023; 47,
Alvarez-Bornstein, B; Morillo, F; Bordons, M. Funding acknowledgments in the web of science: Completeness and accuracy of collected data. Scientometrics; 2017; 112,
Amano, T; Ramírez-Castañeda, V; Berdejo-Espinola, V; Borokini, I; Chowdhury, S; Golivets, M; González-Trujillo, JD; Montaño-Centellas, F; Paudel, K; White, RL; Veríssimo, D. The manifold costs of being a non-native English speaker in science. PLoS Biology; 2023; 21,
Bucaioni, A; Ekedahl, H; Helander, V; Nguyen, PT. Programming with ChatGPT: How far can we go?. Machine Learning with Applications; 2024; 15, 100526. [DOI: https://dx.doi.org/10.1016/j.mlwa.2024.100526]
Buholayka, M; Zouabi, R; Tadinada, A. The readiness of ChatGPT to write scientific case reports independently: A comparative evaluation between human and artificial intelligence. Cureus; 2023; 15,
Casal, JE; Kessler, M. Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing. Research Methods in Applied Linguistics; 2023; 2,
Chemaya, N; Martin, D. Perceptions and detection of AI use in manuscript preparation for academic journals. PLoS ONE; 2024; 19,
Coello, CEA; Alimam, MN; Kouatly, R. Effectiveness of ChatGPT in coding: A comparative analysis of popular large language models. Digital; 2024; 4,
Elsevier. (2024). Insights 2024: Attitudes toward AI – Full report. Elsevier. https://www.elsevier.com/insights/attitudes-toward-ai
Eppler, M; Ganjavi, C; Ramacciotti, LS; Piazza, P; Rodler, S; Checcucci, E; Gomez Rivas, J; Kowalewski, KF; Belenchón, IR; Puliatti, S; Taratkin, M; Veccia, A; Baekelandt, L; Teoh, JY; Somani, BK; Wroclawski, M; Abreu, A; Porpiglia, F; Gill, IS; Murphy, DG; Canes, D; Cacciamani, GE. Awareness and use of ChatGPT and large language models: A prospective cross-sectional global survey in urology. European Urology; 2024; 85,
Gao, CA; Howard, FM; Markov, NS; Dyer, EC; Ramesh, S; Luo, Y; Pearson, AT. Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. NPJ Digital Medicine; 2023; 6,
Gray, A. (2024). ChatGPT “contamination”: Estimating the prevalence of LLMs in the scholarly literature. arXiv preprint arXiv:2403.16887.
Guleria, A; Krishan, K; Sharma, V; Kanchan, T. ChatGPT: Ethical concerns and challenges in academics and research. The Journal of Infection in Developing Countries; 2023; 17,
Hosseini, M; Horbach, SP. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Research Integrity and Peer Review; 2023; 8,
Huang, Y; Wu, R; He, J; Xiang, Y. Evaluating ChatGPT-4.0’s data analytic proficiency in epidemiological studies: A comparative analysis with SAS, SPSS, and R. Journal of Global Health; 2024; 14, 04070. [DOI: https://dx.doi.org/10.7189/jogh.14.04070]
Hwang, T; Aggarwal, N; Khan, PZ; Roberts, T; Mahmood, A; Griffiths, MM; Parsons, N; Khan, S. Can ChatGPT assist authors with abstract writing in medical journals? Evaluating the quality of scientific abstracts generated by ChatGPT and original abstracts. PLoS ONE; 2024; 19,
Jarrah, AM; Wardat, Y; Fidalgo, P. Using ChatGPT in academic writing is (not) a form of plagiarism: What does the literature say. Online Journal of Communication and Media Technologies; 2023; 13,
Kacena, MA; Plotkin, LI; Fehrenbacher, JC. The use of artificial intelligence in writing scientific review articles. Current Osteoporosis Reports; 2024; 22,
Kim, SJ. Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: A narrative review. Science Editing; 2024; 11,
Kobak, D., Márquez, R. G., Horvát, E. Á., & Lause, J. (2024). Delving into ChatGPT usage in academic writing through excess vocabulary. arXiv preprint arXiv:2406.07016.
Lechien, JR; Gorton, A; Robertson, J; Vaira, LA. Is ChatGPT-4 accurate in proofread a manuscript in otolaryngology-head and neck surgery?. Otolaryngology-Head and Neck Surgery; 2024; 170,
Liang, W., Zhang, Y., & Wu, Z. (2024b). Mapping the increasing use of LLMs in scientific papers. arXiv. https://arxiv.org/pdf/2404.01268
Liang, W., Izzo, Z., Zhang, Y., Lepp, H., Cao, H., & Zhao, X. (2024a). Monitoring AI-modified content at scale: A case study on the impact of ChatGPT on AI conference peer reviews. arXiv. https://arxiv.org/pdf/2403.07183
Lin, Z. Towards an AI policy framework in scholarly publishing. Journal of Scholarly Publishing; 2024; 25,
Lund, BD; Naheem, KT. Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals. Learned Publishing; 2024; 37,
Lund, BD; Wang, T; Mannuru, NR; Nie, B; Shimray, S; Wang, Z. ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology; 2023; 74,
Margetts, TJ; Karnik, SJ; Wang, HS; Plotkin, LI; Oblak, AL; Fehrenbacher, JC; Kacena, MA; Movila, A. Use of AI language engine ChatGPT 4.0 to write a scientific review article examining the intersection of Alzheimer’s disease and bone. Current Osteoporosis Reports; 2024; 22,
Park, YJ; Kaplan, D; Ren, Z; Hsu, CW; Li, C; Xu, H; Li, S; Li, J. Can ChatGPT be used to generate scientific hypotheses?. Journal of Materiomics; 2024; 10,
Paul-Hus, A; Desrochers, N; Costas, R. Characterization, description, and considerations for the use of funding acknowledgement data in Web of Science. Scientometrics; 2016; 108, pp. 167-182. [DOI: https://dx.doi.org/10.1007/s11192-016-1953-y]
Raman, R. Transparency in research: An analysis of ChatGPT usage acknowledgment by authors across disciplines and geographies. Accountability in Research; 2023; [DOI: https://dx.doi.org/10.1080/08989621.2023.2273377]
Rashidov, A. Expert algorithm to optimize the process of selecting a topic for a research project with the assistance of ChatGPT. 2024 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA); 2024; New York, IEEE: pp. 1-5.
Salvagno, M; Cassai, A; Zorzi, S; Zaccarelli, M; Pasetto, M; Sterchele, ED; Chumachenko, D; Gerli, AG; Azamfirei, R; Taccone, FS. The state of artificial intelligence in medical research: A survey of corresponding authors from top medical journals. PLoS ONE; 2024; 19,
Suleiman, A; von Wedel, D; Munoz-Acuna, R; Redaelli, S; Santarisi, A; Seibold, EL; Schaefer, MS. Assessing ChatGPT’s ability to emulate human reviewers in scientific research: A descriptive and qualitative approach. Computer Methods and Programs in Biomedicine; 2024; 254, 108313. [DOI: https://dx.doi.org/10.1016/j.cmpb.2024.108313]
Tyser, K., Segev, B., Longhitano, G., Zhang, X. Y., Meeks, Z., Lee, J., Garg, U., Belsten, N., Shporer, A., Udell, M., Te’eni, D., & Drori, I. (2024). AI-Driven review systems: evaluating LLMs in scalable and bias-aware academic reviews. arXiv preprint arXiv:2408.10365.
Uribe, SE; Maldupa, I. Estimating the use of ChatGPT in dental research publications. Journal of Dentistry; 2024; 149, 105275. [DOI: https://dx.doi.org/10.1016/j.jdent.2024.105275]
Van Noorden, R; Perkel, JM. AI and science: What 1,600 researchers think. Nature; 2023; 621,
Walters, WH; Wilder, EI. Fabrication and errors in the bibliographic citations generated by ChatGPT. Scientific Reports; 2023; 13,
Wu, C., Yin, S., Qi, W., Wang, X., Tang, Z., & Duan, N. (2023). Visual ChatGPT: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671.
Zheng, Z., He, Z., Khattab, O., Rampal, N., Zaharia, M. A., Borgs, C., Chayes, J. T., & Yaghi, O. M. (2023). Image and data mining in reticular chemistry using GPT-4V. arXiv preprint arXiv:2312.05468.
Zhu, L; Lai, Y; Mou, W; Zhang, H; Lin, A; Qi, C; Yang, T; Xu, L; Zhang, J; Luo, P. ChatGPT’s ability to generate realistic experimental images poses a new challenge to academic integrity. Journal of Hematology & Oncology; 2024; 17,
© Akadémiai Kiadó, Budapest, Hungary 2024.