Implications and editorial policies of artificial intelligence

Keywords: artificial intelligence, large language models, chatbots, scientific research, scientific writing, scientific publishing, editorial policies, guidelines, research ethics, research integrity

Abstract

In this entry of the School of Editors section, I present some of the issues surrounding the use of artificial intelligence (AI) in research in order to frame the synthesis and analysis of ten editorial policies on the use of AI, from arXiv, Elsevier, Emerald Publishing, the International Committee of Medical Journal Editors, Oxford University Press, Sage, Springer Nature, Taylor & Francis Group, Wiley and the World Association of Medical Editors. I address how large language models, such as ChatGPT, have affected the integrity and originality of scientific outputs, examining issues such as how AI is trained, its use in scientific writing, the accuracy of AI-generated content, and the impact these technologies are having on scientific research and publishing. Current editorial policies emphasize the need for transparency when AI tools are used, and they stress authors’ responsibilities and the ethical and practical limitations of using AI in research. I conclude by reflecting on the balance between the advantages and limitations of AI, so that it can be used in scientific research and publishing without compromising the values of ethics and integrity in science.

Author Biography

Juan D. Machin-Mastromatteo, Universidad Autónoma de Chihuahua, México

Juan D. Machin-Mastromatteo is a professor at the Universidad Autónoma de Chihuahua (UACH) and a member of the National System of Researchers (Level II). He holds a PhD in Information and Communication Sciences (Tallinn University), a Master’s in Digital Libraries and Learning (Oslo University College, Tallinn University and Parma University) and a Bachelor’s in Library Science (Universidad Central de Venezuela). He specializes in information literacy, action research, bibliometrics, open access and digital libraries, and has more than 120 scientific publications. He has facilitated more than 50 courses and has participated in more than 100 international events as a speaker, panelist, organizer or moderator. He is Associate Editor of the journals Information Development (SAGE) and Revista Estudios de la Información (UACH), and a member of the editorial boards of The Journal of Academic Librarianship (Elsevier) and IE Revista de Investigación Educativa (Red de Investigadores Educativos de Chihuahua). From 2015 to 2020 he published the column Desarrollando América Latina in Information Development. In 2019 he created the Juantífico Project, and since 2022 he has been co-host of the InfoTecarios podcast. Since 2023 he has published the section Escuela de editores (School of Editors) in Revista Estudios de la Información.

References

arXiv. (2023). arXiv announces new policy on ChatGPT and similar tools. https://blog.arxiv.org/2023/01/31/arxiv-announces-new-policy-on-chatgpt-and-similar-tools

Committee on Publication Ethics. (2023a). Artificial intelligence (AI) and fake papers. https://publicationethics.org/resources/forum-discussions/artificial-intelligence-fake-paper

Committee on Publication Ethics. (2023b). Authorship and AI tools: COPE position statement. https://publicationethics.org/cope-position-statements/ai-author

Elsevier. (2023a). Guide for authors. Journal of Biotechnology. https://www.elsevier.com/journals/journal-of-biotechnology/0168-1656/guide-for-authors

Elsevier. (2023b). Publishing ethics. https://beta.elsevier.com/about/policies-and-standards/publishing-ethics

Elsevier. (2023c). The use of generative AI and AI-assisted technologies in writing for Elsevier. https://beta.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier

Emerald Publishing. (2023). Publishing ethics: Find out more about publication ethics and our policies. https://www.emeraldgrouppublishing.com/publish-with-us/ethics-integrity/research-publishing-ethics

Heikkilä, M. (2023). Why detecting AI-generated text is so difficult (and what to do about it). MIT Technology Review. https://www.technologyreview.com/2023/02/07/1067928/why-detecting-ai-generated-text-is-so-difficult-and-what-to-do-about-it

International Committee of Medical Journal Editors. (2023). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. https://www.icmje.org/icmje-recommendations.pdf

Kung, T., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), e0000198. https://doi.org/10.1371/journal.pdig.0000198

Kung, T., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2022). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. MedRxiv. https://doi.org/10.1101/2022.12.19.22283643

Machin-Mastromatteo, J. D. [Juantífico]. (2022, April 20). ¿Spam en Google Académico? [Video]. YouTube. https://youtu.be/1lN1R0aV4BU

Orduña Malea, E. [@eomalea]. (2023, April 24). I'm afraid to say that several preprint servers are publishing online papers, which cite publications co-authored by me that do not exist. This is the @chatgptimpact. Google Scholar and ResearchGate are indexing those papers, and their fake citations, by the way. [Tweet]. Twitter. https://twitter.com/eomalea/status/1650527418577309699

Oxford University Press. (2023). Ethics. Oxford Academic. https://academic.oup.com/pages/authoring/journals/preparing_your_manuscript/ethics

Sage. (2023). ChatGPT and generative AI: Use of large language models and generative AI tools in writing your submission. https://us.sagepub.com/en-us/nam/chatgpt-and-generative-ai

Spinak, E. (2023). ¿Es que la Inteligencia Artificial tiene alucinaciones? SciELO en Perspectiva. https://blog.scielo.org/es/2023/12/20/es-que-la-inteligencia-artificial-tiene-alucinaciones

Springer Nature. (2023). Artificial intelligence (AI). Nature Portfolio. https://www.nature.com/nature-portfolio/editorial-policies/ai

Tang, G., & Eaton, S. (2023). A rapid investigation of artificial intelligence generated content footprints in scholarly publications. Research Square. https://doi.org/10.21203/rs.3.rs-3253789/v1

Taylor & Francis Group. (2023). Defining authorship in your research paper: Co-authors, corresponding authors, and affiliations. Author Services. https://authorservices.taylorandfrancis.com/editorial-policies/defining-authorship-research-paper

Tsai, C., Yeh, Y., Tsai, L., & Chou, E. (2023). The efficacy of transvaginal ultrasound-guided BoNT-A external sphincter injection in female patients with underactive bladder. Toxins, 15(3), 199. https://doi.org/10.3390/toxins15030199

Villegas-Ceballos, S. [Santiago Villegas-Ceballos]. (2023, December 2). Inteligencia Artificial en Bibliotecas - Introducción 2023-12 [Video]. YouTube. https://www.youtube.com/watch?v=_klpXNc7vKw

Wiley. (2023). Best practice guidelines on research integrity and publishing ethics. https://authorservices.wiley.com/ethics-guidelines/index.html

World Association of Medical Editors. (2023). Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. https://wame.org/page3.php?id=106

Published
2023-12-15
How to Cite
Machin-Mastromatteo, J. D. (2023). Implications and editorial policies of artificial intelligence. Revista Estudios de la Información, 1(2), 123-133. https://doi.org/10.54167/rei.v1i2.1448
Section
School of Editors
