Artificial intelligence (AI) is changing the landscape of higher education in both teaching and research. On the research side, policies and guidelines governing the use of AI, particularly for writing and reviewing manuscripts, papers and grant proposals, are evolving across federal agencies, journals and academic institutions. Investigators, project staff and students are responsible for knowing the policies and guidelines that apply to AI programs and tools, and for questioning how reliable those tools are in the research environment.
Can AI be listed as an author?
No. There is a consensus among journals and research communities that AI models “cannot meet the requirements for authorship as they cannot take responsibility for submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.”
The concept of “responsibility” encompasses more than ownership; it also entails accountability. Generative AI cannot be an author “because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.” Accountability is an essential element of authorship because it communicates liability and answerability for the work.
Can AI be used in writing and/or developing manuscripts?
Requirements for the use of AI in the writing process vary across journals and research disciplines. In general, “authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used.” Authors are responsible for ensuring that AI-generated outputs are appropriate and accurate. “Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete or biased.”
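For illustration only (this is not a template mandated by any journal), such a disclosure might read: “ChatGPT (OpenAI) was used to improve the grammar and readability of the Introduction; all AI-assisted text was reviewed, edited and verified by the authors.” Always follow the target journal’s specific requirements for the wording and placement of AI disclosures.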
According to International Committee of Medical Journal Editors (ICMJE) standards, authors should take steps to avoid plagiarism in AI-generated text and images, and any quoted material should be appropriately cited and attributed. In general, the AI model itself should not be cited as the author of quoted text. For example, when quoting output from ChatGPT, the cited author should be the model’s developer, OpenAI. The American Psychological Association (APA) provides guidance on how to cite AI in APA style.
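As of this writing, APA guidance suggests a reference entry along these lines, with the developer credited as author (the version date shown is illustrative): OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat. Consult the current APA Style guidance before citing, as recommendations for citing AI tools continue to evolve.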
When considering the use of generative AI in scientific writing, users must accept responsibility and accountability for the content produced by such tools. As indicated above, generative AI tools cannot be responsible or accountable. Because generative AI has been found to plagiarize and fabricate material, authors who rely upon AI-generated material without confirming the accuracy of the information will open themselves up to findings of academic and research misconduct should fabrication, falsification or plagiarism be contained within those AI materials. Accuracy and integrity in scientific work remain the researcher’s responsibility, for which they are accountable.
Can AI be used in writing grant applications?
Many of the concerns that arise when using AI to write or develop manuscripts (see above) also apply to writing grant applications. Grant applications are assumed to represent the original and accurate ideas of the applicant institution and researchers. Because AI tools have the potential to introduce plagiarized, falsified and fabricated content, grant applicants should scrutinize any AI-produced content; funding agencies will hold applicants accountable for any plagiarized, falsified or fabricated material.
Can AI be used in the peer review process?
The National Institutes of Health (NIH) has prohibited “scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals.” Using AI in the peer review process is a breach of confidentiality because these tools “have no guarantee of where data are being sent, saved, viewed or used in the future.” Both using AI tools to help draft a critique and using them to improve the grammar and syntax of a draft are considered breaches of confidentiality.
How should AI use be reported in my research?
Rigor and reproducibility standards are often set by individual journals and research disciplines. Transparent and complete reporting of the methodology and materials used is crucial to promoting reproducibility and replicability. The Association for the Advancement of Artificial Intelligence (AAAI) publishes a helpful reproducibility checklist.
References and recommended reading
“Authorship and AI Tools”
“Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals”
“Using AI in peer review is a breach of confidentiality” (National Institutes of Health, Office of Extramural Research)
“The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process” (National Institutes of Health, Office of Extramural Research)
“Tools such as ChatGPT threaten transparent science; here are our ground rules for their use”
“Chatbots, ChatGPT, and scholarly manuscripts: WAME recommendations on Chatbots and generative artificial intelligence in relation to scholarly publications”
“Nonhuman ‘authors’ and implications for the integrity of scientific publication and medical knowledge”
“Using AI to write scholarly publications”