AI tools can be used for a variety of tasks in the scholarly publishing process, such as finding sources, analyzing data, and editing manuscripts. But which uses of AI in scholarly research and writing are acceptable?
Policies Regarding AI Usage in Scholarly Works
Many journal publishers have policies on how AI tools can be used in scholarly research and manuscript preparation. These policies often address issues such as:
- Authorship: Whether an AI tool can be listed as an official author of a manuscript. Most policies state that only humans can be manuscript authors.
- Usage: The extent to which authors can use AI tools in scholarship and manuscript preparation. For example, some journal publishers allow authors to use AI tools as an aid to writing (e.g., improving the syntax and readability of sentences) but not as a replacement for the author’s own writing (e.g., drafting sections of the manuscript).
- Disclosure: The extent to which authors must disclose their use of AI tools for their research analysis and manuscript preparation. For example, many publishers require that certain uses of AI be documented in the methods section of the manuscript.
- Ethics: The ethical and responsible use of AI tools for scholarship. This includes refraining from sharing private or confidential information with AI tools; refraining from uploading copyrighted content to AI tools without permission; and refraining from reproducing biased or false information generated by AI tools.
- Peer Review: Whether peer reviewers can use AI tools in the review process. Many publishers insist that peer review is a human process and that manuscripts under review should not be uploaded to AI tools.
Many journal publishers also use AI technologies to assist with various aspects of the manuscript review process, such as:
- screening submitted manuscripts to ensure that they fit the journal’s scope and have been formatted correctly
- detecting previously published (or parts of previously published) manuscripts
- detecting AI-generated text in manuscripts
- detecting conflicts of interest
- assessing the appropriate use of statistical methods
- selecting peer reviewers
- checking and correcting references and links
- generating accessory content, like article summary points
- monitoring manuscript impact post-publication
Copyright Considerations When Using AI
One thing to consider when working with AI-generated material is who, if anyone, owns the copyright to it. The U.S. Copyright Office has stated that copyright protection requires human authorship, so material generated entirely by AI cannot be copyrighted and is effectively in the public domain.
However, anything a human adds to AI-generated content can be protected, even something as simple as how the material is arranged. For example, the author of a graphic novel who used AI to create all of its images applied for copyright protection. The U.S. Copyright Office determined that while the individual images were not protected, the author's selection and arrangement of them in the novel as a whole could be.
Many creative professionals have expressed concern that their copyrighted works are being used to train the models behind many emerging AI tools, and some have filed lawsuits. In general, AI tools are not supposed to reproduce the exact works they were trained on, and at least some AI companies have built safeguards into their tools to prevent them from recreating existing works. However, there is no guarantee these safeguards always work, so if an AI tool generates something suspiciously close to a work still under copyright, you might want to think twice about using it.