AI-assisted language tools (such as ChatGPT) have gained widespread attention recently, particularly for their ability to assist in drafting scientific papers. While these tools have the potential to improve the efficiency and speed of academic and technical writing, the ethics and best practices for their use are still evolving. They can generate useful information and content but are also prone to errors and inconsistencies.
The SPE Board has approved a new policy for authors who use AI language tools to generate content for their papers. Under this policy, AI-generated content may be used within SPE publications only under the following conditions:
- An AI language tool may not be listed as an author, as it cannot sign publishing agreements or transfers of copyright.
- Any AI-generated content used within a manuscript must be thoroughly vetted, fact-checked, and disclosed.
- If AI language tools are used in preparing a manuscript, their use must be clearly explained in the methodology or acknowledgement section of the paper. Including AI-generated content without such an explanation may be grounds for rejection of the work at the discretion of SPE and may result in a code of conduct review.
- The authors of the manuscript will be held responsible for any errors, inconsistencies, incorrect references, plagiarism, or misleading content introduced by the AI tool.
Because AI language tool technology is advancing rapidly, SPE plans to review and update this policy periodically to ensure it remains relevant and effective. Any modifications to the policy will be communicated transparently and in a timely manner.