This matters not just for academics, but for anyone relying on trustworthy information, from journalists and policymakers to educators and the public. Ensuring transparency in how AI is used protects the credibility of all published knowledge.
In education and research, AI can generate text, improve writing style, and even analyse data. It saves time and resources by allowing quick summarising of work, language editing and reference checking. It also holds potential for enhancing scholarly work and even inspiring new ideas.
Equally, AI is able to generate entire pieces of work. Sometimes it is difficult to distinguish original work written by an individual from work generated by AI.
This is a serious concern in the academic world – for universities, researchers, lecturers and students. Some uses of AI are seen as acceptable and others are not (or not yet).
As editor and editorial board member of several journals, and in my capacity as a researcher and professor of psychology, I have grappled with what counts as acceptable use of AI in academic writing. To find answers, I consulted various published guidelines.
The guidelines are unanimous that AI tools cannot be listed as co-authors or take responsibility for the content. Authors remain fully responsible for verifying the accuracy, ethical use and integrity of all AI-influenced content. Routine assistance does not need citation, but any substantive AI-generated content must be clearly referenced.
Let’s unpack this a bit more.
In understanding AI use in academic writing, it’s important to distinguish between AI-assisted content and AI-generated content.
AI-assisted content refers to work that is predominantly written by an individual but has been improved with the aid of AI tools. For example, an author might use AI to assist with grammar checks, enhance sentence clarity, or provide style suggestions. The author remains in control, and the AI merely acts as a tool to polish the final product.
This kind of assistance is generally accepted by most publishers as well as the Committee on Publication Ethics, without the need for formal disclosure. That’s as long as the work remains original and the integrity of the research is upheld.
AI-generated content is produced by the AI itself. This could mean that the AI tool generates significant portions of text, or even entire sections, based on detailed instructions (prompts) provided by the author.
This raises ethical concerns, especially regarding originality, accuracy and authorship. Generative AI draws its content from various sources such as web scraping, public datasets, code repositories and user-generated content – basically any content that it is able to access. You can never be sure about the authenticity of the work. AI “hallucinations” are common. Generative AI might be plagiarising someone else’s work or infringing on copyright and you won’t know.
Thus, for AI-generated content, authors are required to make clear and explicit disclosures. In many cases, this type of content may face restrictions. Publishers may even reject it outright, as outlined in the Committee on Publication Ethics guidelines.
Based on my reading of the guidelines, I offer some practical tips for using AI in academic writing. These are fairly simple and should be applicable across disciplines.
AI tools can undoubtedly enhance the academic writing process, but their use must be approached with transparency, caution, and respect for ethical standards.
Authors must remain vigilant in maintaining academic integrity, particularly when AI is involved. Authors should verify the accuracy and appropriateness of AI-generated content, ensuring that it doesn’t compromise the originality or validity of their work.
There have been excellent suggestions as to when the declaration of AI use should be mandatory, optional or unnecessary. If in doubt, the best advice is to declare the use of any form of AI (assisted or generated) in the acknowledgements.
It is very likely that these recommendations will be revised in due course as AI continues to evolve. But it is equally important that we start somewhere. AI tools are here to stay. Let's deal with them constructively and collaboratively.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The Conversation Africa is an independent source of news and views from the academic and research community. Its aim is to promote better understanding of current affairs and complex issues, and allow for a better quality of public discourse and conversation.
Go to: https://theconversation.com/africa