There’s no denying the impact of generative AI on content creation.
Reactions to generative AI vary widely across professions. I recently took part in a debate on Reddit with people from educational and legal backgrounds, and each expressed strong views on whether ChatGPT is a benefit or a threat to their field.
From an educational standpoint, relying on a large language model to write essays, provide summaries, or form opinions on others’ work is cause for concern. Content generated by ChatGPT may not reflect the user’s own thinking and could be considered a form of academic dishonesty. On this point, I’m in full agreement with the educators.
Likewise, in the context of legal documentation, using a language model to draft contracts, produce terms and conditions, or generate other content raises important questions around ownership, accuracy, and suitability for commercial use.
However, I do see value in using generative AI as an editing tool, one that removes the need for a dedicated editor.
As a cost- and time-saving measure, I follow this process:
- I write the content and title myself.
- I ask ChatGPT to “improve” the content, paying close attention to the phrasing of the prompt (a code sketch of this step follows the list):
  ‘Improve the spelling, grammar and structure of this but retain the original content and tone’
- I then edit ChatGPT’s response as it sometimes goes a bit off course on tone or adds words I might not use myself.
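For anyone who would rather script the “improve” step than paste text into the chat window, here is a minimal sketch using the OpenAI Python client. The model name, the environment-variable setup, and the `improve` helper are my assumptions for illustration; the prompt is the one quoted above.

```python
# Minimal sketch: send a draft to ChatGPT with the editing prompt quoted above.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name here is an assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Improve the spelling, grammar and structure of this "
    "but retain the original content and tone:\n\n"
)

def improve(draft: str, model: str = "gpt-4o-mini") -> str:
    """Return the model's edited version of my draft."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT + draft}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "My draft post goes here."
    print(improve(draft))
```

The output still needs the manual pass described in the last step, since the model can drift on tone regardless of how the request is sent.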
This approach allows me to generate content that retains my original ideas and concepts while benefiting from corrections in spelling, grammar, and structure. In my experience, this method does not introduce new ideas, alter my opinions, or deviate from the original message. It is akin to working with an editor.
The question remains: is this approach considered cheating?