Artificial Intelligence (AI) and academic writing – irresistible force versus immovable object?
Introduction
Technology developer OpenAI jolted the public consciousness in late 2022 with the unveiling of ChatGPT, its AI application possessing the ability to ‘write’ convincing articles on virtually any subject.1 Numerous commentators have observed that as ChatGPT has moved into more sophisticated versions in 2023, distinguishing ‘chatbot’ AI-generated text from human-authored work created on the same subject is often very difficult. The academic community is starkly divided between those who welcome the ways that AI can enhance learning and those who abhor the notion that ‘chatbots will do students’ work for them’ – an anti-intellectual, anti-critical-thinking plagiarism machine.2
This brief critical discussion weighs the respective merits of these competing arguments. As AI technologies continue their relentless advance across modern society, it seems likely that sensible educators will harness AI’s power rather than fear it.
AI-generated text – a primer
ChatGPT and other AI-based text and image generation technologies are part of a continually evolving Large Language Model (LLM) technology sphere. Chatbot developers essentially ‘train’ their systems by creating algorithms that enable the system to ‘scrape’ information from many billions of websites and other online data sources. The vast amount of assembled data thus permits the chatbot – in theory – to answer any question or satisfy any text command that its user provides as instruction. A clear correlation is observed between the quality, detail, or complexity of the instruction and the AI-generated text the chatbot provides.3 A simple question such as ‘Who is Joe Biden?’ will generate an answer far less extensive than this variation: ‘What 10 factors distinguish the current Biden Administration’s approach to international free trade from its Trump Administration predecessor?’
One can readily imagine that if AI-generated text were understood and applied in ways that built better user knowledge in a particular subject, a clear societal benefit might result. In what is usually a matter of seconds, an individual who knows nothing about either hypothetical ChatGPT question posed above is given the desired information in an easily digestible format. In other words, chatbots provide portals to an infinite digital space where new information sources are minted online every second.4
Other ChatGPT applications have more ominous overtones – it would be naïve to accept that all AI-generated material is a force for good. An important ‘irresistible force versus immovable object’ point is reinforced here. Digital technological developments are relentless. The notion that governments can impose a moratorium on AI development of any kind seems far-fetched, given that ChatGPT and other LLM alternatives are being continually revised through open-source means (i.e. by sophisticated tech developers who might be based anywhere in the world). Much like the proverbial genie that cannot be returned to its bottle, it is strongly doubted that these AI developments will cease simply because influential international community leaders (political and industrial) declare this intention.5
In his recent (May 2023) ChatGPT review, Glorin Sebastian makes two excellent points that favour treating all AI developments with extreme caution. Sebastian explains how chatbots possess the capacity “… to generate text that closely mimics human conversation based on their training data” (the algorithms noted above).6 For this reason, Sebastian asserts that: (1) a significant risk logically exists that these systems can be manipulated in ways that “widely disseminate inaccurate or deceptive information” across many digital networks accessible to large, potentially gullible audiences; (2) this chatbot technology misuse could aggravate a variety of serious societal problems, including increased political divisiveness or sectarian mistrust.7 There is, however, an emerging view of AI’s benefits that arguably provides a counterweight to those who argue chatbots pose threats to society through their potential for unrestrained growth.8
Conversely, there is also merit in the view that AI can provide excellent support to professionals working in disciplines where continuing education (CE) is essential to maintaining the best possible practice standards. Writing from a nursing CE perspective, Berşe et al. observe that the AI risks highlighted above must never be discounted.9 They equally emphasize that in any assessment of overall AI chatbot utility, when the technology’s power is harnessed to assemble better CE materials and information access, ChatGPT and its counterparts will likely make important contributions “to enhancing nurses’ knowledge and skill levels, providing rapid and accurate information, and improving time management…”10 Specific concerns linking AI and academic dishonesty are now considered.
AI and academic dishonesty
One can immediately grasp why AI text generation is a source of significant concern for many academic institutions in 2023 and beyond. The two hypothetical chatbot questions discussed above could be repurposed into a university term paper or a higher-level dissertation topic. An unscrupulous student or seasoned scholar could access their ‘bot’, provide it with different combinations of instructions, and then collate the responses generated into a single, cohesive submission without ever declaring where the text originated.
A more sophisticated approach to using this material for any academic purposes might include the ‘author’ changing the AI-generated text or adding their own thoughts and supporting cited sources to make the work appear more ‘original’. In each case, the chatbot user is not conducting what scholars universally recognize as original work.
This conduct clearly contravenes most current academic institutional rules devised to prohibit academic dishonesty in all forms. The following UK policy extract (Oxford University) is offered as a typical example: AI is only permitted “where specific prior authorisation has been given” to the student or higher-level researcher.11 Alternatively, AI is allowed at Oxford where a professor (tutor) and the student have agreed that AI is part of a “reasonable adjustment for a student’s disability” (notably voice-recognition software permitting transcriptions, or spelling and grammar checking).
In the past year, much has been written regarding how AI use can either be detected when submissions are being graded or perhaps prevented altogether. Several sophisticated AI-focused anti-plagiarism software packages are now being utilized in academic institutions worldwide. Determining whether they function as intended is difficult given the recency of all chatbot developments and a lack of empirical data on the subject. The potential for an AI-driven plagiarism-detection arms race is clear – chatbot developers seeking ways to ‘beat’ the newest detection systems. Whether intellectual energy and development resources should be devoted to this narrow societal sphere is open to debate.
Chatbot use prevention has two dimensions. Considered from an ethics vantage point, anyone tempted to use chatbots in contravention of clearly stated academic rules must be able to live with themselves – they are cheating. One might also suppose that in many instances where the institution seeks to ensure that students are not using AI technology in violation of such policies, there is a simple two-part solution. In some courses, students would be required to complete assessments in a closed-book, supervised test environment where no outside study aids of any kind are permitted. Alternatively, in other courses the professor or course leader could administer oral examinations. Chatbots might aid a student’s examination preparation, but they will not necessarily carry the student to course or final degree success in these circumstances.
‘Sensible educators’ and academic writing
It is suggested that those academic community members who fear that AI-generated text will ultimately destroy all learning (presumably taking proper academic research and writing with it) have entirely missed the point regarding what ChatGPT and other digital technology applications represent: powerful learning tools. There have always been students, professors, and otherwise esteemed scholars who have taken shortcuts in their academic research and writing.12 Using another person’s work without proper attribution is perhaps carelessness rather than outright cheating, but over the past three centuries a core philosophy has anchored how honest scholarship is presented – original thinking appropriately supported by careful, consistent, and rigorous referencing.
The specific referencing and citation format that researcher-writers choose is usually dictated by a combination of accepted conventions in each academic field and personal preference.13 For example, in the American legal writing tradition, ‘Bluebook’ citations (characterized by their unique footnoting and reference-list style) have been an accepted standard for almost 100 years.14 In other instances, preferences are expressed for well-known Bluebook alternatives such as the Harvard, MLA (Modern Language Association), or APA (American Psychological Association) formats.15 An interesting point links AI-generated text, academic honesty, and referencing. No matter how sophisticated AI technologies may become, it is very difficult for chatbots to fully replicate the research trail taken by a true scholar assembling the sources they believe will give their work credibility and persuasive appeal for an intended audience.16
Conclusion – how Bluebook referencing was integrated into this article
The different arguments raised above regarding AI and academic writing will undoubtedly remain provocative for the foreseeable future. ChatGPT and its rival applications will continue to evolve. However, these AI developments also underscore how and why Bluebook referencing and its alternatives retain fundamental importance for anyone who aspires to produce excellent, original scholarship. By their nature, the Bluebook citation rules are intricate.17 They usually require greater care in use than the rules defining other referencing systems. Once learned and properly applied, Bluebook provides a reader with reasonable evidence that the writer ‘knows their stuff’.
1 Glorin Sebastian, Hello! This is your new HR Assistant, ChatGPT! Impact of AI Chatbots on Human Resources: A Transformative Analysis (May 28, 2023) 3-5
2 OpenAI ChatGPT (2023) <https://openai.com/chatgpt> accessed December 15, 2023.
3 GFN Mvondo, Generative Conversational AI And Academic Integrity: A Mixed Method Investigation To Understand The Ethical Use of LLM Chatbots In Higher Education (August 22, 2023) 2, 4-7
4 Id.
5 Matt Perkins, Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond 20(2) Journal of University Teaching & Learning Practice 7 (2023).
6 Glorin Sebastian, Exploring Ethical Implications of ChatGPT and Other AI Chatbots and Regulation of Disinformation Propagation (May 29, 2023) 3, 4
7 Id, 4, 5.
8 EU General Data Protection Regulation 2016 (GDPR).
9 Soner Berşe, Kamile Akça, and Ezgi Dirgar, The Role and Potential Contributions of the Artificial Intelligence Language Model ChatGPT. Ann Biomed Eng (2023) 3-6
10 Id.
11 Oxford Students, Plagiarism <https://www.ox.ac.uk/students/academic/guidance/skills/plagiarism>
12 Moses Fegher, Issues of Plagiarism in Academic Writing (April 26, 2023)
13 Purdue OWL (Purdue University) Bluebook Citation for Legal Materials
14 Id.
15 See e.g., Harvard University Harvard Guide to Using Sources
16 Patrick Ryan, Ethical Considerations in the Transformative Role of AI Chatbots in Education (June 8, 2023) pp.1-19
17 Ben Bratman, The Folly of the Embedded Full Citation: How the Bluebook and ALWD Manuals Encourage Weak Legal Writing 34 The Second Draft, 1, 3 (2021)
References (Bluebook)
Berşe, Soner; Akça, Kamile and Dirgar, Ezgi The Role and Potential Contributions of the Artificial Intelligence Language Model ChatGPT. Ann Biomed Eng (2023) 1
Bratman, Ben The Folly of the Embedded Full Citation: How the Bluebook and ALWD Manuals Encourage Weak Legal Writing 34 The Second Draft, 1, 3 (2021)
Fegher, Moses Issues of Plagiarism in Academic Writing (April 26, 2023)
General Data Protection Regulation 2016 (EU)
Harvard University Harvard Guide to Using Sources
Mvondo, GFN Generative Conversational AI And Academic Integrity: A Mixed Method Investigation To Understand The Ethical Use of LLM Chatbots In Higher Education (August 22, 2023)
OpenAI ChatGPT (2023) <https://openai.com/chatgpt> accessed December 15, 2023
Oxford Students, Plagiarism
Perkins, Matt Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond 20(2) Journal of University Teaching & Learning Practice 7 (2023)
Purdue OWL (Purdue University) Bluebook Citation for Legal Materials
Ryan, Patrick Ethical Considerations in the Transformative Role of AI Chatbots in Education (June 8, 2023) pp.1-19
Sebastian, Glorin Hello! This is your new HR Assistant, ChatGPT! Impact of AI Chatbots on Human Resources: A Transformative Analysis (May 28, 2023) 3-5
Sebastian, Glorin Exploring Ethical Implications of ChatGPT and Other AI Chatbots and Regulation of Disinformation Propagation (May 29, 2023)