AI-generated academic content: A controversy in education

Jashodhara Jindal, Class 12 student, Pathways School Noida

The use of AI-generated content is roiling academia. The question of whether using AI content is a form of plagiarism has become embroiled in polemics, with valid arguments on either side. Regardless, plagiarism, or copying without giving credit, is a serious violation of academic integrity. OpenAI’s ChatGPT has become inescapable in recent months. By leveraging an extensive training dataset and natural language processing techniques, the AI chatbot generates responses to user queries that closely resemble human-written language. It is already being used widely to complete academic assignments, reports, research and more.

ChatGPT-generated content might be considered plagiarism because ChatGPT does not credit its sources. But whether copying from ChatGPT itself is plagiarism is a contentious topic.

OpenAI, the owner of ChatGPT, in its terms of use, assigns to the user “all its right, title and interest in the output.” The company does not exercise copyright over its generated output; in fact, it passes all legal rights to the ownership and possession of that output on to the user. The user could hardly be accused of plagiarising material for which she owns the right and title.

ChatGPT uses a large language model to generate its responses. This model’s training dataset includes web pages, essays, magazine articles, other internet content and books. Some of those works may be protected by copyright, but because the model does not copy-paste text verbatim, infringement or plagiarism may be hard to prove or enforce. The gap, perhaps in technology but more likely in intent, is that OpenAI, unlike other prominent AI chatbots such as Perplexity AI, does not disclose its sources or the percentage contribution of individual texts to the final output. OpenAI puts the onus on users to ensure that the way they use the output does not violate any laws or guidelines.

When the famous political pundit Fareed Zakaria was accused of plagiarism, years before AI chatbots became known, his former editor Michael Kinsley examined the fine line that writers walk between ordinary research and borrowing without attribution. Several writers today, he suggested, are almost entirely “collecting and rearranging stuff that is the work of other people.” Perhaps ChatGPT is merely making this process more efficient. But in the case of students, it is questionable whether this process needs to be made more efficient.

There are other, less problematic ways of using AI chatbots. Search engines are already commonly used for research. ChatGPT may allow for a considerably expanded reading list by saving time and effort in generating summaries of individual pieces. It also offers multiple outline suggestions and can serve as a debate sparring partner for practice; it can expose students to many different points of view and help them absorb vast amounts of reading material in a short time. None of these applications, by itself, would be seen as academic dishonesty; they merely make the chatbot a perspective-broadening and productivity-enhancing tool.

Artificial Intelligence, used smartly, may well be a force multiplier for human ingenuity.

A leading argument favoring AI-generated content is that the output is a product of the user’s skill in supplying high-quality prompts. A simple experiment is enough to validate or invalidate this notion.

Based on a history assignment I received in eleventh grade, I used ChatGPT to generate two essays on Adolf Hitler’s rise to power: one with a low-quality prompt, “Write essay on Hitler rise to power 500 words”, and one with a high-quality prompt, “Write an essay on Hitler’s rise to power in Germany. Include all essential points such as the Beer Hall Putsch, propaganda, post-World War effects, economy, etc., and make sure to include necessary details. 500 words.” I then sent these two essays to my history teacher for evaluation under the International Baccalaureate History Paper 2 grading criteria. He found that both essays were of a similar grade level and couldn’t identify which essay was generated with which prompt. Though an isolated result, this finding indicates that, for a student simply looking to finish their homework with the least possible effort, ChatGPT is more an autonomous engine for plagiarism than a productivity-enhancement tool.
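For readers who want to repeat this comparison themselves, the sketch below shows how the same two prompts could be sent programmatically through the OpenAI Python SDK. This is only a minimal illustration under stated assumptions, not the procedure described above (the essays in the experiment were generated through the ChatGPT web interface), and the model name used here is an assumption since the article does not specify one.

```python
# Minimal sketch of the prompt-comparison experiment, using the OpenAI
# Python SDK. Assumptions: an OPENAI_API_KEY environment variable is set,
# and "gpt-3.5-turbo" stands in for whichever model ChatGPT was using.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LOW_QUALITY_PROMPT = "Write essay on Hitler rise to power 500 words"
HIGH_QUALITY_PROMPT = (
    "Write an essay on Hitler's rise to power in Germany. Include all "
    "essential points such as the Beer Hall Putsch, propaganda, "
    "post-World War effects, economy, etc., and make sure to include "
    "necessary details. 500 words."
)

def generate_essay(prompt: str) -> str:
    """Send a single prompt and return the generated essay text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for label, prompt in [("low-quality", LOW_QUALITY_PROMPT),
                      ("high-quality", HIGH_QUALITY_PROMPT)]:
    essay = generate_essay(prompt)
    print(f"--- {label} prompt ({len(essay.split())} words) ---")
    print(essay)
```

Note that the word count printed here is only indicative; the model does not guarantee an output of exactly 500 words, so a fair comparison would grade the essays on content, as the teacher in the experiment did.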

Academic institutions are wary of ChatGPT, and rightfully so. Individual effort and critical thinking are central tenets of education, and submitting AI-generated assignments compromises learning. Importantly, students ‘think’ when they write; the process of cogitation that takes place within their minds is as valuable as the output itself, if not more so. If all the ‘thinking’, ‘analyzing’, and ‘putting together’ is outsourced to a chatbot, the fundamental purpose of the exercise is defeated.

With the disruption caused by ChatGPT and other AI chatbots, an evolution of testing methods may be on the cards. Lilian Edwards, a prolific academic in the field of Internet Law, was recently quoted as saying, “At the moment, it’s looking a lot like the end of essays as an assignment for education.” Banning ChatGPT may become practically unenforceable in the not-so-distant future, as its usage becomes increasingly hard to detect. Guidelines on its ethical and responsible use may be more beneficial, both for students and for teachers and examiners.
