ChatGPT proposes a fascinatingly easy route to plagiarism
GPT-3 (Generative Pre-trained Transformer 3) is a language generation model developed by OpenAI, capable of generating human-like text based on its training on a massive corpus of text data. ChatGPT is a variation of the GPT-3 model fine-tuned specifically for conversational language and is used in various applications like chatbots, language translation, and question-answering systems.
ChatGPT works by using a deep learning technique called transformer-based neural networks. The model has been trained on a large corpus of text data to predict the next word in a sentence, given the previous words as input.
In conversational systems, ChatGPT takes in a prompt or context, such as a question or conversation starter, and generates a response. The model uses its training to generate a probable continuation of the input text that is grammatically correct and semantically coherent, given the context.
The response is generated word-by-word, where each word is chosen based on the probabilities assigned by the model to the different possible words given the previous words in the sequence. The output is then refined through various post-processing steps to improve its fluency and coherence.
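The word-by-word process described above can be sketched in miniature. The following toy example (not OpenAI's actual code, and vastly simpler than a real transformer, which computes these probabilities over an entire vocabulary) picks each next word by sampling from probabilities assigned to candidates given the previous word:

```python
import random

# Hypothetical next-word probability table for illustration: given the
# previous word, each candidate next word has an assigned probability.
next_word_probs = {
    "the":   [("cat", 0.5), ("dog", 0.3), ("model", 0.2)],
    "cat":   [("sat", 0.6), ("ran", 0.4)],
    "dog":   [("barked", 0.7), ("sat", 0.3)],
    "model": [("predicts", 1.0)],
}

def generate(start, length, seed=0):
    """Generate text word by word, sampling each next word from the
    probabilities assigned to the candidates that follow the previous word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = next_word_probs.get(words[-1])
        if not candidates:  # no known continuation: stop early
            break
        choices, weights = zip(*candidates)
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 2))
```

A real model replaces the hand-written table with probabilities computed by a neural network, and adds the post-processing steps mentioned above, but the core loop — choose a word, append it, repeat — is the same.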
Here’s where things get interesting: everything before this sentence was written by the bot.
Though somewhat wordy and scientific, the text is extremely difficult to distinguish from words strung together through the cognitive abilities of a human author. Undeniably, this program is extremely fun to use; the intelligence of the creation is astounding and easy to get sucked into.
Exploring every corner of ChatGPT could likely take a lifetime—the bot seems endlessly expansive. However, the program is not without its limitations. For starters, the website itself warns that the text it produces may contain errors. It may display false information, as nothing is guaranteed.
However, ChatGPT is actually very advanced and works around requests that it may not be able to fulfill. I tried several times to ask the bot for a subjective opinion, such as whether a name is a good one, but rather than getting an error message, I got data supporting a general consensus instead. Additionally, facts about the subject matter were provided, so even though the bot itself didn't form opinions, it supported ideas surrounding the topic.
Unfortunately, the powers of ChatGPT are already being abused. Students are harnessing the essay-writing prowess of the bot to complete their school assignments quickly; before the bot was widely known, many of these papers seemed completely inconspicuous. However, once schools and other educational institutions caught wind of the illegitimate schoolwork being turned in using ChatGPT, they worked quickly to stop it.
The AP program through College Board has begun a crackdown on students using artificial intelligence to write their essays or any component of their test. This is considered plagiarism and an exam violation, which could lead to AP test scores being voided.
Thankfully, there are many methods of identifying which pieces of writing come from AI, and one of the most promising has been watermarking. Additionally, bots are being created to tell a reader whether or not a work was generated by an AI. When I attempted to use these so-called fact-checking sites, though, I found little success.
I pasted the first four paragraphs of this very editorial into three different AI-detectors: Copyleaks, Sapling, and Writer.com’s AI Content Detector.
All three gave this AI-generated text a rating of 99-100% human-generated.
As terrifying as it seems, for my own peace of mind, I choose to assume that these programs are in their early stages and may just be independent projects by less experienced developers. Nonetheless, it is certainly concerning that not one of the resources I used could identify the AI-generated writing.
From essay writing to artwork, AI has begun edging into our daily media. While AI artwork is, in my opinion, easier to identify as machine-made due to unusual shapes and minor discrepancies, writing is clearly more difficult to distinguish. In order to maintain legitimacy in works across the board, programs to identify artificially created media must be developed with just as much effort as the AI itself.
Eva Harshman is a senior who is thrilled to be entering her fourth and final year on staff as Editor-in-Chief. Apart from writing for The Central Trend, she...