In the last two years, ChatGPT has turned the academic and business worlds upside down with its ability to generate coherent, well-written copy about pretty much any subject on earth in a matter of seconds.
The chatbot’s remarkable abilities have seen students at all educational levels turn to it – as well as the best ChatGPT alternatives, such as Google’s Bard – to write complex essays that would otherwise take hours to finish.
The release of these tools kickstarted an ongoing global conversation about a new phenomenon, often referred to as “ChatGPT plagiarism”. This guide covers the tools businesses and educational institutions are using to detect ChatGPT plagiarism, the dangers of cheating with ChatGPT – and whether using ChatGPT even counts as plagiarism at all.
- How to Detect ChatGPT Plagiarism
- Most Popular AI and ChatGPT Plagiarism Checkers
- Do AI & ChatGPT Plagiarism Checkers Actually Work?
- OpenAI’s AI Text Classifier: A Case Study
- Is Using ChatGPT or Bard Actually Plagiarism?
- The Dangers of Cheating With ChatGPT
- Does ChatGPT Plagiarize?
- Does Bard Plagiarize?
- Do Other AI Tools Plagiarize?
- Should I Use ChatGPT for My Essays or Work?
How to Detect ChatGPT Plagiarism
To detect ChatGPT plagiarism, you need an AI content checker. AI content checkers scan bodies of text to determine whether they’ve been produced by a chatbot such as ChatGPT or Bard, or by a human. However, as we’ll cover later on, many of these tools are far from reliable.
It’s slightly harder to detect plagiarism when it comes to code, something ChatGPT can also generate capably. The ecosystem of AI detection tools for code isn’t nearly as developed as the one for prose, although it has been growing since ChatGPT’s launch.
However, if you’re in a university environment, for example, and you’re submitting code well beyond your technical level, your professor or lecturer may have some very reasonable suspicions that you’ve asked ChatGPT to help you out.
The Most Popular AI and ChatGPT Plagiarism Checker Tools Reviewed
Since ChatGPT’s launch in November 2022, lots of companies and educational institutions have produced AI content checkers, which claim to be able to distinguish between artificially generated content and content created by humans. Now, a lot of companies are using Google’s chatbot Bard too, which uses a different language model.
However, the purported accuracy of even the most reputable AI content detection tools is fiercely disputed, and court cases between educational institutions and students falsely accused of submitting AI-generated content have already materialized.
The bottom line is this: No tool in this space is 100% accurate, but some are much better than others.
GPTZero
GPTZero is a popular, free AI content detection tool that claims that it’s “the most accurate AI detector across use-cases, verified by multiple independent sources”.
However, back in April 2023, a history student at UC Davis proved that GPTZero – an AI content detection tool being used by his professor – was incorrect when it labeled his essay as AI-generated.
We tested GPTZero by asking ChatGPT to write a short story. GPTZero, unfortunately, was not able to tell that the content was written by an AI tool:
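GPTZero has publicly described its approach as measuring properties like “perplexity” and “burstiness” – roughly, how predictable the text is and how much sentence structure varies. As a toy illustration only (this is not GPTZero’s actual algorithm, and the sample sentences are invented), here’s a minimal sketch of a burstiness-style heuristic:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' score: variation in sentence length.

    Human writing tends to mix long and short sentences, while AI text
    is often more uniform. Illustrative heuristic only - not how any
    commercial detector actually works.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: spread of sentence lengths relative to the mean
    return statistics.stdev(lengths) / statistics.mean(lengths)

human_like = "It rained. The streets, slick and empty, mirrored the grey sky for what felt like hours. Nobody came."
uniform = "The weather was bad today. The streets were very wet now. The people stayed at home."
print(burstiness(human_like) > burstiness(uniform))  # True
```

A single number like this is obviously a crude signal, which is part of why detectors built on statistical cues produce false positives.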
Duplichecker
Duplichecker is one of the first AI content detection services that will appear if you simply search for the term on Google. It claims to be 100% accurate at detecting AI content when presented with text, and is completely free to use.
However, as you can see from the result below, Duplichecker was not only unable to identify this text was written by ChatGPT, but it actually concluded that it was 100% human-generated – even though none of it was.
Writer
Writer is an AI content detection tool that, to be fair to it, doesn’t claim to be 100% accurate, and advises you to treat its judgments as an indication. It’s a good thing too, because the free version of Writer told us that the text below is 100% human-generated – but it’s actually just the first half of a story we asked ChatGPT to generate.
Funnily enough, when we pasted in the introduction of a recently-written Tech.co article that had no AI-generated content included, it came back as only 69% human-generated.
Writer has paid plans, but judging by the performance of its free tool, we wouldn’t recommend them. The Team plan costs $18 per user, per month for up to five users. There’s also an enterprise plan with custom pricing options.
Originality.ai
Originality.ai is certainly one of the more accurate AI content detection tools currently available, according to our research and testing.
The company has conducted an extensive study into AI content detection tools, feeding 600 artificially generated and 600 human-generated blocks of text to its own content detection system, as well as other popular tools that claim to fulfill a similar purpose.
As you can see from the results below, Originality.ai outperformed all of the tools included in the test:
The only downside to Originality.ai is that there isn’t a free plan, and you can’t even test it out for free as you can with the other tools included in this article. It costs $20 for 2,000 credits, which will let you check 200,000 words.
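To see how a benchmark like Originality.ai’s 600 AI / 600 human study turns into a headline accuracy figure, here’s a minimal sketch. The counts of correct classifications below are hypothetical, purely for illustration – they are not results from the actual study:

```python
def detector_accuracy(true_positives: int, true_negatives: int,
                      total_ai: int, total_human: int) -> float:
    """Overall accuracy on a balanced benchmark: the fraction of all
    samples (AI-written plus human-written) the detector labeled correctly."""
    return (true_positives + true_negatives) / (total_ai + total_human)

# Hypothetical detector results on a 600 + 600 benchmark:
# it correctly flags 570 of the AI texts and clears 540 of the human ones.
print(detector_accuracy(570, 540, 600, 600))  # 0.925, i.e. 92.5%
```

Note that a single accuracy number hides the split between the two error types – a detector can score well overall while still wrongly flagging a meaningful share of human writing.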
Copyleaks AI Content Detector
Copyleaks is a free-to-use AI content detector that claims to be able to distinguish between human-generated and AI-generated copy with 99.12% accuracy.
Copyleaks will also tell you if specific aspects of a document or passage are written by AI, even if other parts of it seem to be written by a human.
Copyleaks says it’s capable of detecting AI-generated content created by “ChatGPT, GPT-4, GPT-3, Jasper, and others”.
Copyleaks costs $8.33 per month for 1,200 credits (250 words of copy per credit). It’s used, the company says, by over 1,000 institutions and 300 enterprises across more than 100 countries.
In a test carried out by TechCrunch in February 2023, however, Copyleaks incorrectly classified various types of AI-generated copy – including a news article, an encyclopedia entry, and a cover letter – as human-generated. Furthermore, Originality.ai’s study referenced above found it to be accurate in only 14.50% of cases – a far cry from the 99.12% accuracy claim Copyleaks makes.
However, when we tested it, it did seem to be able to pick up that the text we entered was generated by ChatGPT. This happened in both our 2023 and 2024 tests:
During testing, Copyleaks was also able to correctly recognize human-generated text on several occasions. Despite the poor showings on other tests, it looks to be a better and more trusted option than some of the other tools featured in this article.
Turnitin Originality AI Detector
Turnitin is a US-based plagiarism detection company whose software is deployed by a variety of universities to scan their students’ work. Turnitin is designed to detect all kinds of plagiarism, and revealed in April 2023 that it had been investing in an AI-focused team for some time when it launched its AI content detection capabilities.
Turnitin says its tool can detect “97 percent of ChatGPT and GPT-3 authored writing”, with a false positive rate of less than 1 in 100.
However, the company also says that if it flags a piece of content as AI-generated, this should be treated as an “indication, not an accusation”. It also provides an extensive explanation of how it deals with false positives, and advises taking detection results with a pinch of salt.
The true accuracy of Turnitin’s AI detector was disputed by the Washington Post last year, as well as other sources. You’ll have to contact the company directly if you want to purchase the software or need more information on how it works, the website says – but it’s only really suitable for academic purposes.
Does AI Content Detection Actually Work?
As Turnitin knows, no AI content detection tool is 100% reliable – our tests prove that pretty resoundingly. Duplichecker – a top result on Google that claims to be “100% accurate” on its landing page – fell at the first hurdle.
However, few of the other tools we’ve discussed today actually claim to be 100% accurate, and virtually none claim to be free of false positives. Some, like GPTZero, post disclaimers warning against treating their results as gospel.
A number of university students accused of using artificial intelligence to produce essays have already been forced to prove that their work was original.
In Texas last year, a professor failed an entire class of students after wrongfully accusing them of using ChatGPT to write their essays. There is also a collection of reports – and studies like the one conducted by Originality.ai – suggesting that even the most capable plagiarism checkers aren’t nearly as accurate as they claim.
Even Turnitin’s AI content detector isn’t foolproof. In the recent, relatively small test conducted by the Washington Post that we discussed earlier, its accuracy fell far short of the 97% the company claims.
Originality.ai, on the other hand, is certainly one of the more robust ones available – and even its detection technology isn’t right every single time. However, having tested a variety of these tools, it seems to be the exception to quite a broad rule.
Besides, if false positives exist in any capacity, then there will always be room for students to claim their work is original and has simply been misidentified.
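Turnitin’s own claimed false positive rate of under 1 in 100 shows why this matters at scale. A quick back-of-the-envelope calculation (the essay count below is a made-up example, not a real figure):

```python
def expected_false_flags(num_human_essays: int, false_positive_rate: float) -> float:
    """Expected number of genuinely human-written essays that a detector
    will wrongly flag as AI-generated, given its false positive rate."""
    return num_human_essays * false_positive_rate

# Even at exactly 1%, a university scanning 5,000 human-written essays
# should expect around 50 wrongful flags per marking cycle.
print(expected_false_flags(5000, 0.01))  # 50.0
```

Any institution treating a flag as proof rather than an indication is, statistically, guaranteed to accuse some honest students.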
OpenAI’s AI Text Classifier: A Case Study
OpenAI, the maker of ChatGPT, used to have its own AI text classifier. We know this because we used it ourselves when originally writing this article. However, back in July 2023, the company withdrew the tool, stating that it wasn’t accurate enough.
That aligns with our own experience when we tested it. When we showed it a short story, written by its own ChatGPT tool, the checker didn’t pick up on the fact that it was AI-generated.
As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated. – OpenAI blog post
You can see our original example of the checker missing the fact that text was AI written, below:
Is Using ChatGPT or Bard Plagiarism?
It’s debatable whether using ChatGPT counts as plagiarism at all. Oxford Languages defines plagiarism as “the practice of taking someone else’s work or ideas and passing them off as one’s own.”
ChatGPT is not a person, and it’s not simply reproducing the work and ideas of other people when it generates an answer. So, by the dictionary definition, it’s not outright plagiarism.
Even if it was doing that, if you were honest about where it came from (i.e. ChatGPT), arguably, that wouldn’t be plagiarism anyway.
However, some schools and universities have far-reaching plagiarism rules, and consider using chatbots to write essays to be plagiarism. One student at Furman University failed his philosophy class in December 2022 after using ChatGPT to write an essay. In a 2023 case, a professor at Northern Michigan University reported catching two students using the chatbot to write essays for his class.
Using ChatGPT to generate essays and then passing this off as your own work is perhaps better described as “cheating” and is definitely “dishonest”.
The whole point of writing an essay is to show you’re capable of producing original thoughts, understanding relevant concepts, carefully considering conflicting arguments, presenting information clearly, and citing your sources.
There’s very little difference between using ChatGPT in this way and paying another student to write your essay for you – which is, of course, cheating.
With regard to Google’s Bard, the answer is a little more complicated. The same line of logic used above applies to Bard as it does to ChatGPT, but Bard has been marred by accusations of plagiarism and incorrectly citing things it pulls from the internet in a way ChatGPT hasn’t. So, using Bard might lead to you inadvertently plagiarizing other sources (more on this below).
The Dangers of Cheating With ChatGPT
Christopher Howell, an Adjunct Assistant Professor at Elon University, asked a group of students back in 2023 to use ChatGPT for a critical assignment and then grade the essays it produced for them.
He reported in a lengthy Twitter thread (the first part of which is pictured below) that all 63 students who participated found some form of “hallucination” – including fake quotes, and fake and misinterpreted sources – in their assignments.
Does ChatGPT Plagiarize in Its Responses?
No – ChatGPT isn’t pulling information from other sources and simply jamming it together, sentence by sentence. This is a misunderstanding of how Generative Pre-trained Transformers work.
ChatGPT – or more accurately the GPT language model – is trained on a huge dataset of documents, website material, and other text.
It uses algorithms to find linguistic sequences and patterns within its datasets. Paragraphs, sentences, and words can then be generated based on what the language model has learned about language from sequences in these datasets.
This is why if you ask ChatGPT the same question at the same time from two different devices, its answers are usually extremely similar – but there will still be variation, and sometimes, it offers up completely different answers.
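The pattern-based generation described above can be illustrated with a toy next-word sampler. Real language models learn probability distributions over tokens from their training data; here the probabilities are invented stand-ins, but the mechanism – sampling each word from a learned distribution – is why two runs on the same prompt produce similar yet non-identical output:

```python
import random

# Toy next-word probabilities, standing in for what a real language
# model learns from its training data (illustrative numbers only).
next_word = {
    "the": [("cat", 0.5), ("dog", 0.3), ("rain", 0.2)],
    "cat": [("sat", 0.7), ("slept", 0.3)],
    "dog": [("barked", 0.6), ("slept", 0.4)],
}

def generate(start: str, rng: random.Random) -> str:
    """Repeatedly sample the next word until we hit a word with no successors."""
    words = [start]
    while words[-1] in next_word:
        choices, weights = zip(*next_word[words[-1]])
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

# Same prompt, different random draws: similar but not identical output.
print(generate("the", random.Random(1)))
print(generate("the", random.Random(4)))
```

Because the output is sampled word by word rather than copied from a source document, there is no single “original” being reproduced – which is exactly why traditional copy-matching plagiarism checkers struggle with it.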
Does Bard Plagiarize in Its Responses?
ChatGPT’s biggest rival, Google’s Bard, has had significantly more issues with plagiarizing content since its launch than its more popular counterpart. Technology website Tom’s Hardware found that Bard had plagiarized one of its articles, and then apologized when one of the site’s staff called it out.
In May 2023, PlagiarismCheck told Yahoo News that it had generated 35 pieces of text with Bard, and found plagiarism above 5% in 25 of them – Bard had simply paraphrased existing content already published on the internet.
One big difference between Bard and ChatGPT that can perhaps explain this is that Bard can search the internet for responses, which is why it tends to deal better with questions relating to events after 2021, which ChatGPT struggles with. However, this also seems to mean it pulls data from sources in a less original way, without always citing them properly.
These examples may have been blips, but it’s good to know the risks if you’re using Bard for important work.
Do Other AI Tools Plagiarize?
Unfortunately, yes – and some companies have already embarrassed themselves by using AI tools that have plagiarized content. For example, CNET – one of the world’s biggest technology sites – was found to be using an AI tool to generate articles, and wasn’t transparent about it at all. Around half of the articles that CNET published using AI were found to have some incorrect information included.
To make matters worse, Futurism, which launched an investigation into CNET’s AI plagiarism, said that “The bot’s misbehavior ranges from verbatim copying to moderate edits to significant rephrasings, all without properly crediting the original”.
AI tools that don’t generate unique, original content – be it art or text – have the potential to plagiarize content that’s already been published on the internet. It’s important to understand exactly how the language model your AI tool is using works and also have tight oversight over the content it’s producing, or you could end up in the same position as CNET.
Should You Use ChatGPT for Essays or Work?
Using ChatGPT for Essays
The fact that ChatGPT doesn’t simply pull answers from other sources and mash sentences together means businesses have been able to use ChatGPT for a variety of different tasks without worrying about copyright issues.
But its internal mechanics also mean it often hallucinates and makes mistakes. It’s far, far from perfect – and although it’s tempting to get ChatGPT to write your essay for university or college, we’d advise against it.
Every educational institution’s specific submission guidelines will be slightly different, of course, but it’s highly likely that this is already considered “cheating” or “plagiarism” at your university or school. Plus, regardless of how accurate they are now, educational institutions are using AI content detectors, and these will improve over time.
Using ChatGPT at Work
Of course, lots of people are using ChatGPT at work already – it’s proving useful in a wide range of industries, and helping workers in all sorts of roles save valuable time on day-to-day tasks.
However, if you are using ChatGPT at work, we’d advise being open with your manager or supervisor about it – especially if you’re using it for important activities like writing reports for external stakeholders. It’s one of the more immediate ethical considerations relating to AI that businesses need to answer.
We’d also strongly advise heavily editing and closely reviewing any work you generate with ChatGPT, Bard, or any other AI tool. It’s unwise to put sensitive personal or company information into any chatbot – we know ChatGPT saves and uses user data, but there isn’t much public information about where these chats are stored or about OpenAI’s security infrastructure.
Using Other AI Tools for Essays or Work
Of course, Bard and ChatGPT aren’t the only AI chatbots out there – Anthropic’s Claude, for example, is another option. However, we’d be hesitant to throw our support behind smaller AI tools that aren’t backed by powerful language models. They won’t be as well-resourced, and you’re unlikely to find them as useful if you do experiment with using them for work.
The same rules still apply, however – be open with your manager and get sign-off on using them, don’t input any sensitive company data, and always review the answers you’re given.