OpenAI, the company behind DALL-E and ChatGPT, has released a free tool that it says aims to “distinguish between human-written and AI-written text.” It warns in its press release that the classifier is “not fully reliable” and “should not be used as a primary decision-making tool.” According to OpenAI, it can still be useful in trying to determine whether someone is passing off generated text as something a person wrote.
The tool, known as a classifier, is relatively simple to use, although you need a free OpenAI account. Just paste text into a box, click a button, and it will tell you whether it thinks the text is very unlikely, unlikely, unclear if it is, or likely to be AI-generated.
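Those verdict buckets amount to a threshold function over some internal AI-likelihood score. A minimal sketch of that idea follows; the cutoff values are purely illustrative guesses, since OpenAI has not published the thresholds its tool actually uses:

```python
def classify_verdict(ai_probability: float) -> str:
    """Map an AI-likelihood score (0.0-1.0) to the verdict labels the
    classifier displays. The thresholds are illustrative assumptions,
    not the ones OpenAI's tool actually uses."""
    if ai_probability < 0.10:
        return "very unlikely AI-generated"
    elif ai_probability < 0.45:
        return "unlikely AI-generated"
    elif ai_probability < 0.90:
        return "unclear if it is AI-generated"
    else:
        return "likely AI-generated"
```

The point of the sketch is just that the tool reports coarse buckets rather than a raw score, which is part of why borderline text lands in the “unclear” category.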
OpenAI says in its press release that it trained the model behind the tool on “pairs of human-written text and AI-written text on the same topic.”
However, it does offer some caveats about using the tool. Above the text box, the company notes some limitations:
It requires a minimum of 1,000 characters, which is approximately 150–250 words.
The classifier isn’t always accurate; it can mislabel both AI-generated and human-written text.
AI-generated text can be edited easily to evade the classifier.
The classifier is likely to get things wrong on text written by children and on text that isn’t in English, because it was trained primarily on English content written by adults.
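The length requirement is the one limitation that is trivial to check programmatically. A minimal sketch, assuming a hypothetical helper (this is not an official OpenAI API, just the 1,000-character minimum stated above):

```python
def check_min_length(text: str, min_chars: int = 1000) -> bool:
    """Return True if the text meets the classifier's stated minimum
    of 1,000 characters. That works out to roughly 150-250 words,
    assuming an average English word plus trailing space spans about
    4-7 characters."""
    return len(text) >= min_chars
```

Note the minimum is counted in characters, not words, so heavily punctuated or long-worded text clears the bar with fewer words.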
The company also says the classifier will sometimes “incorrectly but confidently” label human-written text as coming from an AI, especially if it is very different from anything in the training data. It adds that the classifier is still very much a “work in progress.”
These warnings seem justified. I ran a few snippets of my own work through the tool, and they were all marked “very unlikely to be AI-generated.” (Fooled them again.) However, it also said it was unclear whether this amazing BuzzFeed News article was written by AI, despite the notice at the bottom that says “This article was written entirely by ChatGPT.”
I also got an “unclear” result for some articles written by CNET Money, with others getting an “unlikely” rating. The outlet says these articles were “assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff,” so there’s likely some human editing in there (especially since CNET has added corrections to more than half of them). While CNET’s owner hasn’t said what specific system it uses for the articles, my co-worker Mia Sato reports that it uses a tool called Wordsmith for some of its content. OpenAI says its tool isn’t just for GPT and should detect “text written by AI from a variety of providers.”
I don’t mean to imply that the OpenAI classifier doesn’t work at all. I ran some examples of ChatGPT replies that people have posted online through it, and most of them were marked as “possibly” or “likely” AI-generated. OpenAI also says that in its tests the tool labeled AI-written text as “likely AI-written” 26 percent of the time and incorrectly flagged human-written text as AI-written 9 percent of the time, outperforming its previous AI text detection tool.
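Those two figures are a true-positive rate and a false-positive rate. As a reminder of how such detection rates are computed from labeled samples, here is a generic sketch (not OpenAI’s evaluation code):

```python
def detection_rates(predictions, truths):
    """Compute (true_positive_rate, false_positive_rate) for a binary
    AI-text detector. truths[i] is True when sample i really is
    AI-written; predictions[i] is True when the detector flagged it."""
    tp = sum(1 for p, t in zip(predictions, truths) if p and t)
    fp = sum(1 for p, t in zip(predictions, truths) if p and not t)
    ai_total = sum(1 for t in truths if t)
    human_total = len(truths) - ai_total
    tpr = tp / ai_total if ai_total else 0.0
    fpr = fp / human_total if human_total else 0.0
    return tpr, fpr
```

In OpenAI’s reported numbers, the tool catches 26 percent of AI-written samples while wrongly flagging 9 percent of human-written ones, so a “likely AI-written” verdict is evidence, not proof.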
OpenAI isn’t the first to create a tool for detecting ChatGPT-generated text; after the chatbot went viral, sites like GPTZero popped up almost immediately. That tool was made by a student named Edward Tian to “detect AI plagiarism.”
One place OpenAI is really focusing with this detection technology is education. Its press release states that “the identification of AI-written text has been an important point of discussion among educators,” as various schools have reacted to ChatGPT by banning or embracing it. The company says it is “engaging with educators in the US” to learn what they are seeing from ChatGPT in their classrooms, and it is asking for feedback from anyone involved in education.