According to a BestColleges survey, more than half of students use AI to cheat. Those numbers are in line with a Stanford University study that found 60 to 70 percent of students cheat. However, AI may soon cease to be the lazy student's answer to writing papers. A Wall Street Journal (WSJ) story reports that "OpenAI has a method to reliably detect when someone uses ChatGPT to write an essay or research paper" -- with 99.9% accuracy.
As my colleague David Gewirtz has pointed out, many programs already promise to detect AI-written text. However, he concluded, "I certainly wouldn't feel comfortable threatening a student's academic standing or accusing them of cheating based on the results of these tools."
OpenAI hasn't revealed in any detail how its new method can be near-perfect at identifying AI-written text. Whatever it is, it certainly doesn't work by spotting AI hallucinations -- it can't. As OpenAI co-founder John Schulman said last year, "Our biggest concern was around factuality because the model likes to fabricate things."
That will never change. According to Mohamed Elgendy, co-founder and chief executive of Kolena, a machine learning testing service, "The rate of hallucinations will decrease, but it is never going to disappear -- just as even highly educated people can give out false information."
Instead of relying on some deep, magical method of spotting AI text, OpenAI appears to be using a much simpler approach: the service may be watermarking its results.
In a newly revised blog post, "Understanding the source of what we see and hear online," OpenAI reveals it has been researching classifiers, watermarking, and metadata as ways to spot AI-generated content. We don't yet know exactly how this watermarking works.
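While the specifics remain under wraps, one well-studied approach from the academic literature -- the "green list" watermark described by Kirchenbauer et al. -- gives a feel for how text watermarking can work: the sampler is quietly nudged toward a pseudorandom "green" subset of the vocabulary keyed on each preceding token, and a detector that knows the key counts how often the text lands on green. Here's a minimal toy detector in Python; it's a sketch of that published technique, not OpenAI's actual method, and it assumes whitespace-split word tokens rather than a real tokenizer:

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], gamma: float = 0.5) -> set[str]:
    # Pseudorandomly rank the vocabulary with a hash keyed on the
    # previous token, then take the first gamma fraction as "green".
    key = hashlib.sha256(prev_token.encode()).hexdigest()
    ranked = sorted(vocab, key=lambda w: hashlib.sha256((key + w).encode()).digest())
    return set(ranked[: int(gamma * len(vocab))])

def watermark_z_score(tokens: list[str], vocab: list[str], gamma: float = 0.5) -> float:
    # Count tokens that fall in the green list chosen by their
    # predecessor. Unwatermarked text hits green at a rate of roughly
    # gamma; a watermarking sampler overshoots it, pushing the z-score up.
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(tok in green_list(prev, vocab, gamma)
               for prev, tok in zip(tokens, tokens[1:]))
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

# Ordinary human text should hover near zero; watermarked output of
# the same length would score far higher.
essay = "the cat sat on the mat and the dog sat on the rug".split()
print(round(watermark_z_score(essay, sorted(set(essay))), 2))
```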
We do know that OpenAI reports it's "been highly accurate and even effective against localized tampering, such as paraphrasing." However, the watermarking is "less robust against globalized tampering."
That means the feature doesn't work well on translated text, or on something as mindlessly simple as inserting special characters into the text and then deleting them. And, of course, it can't spot work from another AI model. For instance, if you feed the ChatGPT AI-text spotter a document created by Google Gemini or Perplexity, it probably won't be able to identify it as an AI-created document.
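The arithmetic shows why global tampering is fatal. Detection rests on a statistical surplus of "green" tokens, which easily survives a paraphrased sentence or two but not a wholesale rewrite that re-rolls every token choice. A back-of-the-envelope sketch, using made-up counts and a 50/50 green split:

```python
import math

def z_score(green_hits: int, n: int, gamma: float = 0.5) -> float:
    # Standard deviations above what unwatermarked text would produce.
    return (green_hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

print(z_score(540, 600))  # intact essay, ~90% green: z ~ 19.6
print(z_score(480, 600))  # a few paraphrased spans, ~80% green: z ~ 14.7
print(z_score(306, 600))  # round-trip translated, ~51% green: z ~ 0.5
```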
In short, with a little more effort, students and writers will still be able to pass an AI chatbot's work off as their own. Well, they can try anyway. In my experience with AI, the results still tend to be second-rate at best. But if that's good enough to get you a passing grade, it may be all you need.
At least one self-professed professor on Reddit isn't entirely convinced: "The problem is that you can just copy-paste the text into another program, translate it into another language, and then translate it back. But honestly, most students aren't going to do that, so it would catch pretty much everyone."
Of course, that might not bother OpenAI CEO Sam Altman, who told The Harvard Gazette, "Cheating on homework is obviously bad. But what we mean by cheating and what the expected rules are does change over time."
I don't know about that. Cheating is cheating, but this new tool in the OpenAI arsenal doesn't sound like it will help much to prevent it.
Oddly, while OpenAI is still wrestling with when -- or indeed if -- it should release this new service, the company will soon release a DALL·E 3 provenance classifier. This means that, eventually, almost every image you make with DALL·E will be marked as a DALL·E AI creation. OpenAI relies on C2PA metadata, a digital content standard, to mark and identify images. If you're a graphic designer who's been relying on DALL·E to make "original" graphics, it may be time to return to Photoshop.
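C2PA provenance travels inside the image file itself, as metadata (JUMBF) boxes whose manifest store is labeled "c2pa." As a rough illustration, here's a crude Python presence check that merely scans for those marker bytes; the file name is hypothetical, and a real verifier, such as the Content Authenticity Initiative's open-source c2pa tools, would parse the manifest and validate its cryptographic signatures rather than grep for a string:

```python
def has_c2pa_marker(path: str) -> bool:
    # C2PA manifest stores live in JUMBF metadata boxes labeled "c2pa".
    # Finding those bytes hints that a manifest is present; it does NOT
    # verify the signature the way a real C2PA validator would.
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

print(has_c2pa_marker("dalle_output.jpg"))  # hypothetical file name
```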