On the online review battleground, it’s AI vs. AI.
AI trained to detect fake reviews is up against generative artificial intelligence that can spit out reviews that look human. It’s a clash with implications for consumers as well as the future of online content.
Saoud Khalifah, founder and CEO of Fakespot, a startup that uses AI to detect fraudulent reviews, said his company has seen an influx of AI-generated fake reviews. Fakespot is working on a way to detect content written by AI platforms like ChatGPT.
“What is very different today is that the models are so knowledgeable that they can write about anything,” he said.
Fake online reviews have been around for as long as real online reviews, but the problem has taken on new urgency amid broader concerns about the advanced AI technology now widely available on the internet.
After years of handling the problem on a case-by-case basis, the Federal Trade Commission last month proposed a new rule to crack down on fraudulent reviews. If finalized, the rule would prohibit writing fake reviews, paying for reviews, suppressing honest reviews and other deceptive practices, and would impose heavy fines on those who violate it.
But now it’s less clear what a fake review is or isn’t, and the technology to detect fraudulent reviews is still a work in progress.
“We don’t know, we really have no way of knowing, to what extent bad actors are actually using any of these tools, and how much may be bot-generated versus human-generated,” said Michael Atleson, an attorney with the FTC’s Division of Advertising Practices. “It really is a more serious concern, and it’s just a microcosm of the concerns that these chatbots will be used to create all kinds of fake content online.”
There are some signs that AI-generated reviews are already commonplace. CNBC reported in April that some reviews on Amazon had clear indications of AI involvement, many beginning with the phrase “As an AI language model…”
Amazon is among the many online retailers that have battled fake reviews for years. A spokesperson said the company receives millions of reviews every week and proactively blocked more than 200 million suspected fake reviews in 2022. The company uses a combination of human investigators and AI to detect fake reviews, relying on machine-learning models that analyze factors such as a user’s review history, login activity and relationships with other accounts.
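The article doesn’t describe Amazon’s models beyond the signals listed above. Purely as an illustration, a feature-based detector along those lines might look something like the sketch below; the feature names, synthetic data and model choice are all hypothetical and are not based on any real system.

```python
# Illustrative sketch only: a simple feature-based classifier for flagging
# suspicious reviews. All features, data and labels here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-review account features: prior review count,
# account age in days, reviews posted in the last 24 hours,
# number of other accounts linked by shared sign-in details,
# and whether the purchase was verified (1) or not (0).
X = np.column_stack([
    rng.poisson(20, n),          # prior review count
    rng.integers(1, 3650, n),    # account age (days)
    rng.poisson(1, n),           # reviews in last 24h
    rng.poisson(0.5, n),         # linked accounts
    rng.integers(0, 2, n),       # verified purchase flag
])
y = rng.integers(0, 2, n)        # 1 = suspected fake (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Probability that a new review's account profile looks suspicious.
print(model.predict_proba(X_test[:1]))
```

In a setup like the one the company describes, scores of this kind would presumably flag reviews for human investigators rather than block anything automatically.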
Complicating the issue further is the fact that AI-generated reviews aren’t necessarily against Amazon’s rules. An Amazon spokesperson said the company allows customers to post AI-generated reviews as long as they are authentic and don’t violate its policy guidelines.
The e-commerce giant has also indicated that it could use some help. In June, Dharmesh Mehta, Amazon’s vice president of worldwide selling partner services, called in a company blog post for greater collaboration among “the private sector, consumer groups and governments” to address the growing problem of fake reviews.
The crucial question is whether AI detection will be able to outsmart the AI that creates fake reviews. The first AI-generated fake reviews detected by Fakespot came from India a few months ago, Khalifah said, produced by what he calls “fake review farms,” companies that sell fraudulent reviews en masse. Generative AI has the potential to make their job much easier.
“It’s definitely a tough test for these detection tools to pass,” said Bhuwan Dhingra, an assistant professor of computer science at Duke University. “Because if the models exactly match the way humans type something, then you can’t really distinguish between the two. I wouldn’t expect to see any detector pass the test with flying colors anytime soon.”
Several studies have found that humans are not particularly good at spotting reviews written by AI. Many technologists and companies are working on systems to detect AI-generated content, and some, like OpenAI, the company behind ChatGPT, are even working on AI to detect their own AI.
Ben Zhao, a professor of computer science at the University of Chicago, said it’s “nearly impossible” for AI to rise to the challenge of removing AI-generated reviews, because reviews created by bots are often indistinguishable from human ones.
“It’s an ongoing cat-and-mouse chase, but at the end of the day there’s nothing fundamental that distinguishes a piece of AI-created content,” he said. “You will find systems that claim they can distinguish between human-written text and ChatGPT text. But the underlying techniques are pretty simple compared to what they’re trying to achieve.”
With 90% of consumers saying they read reviews while shopping online, that’s a prospect that worries some consumer advocates.
“It’s scary for consumers,” said Teresa Murray, who heads the consumer watchdog office of the US Public Interest Research Group. “AI is already helping dishonest companies post thousands of real-sounding, conversational-tone reviews in a matter of seconds.”