AI Briefing: FTC cracks down on deceptive marketing tactics, fake reviews

Next month, the Federal Trade Commission’s new rules on fake reviews will take effect as part of a crackdown on misleading marketing and tainted testimonials from both humans and AI models. 

The rules come amid the FTC’s broader actions against companies over their use of AI. Earlier this week, the agency separately announced legal actions against five companies as part of a wider crackdown on deceptive marketing and claims about AI products and services.

“Using AI tools to trick, mislead, or defraud people is illegal,” FTC chair Lina Khan said in a statement. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”

As for the agency’s new regulations for fake reviews, the updates were approved last month and go into effect Oct. 24. They update an existing ban on fake reviews, aiming to clarify and strengthen consumer protections and to add new penalties. Along with addressing fake reviews from celebrities, employees, regular people and fake identities, the changes also address fake reviews created with generative AI tools, a risk for consumers, businesses and online information overall.

The rules regulate businesses’ websites, independent review sites, social media platforms, advertising content and other types of marketing material. Here’s a look at what the changes address, what they don’t address, and other information to know before enforcement begins in a few weeks.

What kinds of reviews are banned

The new rules outline several types of reviews banned by the FTC. Businesses can’t create, buy or sell consumer or celebrity reviews that misrepresent a reviewer’s experience. The rules also ban offering incentives for positive or negative reviews, bar reviews by executives or employees that lack proper disclosures, and restrict solicited reviews from relatives. Businesses also can’t misrepresent their website’s review section as independent, can’t threaten reviewers and can’t buy or sell fake influence in the form of followers or view counts in a commercial capacity.

“AI tools make it easier for bad actors to pollute the review ecosystem by generating, quickly and cheaply, large numbers of realistic but fake reviews that can then be distributed widely across multiple platforms,” reads a footnote in the rules’ finalized text. “AI-generated reviews are covered by the final rule, which the Commission hopes will deter the use of AI for that illicit purpose.”

The FTC’s updates also help clarify and consolidate existing rules into a more cohesive set, said Mary Engle, evp of policy at BBB National Programs. A former member of the FTC’s division of advertising practices, Engle said the agency seems to be trying to distinguish “which particular practices would always be illegal versus some that might not always be illegal, or where it would be harder to draw the line.” Instead of policing those gray areas, the rules go after clearly illegal or deceptive behavior.

When it comes to AI-generated content, large language models make it harder to identify whether something is part of a globally orchestrated network of fake reviews. Proper disclosures remain key across reviews, advertising and other types of endorsements.
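One reason the detection problem has shifted: copy-paste review rings could historically be flagged with simple text-similarity checks, which lose their bite once a model paraphrases every fake review from scratch. Below is a minimal sketch of that older style of check, using TF-IDF cosine similarity; the sample reviews and the 0.8 threshold are illustrative assumptions, not any platform’s actual method (requires scikit-learn):

```python
# Flag pairs of suspiciously similar reviews -- a classic signal of
# copy-paste fake-review networks. LLM-written fakes are paraphrased
# per review, so a check like this largely misses them.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = [
    "Great product, fast shipping, five stars, would buy again!",
    "Great product and fast shipping. Five stars, would buy again!",
    "The strap broke after two weeks; support never answered my emails.",
]

SIMILARITY_THRESHOLD = 0.8  # arbitrary cutoff for this sketch

tfidf = TfidfVectorizer().fit_transform(reviews)  # sparse doc-term matrix
scores = cosine_similarity(tfidf)                 # pairwise similarity matrix

for i, j in combinations(range(len(reviews)), 2):
    if scores[i, j] >= SIMILARITY_THRESHOLD:
        print(f"Near-duplicate pair ({scores[i, j]:.2f}): review {i} vs review {j}")
```

The first two reviews score well above the cutoff, while a model-generated paraphrase of either would typically score far lower, which is part of why coordinated fake reviews are now harder to spot from text alone.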

While some might assume reputable companies avoid fake reviews to protect their brands, that’s not always the case. Still, Engle thinks the risk of higher fines and reputational damage might motivate companies and reviewers to comply and preserve their goodwill with customers. “One of the nice things about the internet is that the truth will be found out about fakery, and then it can really form a backlash,” she said.

“The reason these rules are important is because everyone relies on reviews,” Engle said. “You need them to be valuable and legitimate. But because everyone relies on reviews, there’s a huge incentive to fake them or bloat them in some way. I think the FTC is trying to counteract the incentives so that it happens less frequently.”

Endorsement, enforcement and the risks of AI

Companies like Google and Yelp have endorsed the changes, but some experts don’t think the updates go far enough, even if they’re a step in the right direction. Rather than regulating social media and e-commerce giants, the rules only prevent businesses from creating or buying fake reviews. And because much of the fake review industry operates outside U.S. jurisdiction, enforcement faces added regulatory challenges.

“The marketplace is already saturated with so many fake online reviews,” said former criminal investigator Kay Dean, who’s now the founder of watchdog website Fake Review Watch. “With the advent of AI, I anticipate that the problem will only get worse. I’ve experimented with AI-generated reviews and was not surprised to see how easy it was to quickly spit out content. It’s not easy to actually point out whether fake reviews are written by a real person or generated by AI, however.”
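Dean’s point about detection is easy to demonstrate. The crude surface statistics sometimes assumed to reveal machine text, such as sentence length and vocabulary variety, overlap heavily between human- and model-written reviews. A toy sketch with two invented sample reviews (standard library only; this is not a real detection tool):

```python
import re
import statistics


def style_features(text: str) -> dict:
    """Crude stylometric features sometimes assumed to reveal AI text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "mean_sentence_len": statistics.mean(len(s.split()) for s in sentences),
        "type_token_ratio": len(set(words)) / len(words),  # vocabulary variety
    }


human_review = "Bought this for my dad. He loves it! Battery dies fast though."
ai_style_review = "This product exceeded my expectations. Setup was simple. Battery life is short."

print("human:   ", style_features(human_review))
print("ai-style:", style_features(ai_style_review))
```

Both samples produce similar, unremarkable numbers; reliably separating them would require much stronger signals, and for short review text those signals often don’t exist.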

Some say the FTC might have chosen not to regulate third-party platforms in order to avoid Section 230 issues regarding how the government is allowed to regulate social media. However, Dean said the FTC still could have taken other steps. For example, she said it could have required platforms to show users how many fake or deceptive reviews the platform has removed from a given business’s page, identify all reviewers more thoroughly, and provide users access to all reviews — including removed ones.

“These, and other specific recommendations I provided the FTC, would provide much more transparency for consumers to see what is actually going on,” she said. “Wouldn’t you want to know that Google or Yelp had removed dozens of fake reviews for a contractor whom you were considering hiring to complete a $50,000 kitchen remodel?”
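Nothing in the final rule requires any of this, but the disclosure Dean describes is straightforward to model. A hypothetical sketch of what a per-business moderation summary might contain, with every type and field name below invented for illustration (Python 3.9+):

```python
# Hypothetical shape for the per-business transparency disclosure Dean
# describes: counts of removed reviews plus access to the removed
# reviews themselves. No platform or regulation defines this today.
from dataclasses import dataclass, field


@dataclass
class RemovedReview:
    review_id: str
    posted_at: str        # ISO 8601 date
    removed_at: str
    removal_reason: str   # e.g. "undisclosed incentive"
    full_text: str        # Dean argues removed reviews should stay visible


@dataclass
class ModerationDisclosure:
    business_id: str
    total_reviews: int
    removed_review_count: int
    removed_reviews: list[RemovedReview] = field(default_factory=list)


disclosure = ModerationDisclosure(
    business_id="acme-kitchen-remodel",
    total_reviews=212,
    removed_review_count=37,
    removed_reviews=[
        RemovedReview(
            review_id="r-1041",
            posted_at="2024-05-02",
            removed_at="2024-05-09",
            removal_reason="undisclosed incentive",
            full_text="Best contractor in town, five stars!",
        )
    ],
)
print(f"{disclosure.removed_review_count} of {disclosure.total_reviews} reviews removed")
```

A disclosure like this would let a consumer weigh, say, a contractor’s remaining reviews against the volume the platform already judged fake.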

Matt Schwartz, policy analyst at Consumer Reports, said his watchdog group is generally supportive of the new rules. Although the finalized rules dropped some of the originally proposed text about issues like hijacked reviews on Amazon, he thinks the added transparency could improve the review ecosystem, while higher penalties might help deter bad actors.

“The whole enforcement question holds the key,” Schwartz said.

Prompts and Products: AI news and announcements

  • Google announced new updates for Gemini and highlighted how a range of companies are using the large language model for various products and services.
  • Several key members of OpenAI departed the company, including CTO Mira Murati and top members of the research team. The startup is reportedly moving away from its nonprofit model to become a for-profit company. (It also rolled out the new advanced voice feature, which faced controversy this spring.)
  • Meta debuted a range of new AI updates at its annual Meta Connect event, including Meta AI updates, a new Llama 3.2 model and new features for the Ray-Ban Meta smart glasses.
  • The deepfake detection startup Reality Defender and Intel are tracking election-related AI-generated misinformation.
  • Apple CEO Tim Cook and late night host Jimmy Fallon took a walk in Central Park as part of Apple’s efforts to market the new iPhone 16 and its Apple Intelligence features. 
  • More than 100 Hollywood actors and producers signed a letter urging California’s governor to sign AI safety legislation in the state.
  • During Climate Week at the UN, a nonprofit unveiled a new art installation in New York City’s Bryant Park to raise awareness about the large amounts of energy and water it takes to power AI models.
  • Notion, the productivity app, announced new generative AI features for search, analysis, content generation and other tools.
  • The open-source AI platform Hugging Face announced a new macOS app.
  • More than 100 companies have signed a new EU AI Pact to drive trustworthy and safe AI.

Other AI stories this week:

  • Business Insider used an AI-based paywall strategy to increase conversions by 75%. (Digiday)
  • A new report by 404 Media says Google’s AI-generated images of mushrooms could spread misleading and dangerous information.
  • “An Outsider Critiqued Meta’s Smart Glasses. Now She’s in Charge of Them” (Bloomberg)
  • EU antitrust chief Margrethe Vestager spoke with Axios about the Google adtech antitrust case, AI and other issues facing Big Tech.
  • Perplexity is reportedly in talks with top brands to introduce ads in Q4. (FT)
  • The San Francisco Chronicle debuted a new “Kamala Harris News Assistant” AI chatbot to inform readers about the candidate and the presidential election.
  • “Hacker plants false memories in ChatGPT to steal user data in perpetuity.” (Ars Technica)