AI Briefing: Watermarking AI content doesn’t go far enough, researchers warn

As the tech industry rallies around watermarking AI-generated content and other commitments, some experts warn more work needs to be done.

In a new report released today, Mozilla researchers suggest popular methods for disclosing and detecting AI content aren't effective enough to prevent risks related to AI-generated misinformation. The analysis notes that the current guardrails used by many AI content providers and social media platforms aren't strong enough to deter malicious actors. Along with "human-facing" methods, such as labeling AI content with visual or audible warnings, researchers analyzed machine-readable watermarking methods, including cryptographic signing, embedded metadata and added statistical patterns.
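For a sense of what "adding statistical patterns" can look like in practice, here is a minimal, hypothetical Python sketch of a green-list-style text watermark: a generator nudges token choices toward a pseudorandom subset of the vocabulary, and a detector later measures how often tokens land in that subset. The toy vocabulary, bias level and token names are illustrative assumptions, not any vendor's actual scheme.

```python
# Hypothetical sketch of a statistical-pattern watermark (not Mozilla's or any vendor's method).
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary; real schemes use an LLM's tokenizer

def green_set(prev_token: str, fraction: float = 0.5) -> set:
    """Seed a PRNG with the previous token so generator and detector agree on the 'green' subset."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length: int = 200, bias: float = 0.9) -> list:
    """Stand-in for generation: prefer green-list tokens with probability `bias`."""
    tokens = ["tok0"]
    rng = random.Random(42)
    for _ in range(length):
        greens = green_set(tokens[-1])
        pool = list(greens) if rng.random() < bias else VOCAB
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens: list) -> float:
    """Detection: the share of tokens that fall in their predecessor's green list."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_set(prev))
    return hits / (len(tokens) - 1)

watermarked = generate_watermarked()
plain_rng = random.Random(7)
unmarked = [plain_rng.choice(VOCAB) for _ in range(200)]  # no watermark applied
print(f"watermarked: {green_fraction(watermarked):.2f}")  # well above 0.5
print(f"unmarked:    {green_fraction(unmarked):.2f}")     # close to 0.5
```

As Mozilla's researchers point out, patterns like this can be weakened or stripped by paraphrasing and editing, which is part of why they argue technical marks alone aren't enough.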

The report warns of inherent risks when AI-generated content produced at scale meets the internet's current distribution dynamics. Focusing on technical solutions could also distract from fixing broader systemic issues such as hyper-targeted political ads, according to Mozilla, which added that self-disclosure alone isn't enough.

“Social media, a key infrastructure for content circulation, both accelerates and amplifies its impact,” the report’s authors wrote. “Moreover, the well-documented issue of social media platforms algorithmically incentivizing emotional and agitating content could lead to a prioritization of synthetic content distribution, creating a ‘doubling down’ effect.”

According to Mozilla, the best approach combines technical solutions and greater transparency with improved media literacy and new regulation. Mozilla also pointed to the European Union's Digital Services Act (DSA), describing it as a "pragmatic approach" that requires platforms to implement measures without prescribing the specific solutions they must use.

Instead of relying on watermarks, some companies are building their own tools for detecting AI-generated deepfakes and other misinformation. Pindrop, an AI audio security provider, has developed a new tool for detecting AI audio based on patterns found in phone calls. The tool, released last week, was built on a data set of 20 million AI audio clips spanning more than 100 different text-to-speech tools. Pindrop is known for identifying the recent deepfake robocall impersonating President Joe Biden and also discovered audio deepfakes of Anthony Bourdain in the 2021 documentary "Roadrunner."

The tech looks for audio abnormalities to distinguish whether a call is live or recorded, said Pindrop co-founder and CEO Vijay Balasubramaniyan. For example, Pindrop looks for spatial characteristics found in human speech, such as fricatives, which aren't easily replicated by recordings. It also looks for temporal anomalies, such as the shape of the sound a mouth makes when saying "hello" or the speed of a word like "ball."
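Pindrop hasn't published its detection code, but the kind of spectral cue described above is easy to sketch. The hypothetical Python snippet below (assuming NumPy and SciPy are installed) measures how much of a clip's energy sits in the high-frequency band where fricatives like "s" and "f" live, a region that synthetic or heavily processed audio often underrepresents; the signals, cutoff and comparison here are illustrative, not Pindrop's.

```python
# Illustrative only: a crude high-frequency-energy check, not Pindrop's proprietary detector.
import numpy as np
from scipy.signal import welch

def high_band_ratio(samples: np.ndarray, sample_rate: int, cutoff_hz: float = 4000.0) -> float:
    """Fraction of spectral power above `cutoff_hz`, a rough proxy for fricative energy."""
    freqs, power = welch(samples, fs=sample_rate, nperseg=1024)
    total = power.sum()
    return float(power[freqs >= cutoff_hz].sum() / total) if total > 0 else 0.0

# Synthetic demo signals standing in for real speech clips.
sr = 16000
t = np.arange(sr) / sr
voiced_like = np.sin(2 * np.pi * 200 * t)  # low-frequency tone only
hissy = voiced_like + 0.5 * np.random.default_rng(0).standard_normal(sr)  # adds broadband, fricative-like energy

print(f"tone-only clip:  {high_band_ratio(voiced_like, sr):.3f}")
print(f"hiss-added clip: {high_band_ratio(hissy, sr):.3f}")
# A production system would learn decision thresholds from millions of labeled
# live and synthetic calls rather than rely on a single hand-picked cutoff.
```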

Although Pindrop claims a high accuracy rate, Balasubramaniyan acknowledges it's a "cat and mouse game," adding that the key is how quickly companies can react whenever malicious actors move on to newer and better tools. He also noted that the transparency and explainability of AI tools matter just as much.

“A deepfake system sounds like a big monolith, but it’s composed of a lot of individual engines,” Balasubramaniyan told Digiday. “And each of these engines leaves behind telltale features … You need to be able to dissect a deepfake engine into really granular parts and see what signatures each of those granular parts are leaving behind. So even if one portion of it gets changed, the others you still have.”
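To make that idea concrete, here is a hypothetical sketch of what scoring a clip against several independent "engine signature" detectors could look like; the detector names and scores are placeholders, not Pindrop's actual components.

```python
# Placeholder ensemble illustrating the "granular parts" idea, not a real detector.
from typing import Callable, Dict

# Stand-in detectors; real ones would inspect vocoder artifacts, prosody,
# spectral bands and other traces left by individual synthesis components.
def vocoder_artifact_score(clip_path: str) -> float: return 0.82
def prosody_anomaly_score(clip_path: str) -> float: return 0.31
def spectral_band_score(clip_path: str) -> float: return 0.74

DETECTORS: Dict[str, Callable[[str], float]] = {
    "vocoder": vocoder_artifact_score,
    "prosody": prosody_anomaly_score,
    "spectral": spectral_band_score,
}

def flag_if_synthetic(clip_path: str, threshold: float = 0.6) -> bool:
    """Flag the clip if any single engine signature scores above the threshold,
    so swapping out one synthesis component doesn't erase the other traces."""
    scores = {name: detect(clip_path) for name, detect in DETECTORS.items()}
    print(scores)
    return any(score >= threshold for score in scores.values())

print(flag_if_synthetic("call_001.wav"))  # True: vocoder and spectral scores exceed 0.6
```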

With proposed IPO, Reddit bets on LLMs

When Reddit filed paperwork last week for its proposed IPO, the company also revealed part of its plans for using large language models — and profiting from them.

In corporate filings with U.S. regulators, Reddit said its massive trove of user content will help train large language models. Reddit had more than “one billion posts and over 16 billion comments” at the end of 2023, according to the filing. That data could be used to train internal models, but will also be used as part of data licensing deals with other companies.

“Reddit data is a foundational piece to the construction of current AI technology and many LLMs,” according to Reddit’s S-1. “We believe that Reddit’s massive corpus of conversational data and knowledge will continue to play a role in training and improving LLMs. As our content refreshes and grows daily, we expect models will want to reflect these new ideas and update their training using Reddit data.”
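Reddit's filing doesn't describe how licensees would consume that corpus, but as a purely hypothetical illustration, post-and-comment threads could be flattened into prompt/response records for model training along these lines (stand-in data and field names, not Reddit's or Google's actual pipeline):

```python
# Hypothetical illustration of packaging forum threads for LLM training data.
import json

threads = [  # stand-in data; a real pipeline would read from a licensed data feed
    {
        "title": "How do I season a cast iron pan?",
        "selftext": "Just bought my first one and don't want to ruin it.",
        "comments": ["Thin coat of oil, bake upside down at 450F for an hour."],
    },
]

with open("training_sample.jsonl", "w", encoding="utf-8") as f:
    for thread in threads:
        prompt = f"{thread['title']}\n\n{thread['selftext']}".strip()
        for comment in thread["comments"]:
            # One prompt/response pair per top-level comment.
            f.write(json.dumps({"prompt": prompt, "response": comment}) + "\n")
```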

Last week, Reddit also announced a new deal with Google, which will use Reddit content to train its AI models. Although Reddit didn't share the terms of the deal, Reuters reported Google will pay $60 million for access to the content. Reddit's S-1 also noted that the company expects revenue from data licensing in 2024 to be a "minimum of $66.4 million."

Chatbots like ChatGPT, Gemini and Anthropic's Claude might also compete with Reddit's main platform, the company said: "Redditors may choose to find information using LLMs, which in some cases may have been trained using Reddit data, instead of visiting Reddit directly."

Reddit’s IPO filing also provides plenty of information about the company’s advertising business, which accounts for a majority of its revenue. In 2023, total revenue was $804 million, a 21% increase over 2022’s $667 million.

Prompts and Products: Other AI news and announcements

  • Google said it is bringing its Gemini models to Performance Max. In addition to upgrading image generation, Gemini will also let advertisers generate long headlines and site links. Another upcoming feature will let advertisers generate lifestyle imagery via Performance Max, along with variations for scaled campaigns. Google also faced criticism last week when Gemini’s image generator created images showing people of color wearing Nazi-era uniforms. (Google paused the tool and, in a blog post about the issue, promised to “do better.”)
  • NewsGuard researchers said they’ve found more than 700 AI-generated websites masquerading as “news.” According to NewsGuard, the websites are publishing misinformation and other harmful content in 15 languages, and many also profit from programmatic ads.
  • The privacy-focused browser Brave added more ways for its AI assistant “Leo” to help users read PDFs, analyze Google Drive files and transcribe YouTube videos. The updates came just a day after Adobe added a new AI assistant to Acrobat and Reader to help generate summaries, analyze documents and find answers.
  • Pfizer spoke with Digiday about how it developed a new generative AI marketing platform called “Charlie,” which was developed in partnership with Publicis Groupe and named after the pharma giant’s founder.
  • A group of adtech pioneers has launched a new AI startup for publishers, according to VentureBeat.

Input/output: Question of the week

Output: As tech companies build out their own large language models and related AI tools, open-source models are playing a growing role alongside closed platforms. Following Meta’s 2023 debut of its open-source Llama 2, Google last week released its own open models, named Gemma.
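For readers experimenting along those lines, running one of these open models can be as simple as the sketch below, assuming the Hugging Face transformers library (with a backend like PyTorch) is installed and you have accepted the model's license terms on the Hub; the model ID and prompt are illustrative choices, not a recommendation.

```python
# Minimal sketch of loading an openly released model via Hugging Face transformers.
from transformers import pipeline

# "google/gemma-2b" is one example of an open-weights checkpoint; swap in any
# open model you have access to. Weights are downloaded on first run.
generator = pipeline("text-generation", model="google/gemma-2b")
result = generator(
    "Write a one-sentence product description for a reusable water bottle:",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```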

Input: If you’re using open-source models in marketing, media or commerce, email marty@digiday.com to let us know.

