YouTube is under fire again, this time over child protection

“Persistent identifiers” mean targeted ads are being served, without parental consent, on YouTube channels specifically crafted for younger audiences, where such targeting is prohibited, according to a new study.

The findings mean YouTube will face yet more scrutiny over its policies: researchers observed it serving targeted ads to minors in apparent contravention of its own rules, leading some to question how rigorously it meets the requirements of the FTC’s Children’s Online Privacy Protection Rule (COPPA).

COPPA requires service providers to obtain “verifiable parental consent” before using personal information from children and to make “reasonable efforts” to notify guardians of such practices.

However, Adalytics Research alleges the installation process of the YouTube app creates a persistent identifier named “X-Goog-Visitor-ID” in a manner that makes it difficult for users to manage their consent choices. This “installation ID” is subsequently transmitted to YouTube servers, and as a result, advertisers were serving targeted ads on ‘made for kids’ channels, according to Adalytics’ observations.

“YouTube’s CEO said in 2019 that the platform would stop serving personalized ads on ‘made for kids‘ content,” reads the report from the same research team behind a June study examining levels of transparency on the Google Video Partners network that sparked chagrin among media buyers.


Adalytics’ most recent report continues, “However, demographically and behaviorally personalized ad campaigns appear to have ads being served on ‘made for kids’ YouTube channels as of July 2023.”

The research outfit’s charges are based on YouTube and Google Ads campaign reports shared with Adalytics by multiple media buyers representing “Fortune 500 advertisers and major media agencies,” documenting ad placements on channels labeled as “made for kids.”

Brands observed in such instances include Mars, Procter & Gamble, Netflix, Apple, Ford, Colgate-Palmolive, Samsung, and many others.

Meanwhile, researchers observed that “dozens of major ad tech and data broker companies were observed receiving data from viewers of ‘made for kids’ directed YouTube videos who clicked on an ad – these include several companies who paid penalties for COPPA related enforcements, such as Amazon, Facebook, Microsoft and OpenX.”

Furthermore, “Google’s Performance Max (‘pmax’) ad targeting algorithm appears to be placing adult brands’ ads on ‘made for kids’ YouTube channels; advertisers report they cannot audit this issue because pmax does not provide them with granular placement reports.”

For its part, Google challenges Adalytics’ findings – just as it did with the research outfit’s previously published report – with a Google spokesperson characterizing the research as “misleading” when discussing it with The New York Times, further outlining the online advertising giant’s defense.

The report will further intensify scrutiny of Google, which still faces questions from trade bodies representing advertisers over Adalytics’ earlier report, which found that ads intended for placement on YouTube were instead placed via the Google Video Partners network in a manner that caused discomfort for many media buyers.

All of this comes at a time when Google is preparing to face the sternest test in its history: the Justice Department has leveled antitrust charges against it, with a trial expected later this year and a potential break-up in the offing.


After the initial publication of this story, a Google spokesperson contacted Digiday with the below statement denying Adalytics’ assertions:

“This is the second time in recent weeks that Adalytics has published a deeply flawed and misleading report. Personalized advertising has never been allowed on YouTube Kids, and in January 2020 we expanded this to anyone watching ‘made for kids’ content on YouTube, regardless of their age. The report makes completely false claims and draws uninformed conclusions based solely on the presence of cookies, which are widely used in these contexts for the purposes of fraud detection and frequency capping — both of which are permitted under COPPA. The portions of this report that were shared with us didn’t identify a single example of these policies being violated.”
