Why the FTC is forcing tech firms to kill their algorithms along with ill-gotten data

The Federal Trade Commission is punching right at the heart — and guts — of how data collection drives revenue for tech firms: their algorithms. 

“I anticipate pushing for remedies that really get at the heart of the problem and the incentives that companies face that lead them into the illegal conduct,” FTC commissioner Rebecca Slaughter told Digiday in an interview last week. 

Slaughter pointed to two cases that reflect what we might see more of from the agency. When the FTC settled its case in May against Everalbum, maker of a now-defunct mobile photo app called Ever that allegedly used facial recognition without people’s consent, the agreement featured a new type of requirement, one that addresses the realities of how today’s technologies are built, how they work and how they make money. Along with requiring the firm to obtain express consent before applying facial recognition to people’s photos and videos, and to delete the photos and videos of people who had deactivated their accounts, the FTC imposed another “novel remedy”: Everalbum would have to delete the models and algorithms it developed using the photos and videos uploaded by the app’s users.

Put simply, machine-learning algorithms are developed and refined by feeding them large amounts of data from which they learn and improve. The algorithms become a product of that data; their behavior is a legacy of the information they consumed. To make a clean sweep of data a company collected illicitly, then, the company must also wipe out the algorithms that ingested it.
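To see why the remedy targets models as well as data, consider a minimal sketch in Python (everything here is hypothetical, not drawn from any case: the records, features and scikit-learn model are stand-ins for illustration). Once a model is fit, its learned parameters persist even after the raw training records are deleted:

```python
# Minimal sketch: deleting training data alone does not undo what a
# model learned from it. Data and model here are purely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for illicitly collected records (e.g., unconsented photo features).
X = np.random.rand(1000, 10)        # 1,000 records, 10 features each
y = np.random.randint(0, 2, 1000)   # labels derived from those records

model = LogisticRegression().fit(X, y)  # the model "ingests" the data

del X, y  # the raw records are gone...

# ...but the fitted coefficients, distilled from that data, remain,
# and the model keeps making predictions informed by it.
print(model.coef_)
print(model.predict(np.random.rand(1, 10)))
```

Deleting the data files leaves the trained model intact, which is why an order limited to data deletion would let a company keep profiting from what it learned; algorithmic disgorgement closes that gap.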

Cambridge Analytica case laid groundwork for algorithmic destruction

The Everalbum case wasn’t the first time the FTC had demanded a company delete its algorithms. In its final 2019 order against Cambridge Analytica, which alleged that the now-infamous political data firm had misrepresented how it would use information gathered through a Facebook app, the agency required the company to delete or destroy not only the data itself but also “any information or work product, including any algorithms or equations, that originated, in whole or in part, from this Covered Information.”

Requiring Cambridge Analytica to delete its algorithms “was an important part of the outcome for me in that case, and I think it will continue to be important as we look at why are companies collecting data that they shouldn’t be collecting, how can we address those incentives, not just the surface-level practice that’s problematic,” Slaughter told Digiday.

The approach is a sign of what could be in store for companies in the crosshairs of a potentially more aggressive FTC. Slaughter said the requirement that Cambridge Analytica kill its algorithms “lays the groundwork for similarly employing creative solutions or appropriate solutions rather than cookie-cutter solutions to questions in novel digital markets.”

Correcting the Facebook and Google course

It’s not just Slaughter who sees algorithm destruction as an important penalty for alleged data abuse. In a statement published in January on the Everalbum case, FTC commissioner Rohit Chopra called the demand that Everalbum delete its facial recognition algorithm and other technology “an important course correction.” While the agency’s earlier settlements with Facebook and Google-owned YouTube did not require those firms to destroy algorithms built from illegally attained data, the remedy in the Everalbum case forced the firm to “forfeit the fruits of its deception,” wrote Chopra, whose legal advisor was formerly Lina Khan, now the FTC’s reform-minded chair.

Slaughter’s stance on forcing companies to kill their algorithms, which she also laid out in public remarks in February, has caught the attention of lawyers working for tech clients. “Slaughter’s remarks may portend an active FTC that takes an aggressive stance related to technologies using AI and machine learning,” wrote Kate Berry, a member of law firm Davis Wright Tremaine’s technology, communications, privacy, and security group. “We expect the FTC will consider issuing civil investigative demands on these issues in the coming months and years.”

Lawyers from Orrick, Herrington & Sutcliffe backed up Berry’s analysis. In the firm’s own assessment of Slaughter’s remarks, they said companies developing artificial intelligence or machine-learning technologies should consider giving people proper notice of how their data is processed. “Algorithmic disgorgement is here to stay in the FTC’s arsenal of enforcement mechanisms,” the lawyers said.
