Vox Media eyes sandboxing to tamp down on malicious ads

Vox Media is investigating how a new weapon might eliminate malicious ads from its sites by 2019. Earlier this month, the publisher of sites including SB Nation, Vox and The Verge began testing sandboxing, a technique that loads its sites’ advertisements inside isolated frames, called iframes, to ensure the ads can’t do things like redirect people’s browsers to other websites.
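
In practice, this kind of sandboxing leans on the iframe’s HTML sandbox attribute, which withholds a frame’s privileges unless the publisher explicitly re-grants them. The sketch below is a minimal illustration in TypeScript, using a hypothetical slot ID and creative URL, of how a page might render an ad that cannot redirect the top-level window; it is not Vox Media’s actual code.

```typescript
// Minimal sketch of iframe-based ad sandboxing. The slot ID and creative URL
// are hypothetical; this is not a description of Vox Media's implementation.
function renderSandboxedAd(slotId: string, creativeUrl: string): void {
  const slot = document.getElementById(slotId);
  if (!slot) {
    return;
  }

  const frame = document.createElement("iframe");
  // The sandbox attribute withholds all privileges by default; only the
  // capabilities listed here are re-granted. "allow-top-navigation" is
  // deliberately omitted, so script inside the frame cannot redirect the
  // parent page the way the offending ads did.
  frame.setAttribute("sandbox", "allow-scripts allow-popups");
  frame.src = creativeUrl;
  frame.width = "300";
  frame.height = "250";
  frame.style.border = "0";

  slot.appendChild(frame);
}

// Example usage with hypothetical values:
// renderSandboxedAd("ad-slot-top", "https://ads.example.net/creative.html");
```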

The test was a response to a deluge of malicious advertisements that hit Vox Media’s sites and automatically redirected visitors to a page asking them to hand over personal information.

Publishers have been playing a game of whack-a-mole with fraudulent, malicious and scam advertisements for years.

“We’d had enough of it,” said Dave Pond, Vox Media’s general manager of display and programmatic. “In the past, publishers have relied on their tech partners, who have helped them monetize and understand that part of their business. We’ve reached a stage where they’re not necessarily doing enough.”

By and large, sandboxing is an effective method of damage control, according to ad fraud researcher Dr. Augustine Fou. But it comes with opportunity costs. For example, advertisers dislike the technique because, deployed a certain way, it limits their ability to verify that their ads ran where they were supposed to. Sandboxing can also prevent users from clicking through on ads, rendering a huge swath of display inventory useless.

An IAB-sanctioned version of sandboxing called SafeFrame, which Vox Media is also testing, solves some of those problems. Vox Media is also discussing alternatives to sandboxing with some of its biggest advertisers. If an advertiser agreed to buy Vox Media inventory through a private marketplace, for example, the publisher might remove the sandboxing for those deals, giving the advertiser more flexibility.
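
As a point of reference, publishers commonly turn SafeFrame on through their ad server’s tag library. The snippet below is a hedged sketch assuming Google Publisher Tag, which the piece does not say Vox Media uses; it shows the kind of configuration involved rather than the publisher’s actual setup.

```typescript
// Illustrative only: this assumes Google Publisher Tag (GPT), one common way
// publishers enable the IAB's SafeFrame spec. It is not Vox Media's setup.
declare const googletag: any; // global supplied by the GPT library on the page

googletag.cmd.push(() => {
  // Serve every creative inside a SafeFrame container...
  googletag.pubads().setForceSafeFrame(true);
  // ...and have SafeFrame apply the HTML5 sandbox attribute, which blocks
  // top-level navigation (forced redirects) without user interaction.
  googletag.pubads().setSafeFrameConfig({ sandbox: true });
  googletag.enableServices();
});
```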

In testing this technique, Vox is effectively taking money out of its own pocket, a short-term cost it says it’s willing to endure.

“It’s a better long-term play,” Pond said. “We’re a premium site, and our users are the most important part of that business.”

Sandboxing doesn’t guarantee protection. The worst actors in the ad tech ecosystem will sometimes change how offending advertisements look and behave during periods of the day when publisher ad ops teams are likely to be away from their desks. An ad that looks and functions safely from 9 a.m. to 5 p.m. may behave very differently outside of normal business hours; mobile redirect ads are often served on evenings and weekends.

The sandboxing test supplements other efforts by Vox Media to combat this problem. It has a dedicated Slack channel, Twitter account and email address where site visitors can send examples of scammy or malicious advertising; a team of ad operations specialists spends chunks of its day hunting for these ads and addressing them.

Vox is taking this fight to the programmatic ecosystem’s other players. Last year, it created scorecards for each of the advertising exchanges it participates in, setting a benchmark for the volume of scammy ads it considers unacceptable. Exchanges that can’t keep the volume of bad ads they distribute under that threshold may find themselves losing Vox as a partner.

“We believe in the pipes,” Pond said. “But we sat on the sidelines a little too long.”
