
Ad quality isn’t a checkbox — it’s a context

Asaf Shamly, co-founder and CEO, Browsi

If you ask someone in ad tech what makes an ad “quality,” chances are they’ll fire off a familiar answer:

“Viewable.”

“Human.”

And just like that — it’s case closed.

But consider this claim: That familiar answer barely scratches the surface.

Yes, for years these two metrics, human and viewable, have shaped how advertisers measure, price and evaluate ad placements, and they have been treated as matter-of-fact rather than questioned.

But here’s the thing: The very fact that an ad meets the standard doesn’t mean it made an impact. It doesn’t even guarantee it was actually seen.

The ad ecosystem has matured. So have advertisers’ capabilities. But the metrics the industry relies on? They’re still stuck in a world where “technically visible” is good enough. It’s time for the industry to revisit what quality really looks like and which signals actually indicate whether an impression had impact.

Advertisers shouldn’t be paying for technical nuances. They should be paying for outcomes.

The trap of bare minimums

Let’s start with the one-second standard.

Fifty percent of an ad’s pixels, in view, for one full second — that’s the industry benchmark.

But advertisers know what really happens. If someone scrolls past a banner without slowing down, did they see it? When a driver speeds down the highway and passes a billboard — did they really see it?

So yes, technically it was in view. But did the advertisement land? Not necessarily. And somehow, this is what most advertisers keep optimizing for. Is it because it drives results? No. It’s simply the metric that gets reported.
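
To make concrete just how little that benchmark demands, here is a minimal browser-side sketch in TypeScript of the kind of check a measurement script might run to flag an impression as viewable under the 50%-of-pixels-for-one-second standard. The “ad-slot” element id and the onViewable callback are hypothetical placeholders, not any vendor’s actual implementation.

```typescript
// Minimal sketch of the 50%-of-pixels-for-one-continuous-second viewability check.
// The "ad-slot" element id and the onViewable callback are hypothetical placeholders.
function watchViewability(el: Element, onViewable: () => void): void {
  let timer: number | undefined;

  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (entry.intersectionRatio >= 0.5 && timer === undefined) {
          // At least half the ad's pixels are on screen: start the one-second clock.
          timer = window.setTimeout(() => {
            onViewable();          // From here on, the impression counts as "viewable".
            observer.disconnect(); // The standard asks for nothing more.
          }, 1000);
        } else if (entry.intersectionRatio < 0.5 && timer !== undefined) {
          // Dropped below 50% before a full second elapsed: reset the clock.
          window.clearTimeout(timer);
          timer = undefined;
        }
      }
    },
    { threshold: [0.5] } // Fire whenever the 50% boundary is crossed in either direction.
  );

  observer.observe(el);
}

const adSlot = document.getElementById("ad-slot");
if (adSlot) {
  watchViewability(adSlot, () => console.log("Technically viewable. That is all we know."));
}
```

Notice that the observer disconnects the instant the one-second clock fires. Nothing about what happens afterward is captured, which is exactly the gap the signals below try to close.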

And it doesn’t have to stay this way. In fact, there are signals the ecosystem consistently underuses: data points that give advertisers a much better read on quality, and that can actually reveal whether an ad had the conditions to deliver impact.

5 signals that tell a fuller (better) story than ‘viewable’ and ‘human’

Let’s break them down:

1. Time in view: Viewability only says that the timer started; time in view tells advertisers how long it actually ran. Ads that linger for three, seven or 20 seconds have a much better chance of making an impression than those that disappear in one. And the kicker? This metric is knowable. Platforms can track it. Publishers can report it. Advertisers just haven’t prioritized it. (A sketch of how signals like this can be tracked follows this list.)

2. Refresh rate: Ad slots that auto-refresh every few seconds may look good on a spreadsheet, but they often dilute attention and inflate impression counts without delivering real exposure. The more refreshes, the more noise. And advertisers may not even realize they’re buying recycled inventory.

3. Engaged users: Was the user doing anything while the ad was showing — scrolling, clicking, even moving their mouse? Passive exposure is one thing. Presence during exposure is another. It’s not necessarily intent, but it’s a stronger signal than a static scroll-past.

4. Ad density: How many ads compete with each other on the page? A higher ratio of ads to content can hurt every single unit’s performance. More ads don’t equal more impact. In fact, it’s usually the opposite.

5. Ad clutter (per fold): This one’s simple: five ads jammed into the same screen space means none of them stand out. Even well-designed creatives lose meaning in a cluttered fold.
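
As a rough illustration that signals 1 and 3 really are trackable with standard browser APIs, the sketch below (TypeScript, with a hypothetical “ad-slot” element and report function) accumulates total time in view for an ad slot and records whether the user scrolled or moved the mouse while it was on screen. It is a simplified sketch under those assumptions, not a description of any particular platform’s measurement.

```typescript
// Sketch: accumulate total time in view for an ad slot and record whether the
// user was active (scrolling or moving the mouse) while the ad was on screen.
// The "ad-slot" element id and the report() callback are hypothetical placeholders.
interface ExposureReport {
  timeInViewMs: number;          // Signal 1: how long the ad actually stayed in view.
  userActiveDuringView: boolean; // Signal 3: was the user doing anything meanwhile?
}

function trackExposure(el: Element, report: (r: ExposureReport) => void): void {
  let inViewSince: number | null = null;
  let timeInViewMs = 0;
  let userActiveDuringView = false;

  // Any scroll or mouse movement while the ad is in view counts as user presence.
  const markActivity = () => {
    if (inViewSince !== null) userActiveDuringView = true;
  };
  window.addEventListener("scroll", markActivity, { passive: true });
  window.addEventListener("mousemove", markActivity, { passive: true });

  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        const inView = entry.isIntersecting && entry.intersectionRatio >= 0.5;
        if (inView && inViewSince === null) {
          inViewSince = performance.now(); // Ad came into view: start counting.
        } else if (!inView && inViewSince !== null) {
          timeInViewMs += performance.now() - inViewSince; // Ad left view: bank the time.
          inViewSince = null;
        }
      }
    },
    { threshold: [0.5] }
  );
  observer.observe(el);

  // Flush the accumulated totals when the page is hidden or closed.
  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "hidden") {
      if (inViewSince !== null) {
        timeInViewMs += performance.now() - inViewSince;
        inViewSince = null;
      }
      report({ timeInViewMs, userActiveDuringView });
    }
  });
}

const slot = document.getElementById("ad-slot");
if (slot) {
  trackExposure(slot, (r) => console.log("exposure", r));
}
```

Signals 2, 4 and 5 (refresh rate, density and clutter) are even easier to report, since the publisher’s page already knows how many ad slots it renders and how often it refreshes them.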

Together, these five signals paint a more honest picture of ad quality — not as a checkbox, but as a context.

Attention is a finite resource

Let’s be blunt: Beyond an ad quality problem, advertisers have an attention problem.

Every year, ad environments get more crowded. Pages fill with banners, popups, interstitials, autoplay videos — and the average user’s tolerance shrinks in parallel.

The brands that win aren’t the ones shouting the loudest. They’re the ones earning attention where it’s actually available. And that means measuring attention.

Contextual quality matters more than ever. Because even the best creative, seen for a full 15 seconds by an engaged user, can flop if it’s crammed into a noisy, chaotic environment. And a perfectly targeted placement loses value if it refreshes too soon, gets buried by clutter or reaches someone who has already left the page.

This is the performance gap. It’s where the data says advertisers are doing well — but where the outcomes tell a different story.

Redefining quality for the era of accountability

If advertisers are serious about bridging the gap between what’s measured and what actually drives results, they have to move past defining quality as just viewable and human.

They need a definition that accounts for:

  • Time: Was the ad given enough seconds to matter?
  • Environment: Did the placement respect the user experience?
  • Engagement: Was the user present, or had they already checked out?
  • Inventory integrity: Were refreshes inflating exposure?
  • Attention signals: Was anything actually seen?

And advertisers need media platforms and verification tools to start surfacing these metrics — not as advanced analytics, but as table stakes.
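
As a thought experiment, the per-impression record such tools might surface could look something like the interface below. Every field name is hypothetical and purely illustrative; the point is that each dimension above maps to a concrete, reportable value.

```typescript
// Hypothetical shape of a per-impression quality record that a platform or
// verification tool could surface next to the usual viewable/human flags.
// Every field name here is illustrative, not a real vendor schema.
interface ImpressionQualityRecord {
  viewable: boolean;             // The existing baseline: 50% of pixels for one second.
  humanTraffic: boolean;         // The existing baseline: passed invalid-traffic checks.

  timeInViewMs: number;          // Time: how long the ad was actually on screen, in ms.
  refreshCount: number;          // Inventory integrity: how often the slot refreshed.
  userActiveDuringView: boolean; // Engagement: scrolling, clicking or mouse movement.
  adsInSameFold: number;         // Ad clutter: competing ads sharing the screen space.
  adsToContentRatio: number;     // Ad density: share of the page taken up by ads.
}
```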

Clarity equals performance

Advertisers can’t afford to optimize blindly. Quality impressions require more than minimum thresholds. They require a real understanding of the setting, the user’s state and the chances of actual impact.

The good news? The data is there. It’s trackable. It’s reportable. And with the right signals? It’s actionable.

Clarity can (and should) be treated as a competitive advantage. The details are what separate quality from technicality. And if advertisers start there, they can stop lying to themselves.

Partner insights from Browsi
