As agencies invest more in buying ads via exchanges or real-time bidding, demand is growing for data that helps buyers judge impressions before purchase. To satisfy this demand, we’ve seen a flurry of announcements about “pre-bid” data — data that gets passed to agency buying platforms before a bid is placed, giving the buyer some indication of the impression’s visibility or the site’s likelihood to draw engagement.
Arming buyers with pre-bid data is an increasingly important concept for the publishing community, especially the premium publisher set. Yet many of the technologies evaluating impressions use publishers’ least desirable ad placements as a proxy for all of the inventory available on their sites. The new market intelligence that agencies use for making decisions within the RTB/exchange market is potentially damaging to premium publishers, and I’m not sure publishers are paying enough attention.
The momentum behind pre-bid screening is largely a response to subpar website design. Many publishers have been guilty of bumping revenue by stacking ads at the bottom of their pages. The buy side realized that consumers never see a large percentage of that inventory, yet buyers were still paying the same rates for those impressions.
The growth in exchanges means there is now enough data available to scale and pass a site’s viewability and engagement scores on to the buyer before a bid is made. A number of technology companies developed tools to score these criteria, creating an index of viewability and other engagement metrics.
But most of the impressions in the exchanges come from publishers’ remnant inventory, which often includes those ads at the bottom of the pages. This inventory is much less desirable: typically not a premium position, placement or audience that lends itself to higher engagement. While it was a quick revenue hit for publishers in the past, pre-bid data derived from these less desirable placements creates a false negative about a domain’s ability to deliver desirable and visible ads. The new technologies are scoring sites based only on their worst products, and that’s only going to hurt them.
Publishers need to think of this as if they’re selling cars instead of media. No salesman is going to put used cars in the showroom, because the point is to showcase your best stuff. Yet premium publishers are showing used cars first, and that’s the product defining the new market intelligence.
There are arguments to be made on all sides of this new development. Will growth in pre-bid data force publishers’ hands toward better site design? Will publishers build relationships with verification companies to map their URLs for each and every impression? Will buyers use these new pre-bid indexes as a proxy for premium and only buy sites that have a high engagement history? What if that engagement history is based on a limited set of publisher ad products?
The list of questions goes on, but either way, this new development is an important part of the business of buying and selling, with ramifications that will be felt far into the future. Right now, it’s unclear if publishers are giving this the necessary attention or have the necessary tools and resources to manage it correctly.
Mario Diez is CEO of QuadrantOne. Follow him on Twitter @mariojdiez.