Brian Fitzgerald is president and co-founder of Evolve Media
Viewability, on the face of it, is a boon to publishers. In theory it creates more value for brands, which will pay a premium for a viewability threshold or pay only for viewable impressions, channeling more dollars into digital. I welcome viewability as a currency because for too long publishers have staved off the impact of declining CPMs by simply adding more and more ads to each page. That isn't good for the user or the marketer. This market-led initiative on viewability will render much of that activity moot, as publishers won't get paid for creative that more often than not goes unseen. This is a good thing and will effectively weed out bad actors.
If that were the extent of the impact, then I, and many other large publishers like me, would be perfectly happy. However, viewability in its current form needs to mature before it can be fully embraced as a buying requirement. Regardless of vendor, accredited or not, viewability is not ready for prime time. Publishers with clean, above-the-fold inventory are being penalized and going unpaid for legitimate ads running in view.
Despite its best intentions, the Media Rating Council (MRC) did not solve viewability when it lifted its advisory in March. What the MRC has done is merely pave the road for technology vendors to prey on confusion and fear. Measuring viewability is not an easy problem to solve. Even the MRC has acknowledged that it accepts any of six common measurement methods (two of which seem to be more prevalent among vendors), each with a different approach to technical measurement and validation. The MRC has accredited a handful of vendors, not all of which track or measure with the same technical methodology.
As part of its accreditation process, the MRC recently reported a 50 percent difference in counts among accredited vendors. That concerned us.
Determined to understand our viewability score and how our inventory would fare in an audit, we embarked on a test to determine what was measurable and in view across our sites. We worked with a brand advertiser client on a large campaign and ran a test with three accredited viewability vendors. We took the same client campaign, with the same ad creative, running across the same sites, in the same ad placements, and tagged the placements with all three accredited vendors.
These sites had only above-the-fold ad placements (the 728 x 90 ads were even below the site's navigation), so the initial thinking was that our viewability scores should be fantastic. They were not. Our viewability scores ranged from 38 percent to 88 percent, depending on the vendor. The average was 68 percent. It should have been closer to 95 percent, allowing for the fact that some page calls wouldn't render ads fast enough, or the user would bounce before an impression could qualify as viewed. We get it. But 68 percent? How could accredited vendors, using the same or similar methodologies, produce such wildly disparate viewability scores? That is very concerning.
Market averages fluctuate, and publishers are working hard to address ad-tag, design, style-sheet and other framing issues in an effort to improve their measurability and, in turn, their viewability. That said, several sources put the current market average at 44 percent, and even premium publishers, like those in the OPA, average 55 to 68 percent. So while our average viewability score is good, it is nevertheless concerning that I may be asked to accept payment on only 68 percent of my inventory without a universally standardized metric backing that claim. What's even more concerning is that, depending on the vendor the client selects, I may arbitrarily be asked to accept payment on only 38 percent of my inventory. That's untenable.
As publishers, we have no ability to challenge the appointed vendor, to question the diligence that went into independently vetting it or the methodology it uses to measure viewability, or to contest the findings. The vendors have no vested stake in ensuring those findings are accurate. They market their services to the client or agency, and the client or agency then hands down an edict: the publisher pays for the service and is penalized by its findings.
Over time, the technology will improve, and the industry as a whole will become more experienced at transacting on a viewable currency. Until then, publishers must test the various vendor technologies on their sites and must have a voice, and a choice, in which vendor is used. We have an informed point of view that can help drive even greater success for our clients and agencies while managing the impact on publisher revenue. In time, such discussions will result in vendors winning and losing based on the performance of their methods and products across the marketplace.