WTF is log-level data?
Gaining better visibility into what’s happening in the digital advertising supply chain goes hand in hand with knowing the right questions to ask. One question publishers are increasingly pressing vendors with is: “Where’s my log-level data?” Ad buyers, too, are increasingly asking exchanges for customized log-level data.
But what exactly is log-level data, why is it necessary, and why is it difficult to get hold of?
Here’s all you need to know.
What is log-level data?
All the data relevant to a single impression: geo data, URLs, cookie IDs, time stamps, viewability levels and, of course, the good stuff: transaction data. It’s the transaction data that more publishers are asking exchanges for. Armed with log-level data, a publisher can see exactly what is occurring in its digital ad supply chain, down to the fee each vendor in the chain takes from the amount the marketer bids on the inventory. Log-level data can also reveal whether an exchange is running multiple bids on the same inventory on behalf of the same client when it shouldn’t be, a tactic that makes its match rates look better and could, in theory, let it duplicate its take rate.
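To make the idea concrete, here is a minimal sketch of what a single log-level record and a take-rate calculation might look like. The field names and figures are illustrative assumptions, not any exchange’s actual schema:

```python
# A hypothetical log-level record for one impression. Every field name here
# is an assumption for illustration; real exchange logs vary by vendor.
impression = {
    "timestamp": "2019-06-12T14:03:27Z",
    "url": "https://example-publisher.com/article/123",
    "geo": "GB",
    "cookie_id": "abc-123",
    "viewability": 0.72,        # fraction of the ad that was in view
    "marketer_bid": 2.40,       # what the marketer bid for the impression ($ CPM)
    "publisher_payout": 1.85,   # what the publisher actually received ($ CPM)
}

def supply_chain_take_rate(record):
    """Share of the marketer's bid absorbed by intermediaries in the chain."""
    bid = record["marketer_bid"]
    payout = record["publisher_payout"]
    return (bid - payout) / bid

rate = supply_chain_take_rate(impression)
```

With the transaction fields present, the gap between what the marketer bid and what the publisher received, here roughly 23 percent, is exactly the kind of number publishers want to verify for themselves.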
Why do publishers want access to this more now?
To highlight any skulduggery in their digital ad supply chains. Log-level data, supplied by their exchanges, lets publishers see what’s going on across the supply chain. Auction dynamics can sometimes be tweaked without the buy or sell side knowing. For instance, when vendors switched from second-price to first-price auctions, publishers and agencies weren’t always informed. “We want to look under the bonnet at the full transaction data across our supply chain,” said Ryan Skeggs, general manager of digital sports publisher GiveMeSport. “We want to know what their take rates are,” he added. The Guardian has also been proactive in requesting log-level data.
Why do ad buyers want access to it?
For buyers, access to full log-level data is equally critical, though for more mixed reasons. On the one hand, the extra specifics let them fine-tune their media planning. “It helps us derive more insights so we can do things like understand not just attributed media but full paths to conversions or experiments on brand uplift and impacts of different partners, tactics and platforms,” said Matt McIntyre, head of programmatic for Europe, Middle East and Africa at Essence. It also helps them plan more accurately around the reach and frequency of ads shown to individuals, he added. But it’s also useful for keeping an eye on opaque or murky auction dynamics. For instance, after Index Exchange’s bid caching embarrassment, agencies requested log-level data to check whether it had affected them. Likewise, agencies like Essence have done the same to monitor other undeclared auction changes, such as sudden shifts to first-price auctions and the use of dynamic floor pricing, according to McIntyre.
How do you get hold of the data?
The most complete log-level data comes from the exchanges. Some vendors make it available for free on request. Others have told publishers it will cost a monthly fee, or have offered only a small slice of the data (as little as 1 percent in some cases), which is useless for publishers. Still others say contractual obligations with other partners prevent them from sharing it. Publishers are skeptical of these excuses, though. “Exchanges are very good at scaremongering publishers about the cost of doing this. But publishers must just be persistent, because it’s not that hard,” said Skeggs.
So once you have the log-level data, then what?
It requires a place to store it and analysts to distill what it means. Publishers and advertisers must be very specific about their objectives before deciding which log-level data to request. We’re talking terabytes upon terabytes of data, so it’s worth customizing the request to specific objectives. Otherwise, it becomes a confusing sea of information that is costly to store and takes 10 times as long to sift through.
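The distilling step above usually means reducing raw log rows to the few fields that answer one specific question, for example, the average take rate per exchange. A minimal sketch, using hypothetical field names and sample figures rather than any vendor’s real schema:

```python
from collections import defaultdict

# Hypothetical log rows; real exchange logs carry many more fields per impression.
rows = [
    {"exchange": "ExchangeA", "marketer_bid": 2.40, "publisher_payout": 1.85},
    {"exchange": "ExchangeA", "marketer_bid": 1.10, "publisher_payout": 0.90},
    {"exchange": "ExchangeB", "marketer_bid": 3.00, "publisher_payout": 2.10},
]

def average_take_rate_by_exchange(log_rows):
    """Boil a sea of log lines down to one number per exchange: the average
    share of each marketer bid kept by the supply chain."""
    totals = defaultdict(lambda: [0.0, 0])  # exchange -> [sum of rates, count]
    for row in log_rows:
        rate = (row["marketer_bid"] - row["publisher_payout"]) / row["marketer_bid"]
        totals[row["exchange"]][0] += rate
        totals[row["exchange"]][1] += 1
    return {exchange: s / n for exchange, (s, n) in totals.items()}

take_rates = average_take_rate_by_exchange(rows)
```

The point of the sketch is the shape of the work, not the numbers: decide the question first, keep only the fields that answer it, and aggregate as you go rather than storing everything.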