The blind spot: Why testing creative often fails

We’re all familiar with the original promise of digital: to deliver the right message to the right audience in the right place at the right moment. A tsunami of data and analytics has brought us closer to that dream – but there’s still a lot of work to do. To date, our focus has been on applying data-driven thinking to the media side of the equation: the sites, strategies, and placements. We now need to look beyond the medium and focus on the message itself – the creative strategy, execution, concept and version.

The medium and the message should work together to achieve the advertiser’s goal, like a saw blade that cuts on both the push and the pull. A saw that only cuts one way isn’t very efficient, and neither is a campaign where only one side of the pairing performs. Solid testing will show you whether the relationship between the two is working.

Time to evolve our metrics
To measure both the medium and the message correctly, you need to rethink your approach to analytics. For most advertisers, this is the blind spot: they are still combing through outdated media metrics to assess their creative.

Let’s start with the dependence on the click-through rate as an indication of performance. Think about it: Do you really expect an intelligent, time-starved customer to stop whatever they’re doing and respond to a banner ad? If not, why is that the key performance metric for creative? The purpose of display media is to create brand awareness and consideration. It is not a direct response vehicle. A display ad that gets zero clicks can still be very effective. You just have to measure it correctly.

Then there’s the reliance on the last touch (post-view or post-click) as a KPI. Giving 100 percent credit to the last impression served makes very little sense in light of the marketer’s real objectives. It assumes that only the last ad matters, penalizing every interaction that preceded it.

Last touch is a bit like awarding the only medal to the fourth swimmer on a winning relay team. If you have just one medal to give, the fairer move is to cut it up and allocate pieces based on each swimmer’s lap time, with the fastest split earning the biggest piece. That is the basis for algorithmic, fractional attribution, and it is just as logical an approach to measuring creative as it is to measuring media.
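To make the relay analogy concrete, here is a minimal Python sketch of fractional credit for a single conversion. The converting path, the scores and the `fractional_credit` helper are all hypothetical; in practice, the per-touchpoint scores would come from a machine-learning attribution model.

```python
from typing import Dict, List

def fractional_credit(touchpoints: List[dict]) -> Dict[str, float]:
    """Split one conversion's credit across the touchpoints that
    preceded it, in proportion to each touchpoint's model score
    (each swimmer's 'lap time')."""
    total = sum(tp["score"] for tp in touchpoints)
    credit: Dict[str, float] = {}
    for tp in touchpoints:
        # Fall back to an even split if the model assigned no score.
        share = tp["score"] / total if total else 1 / len(touchpoints)
        credit[tp["creative"]] = credit.get(tp["creative"], 0.0) + share
    return credit

# One converting path: two dynamic impressions, then a static last touch.
path = [
    {"creative": "dynamic_A", "score": 0.5},
    {"creative": "dynamic_B", "score": 0.3},
    {"creative": "static_C", "score": 0.2},
]
print(fractional_credit(path))
# -> {'dynamic_A': 0.5, 'dynamic_B': 0.3, 'static_C': 0.2}
# Last-touch attribution would have handed static_C the full 1.0.
```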

Checking the math
An agency client of ours was concerned: their last-touch attribution showed that static ads were outperforming the more expensive dynamic ads. To find out why, Encore by Flashtalking analyzed static versus dynamic creative for one of their campaigns.

We found that static ads were more prevalent low in the funnel, whereas dynamic ads were served higher in the funnel. Additionally, almost 40 percent of the last-touch conversions credited to static ads were preceded by dynamic ads, which the machine-learning attribution models scored much higher.

Under fractional attribution, the dynamic ads actually had a 25 percent lower CPA than the static ads. And because conversions for a fixed budget equal spend divided by CPA, a 25 percent lower CPA means dynamic generated 33 percent more conversions for the same budget (1 / 0.75 ≈ 1.33).

When testing your creative, (sample) size matters
Misperceptions like these often arise when you rely on results from a small control group. The rules don’t change when testing creative: you need enough impressions within the control and test groups to achieve representative results. Allocating 5 percent of impressions to the control group (and 95 percent to test) is not likely to yield a meaningful comparison; the control group will not have enough reach and frequency, setting you up to chase outliers.
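How big is big enough? A rough power calculation makes the point. Below is a minimal sketch using the standard two-proportion sample-size formula; the baseline conversion rate, the 10 percent lift and the `min_impressions_per_group` helper are illustrative assumptions, not figures from any campaign.

```python
import math

def min_impressions_per_group(p_base: float, lift: float,
                              z_alpha: float = 1.96,   # 95% two-sided confidence
                              z_power: float = 0.84):  # 80% power
    """Two-proportion sample-size estimate: the impressions each group
    needs to reliably detect a relative `lift` over a baseline
    conversion rate `p_base`."""
    p_test = p_base * (1 + lift)
    p_bar = (p_base + p_test) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p_base * (1 - p_base)
                                       + p_test * (1 - p_test))) ** 2
    return math.ceil(numerator / (p_base - p_test) ** 2)

# A 0.05% conversion rate with a hoped-for 10% lift needs roughly
# 3.3 million impressions in EACH group -- a 5% holdout on a modest
# campaign never gets there.
print(min_impressions_per_group(p_base=0.0005, lift=0.10))
```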

But even with large, representative groups, the test is often rigged from the start, because the performance of the creative is heavily influenced by the quality of the media placement. Bad placement trumps good creative: a great ad served to the wrong audience (or buried at the bottom of a page) is unlikely to perform well, no matter how inspired the creative is. Conversely, mediocre creative in a great placement can perform very, very well.

As advertisers start to personalize creative to match the context of the placement, comparisons become hard to make, and it becomes even harder to determine whether it was the creative or the media that influenced the outcome.

ABT: Always be testing
To measure creative effectively, you need to construct good tests: test and control groups with a consistent, representative mix of sites, strategies and placements. You need to adopt an “always be testing” mindset and be willing to invest a small part of your budget to learn.
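One way to get that consistent mix is a deterministic, user-level split that ignores media delivery entirely, so test and control accumulate the same distribution of sites, strategies and placements in expectation. Here is a minimal sketch under our own assumptions; the salting scheme and the `assign_group` helper are illustrative, not any particular ad server’s method.

```python
import hashlib

def assign_group(user_id: str, experiment: str, test_share: float = 0.8) -> str:
    """Hash the user with an experiment-specific salt so assignment is
    stable across every site and placement; because the split never
    looks at the media, both groups see the same placement mix in
    expectation."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return "test" if bucket < test_share else "control"

print(assign_group("user-123", "q3-creative-test"))
```

Holding out 20 percent rather than 5 percent also gives the control group the reach and frequency the previous section calls for.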

When all is said and done, successful advertising is achieved by marrying media and creative for each audience. By properly measuring and optimizing both sides of the equation, advertisers can eliminate that blind spot in their advertising program and do a better job of delivering on the promise of digital.

