10-year-olds think they know everything about everything. In a family discussion, my son spoke up and claimed he could easily do my job because “it’s just about teaching people that you can’t measure online advertising with a ruler.” 

This description got me thinking about the click “ruler” that many advertisers try to force on display campaigns. That almighty click, invented over 20 years ago to drive a profitable business model for search engine marketing, is typically not a good metric for measuring static or video display ads.

Metrics for Success in Search

Let’s start by understanding search. There are two major ways that search success is measured:

SEO: Short for ‘Search Engine Optimization,’ this is the practice of increasing the quantity and quality of traffic to a website by earning higher rankings in organic search engine results pages (SERPs) for increased visibility.

SEM: Short for ‘Search Engine Marketing,’ this marketing tactic uses paid media advertising to gain prominent placement on search engine results pages (SERPs) for increased visibility.

And from these practices, the click model was created:

CPC: Cost Per Click on an advertisement. Price paid for each click generated. Also known as “pay-per-click.” Calculated as the total cost (or budget) / clicks.

CTR: Click-Through Rate of an advertisement. A core metric of the click model, this is the ratio of times an ad is clicked on, compared to the number of times an ad is shown. Calculated as clicks / impressions.
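If you want to see these two formulas in action, here’s a quick sketch in Python; the campaign figures are made up for illustration:

```python
# Minimal sketch of the click model's core metrics.
# The campaign numbers below are hypothetical, for illustration only.

def cpc(total_cost: float, clicks: int) -> float:
    """Cost per click: total cost (or budget) divided by clicks."""
    return total_cost / clicks

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks divided by impressions."""
    return clicks / impressions

total_cost = 5_000.00    # campaign budget in dollars (hypothetical)
impressions = 1_000_000
clicks = 4_700           # works out to the ~0.47% average display CTR cited below

print(f"CPC: ${cpc(total_cost, clicks):.2f}")   # CPC: $1.06
print(f"CTR: {ctr(clicks, impressions):.2%}")   # CTR: 0.47%
```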

Clicks are the basis of search. The search audience is actively seeking links to their next browsing destination and expects to click when they find what they are looking for. Marketers can easily assess the value of the search terms that drive consumers to the page where they will make a purchase, and bid on them accordingly. Because a search ad relates to an intent expressed by the person searching, click-through rate (CTR) serves as a good proxy for conversion, and cost per click (CPC) is a reliable guide for buying search ads.

Why Not Clicks for Display, Since They Work for Search?

To really explore this, we need to understand why clicks work for search. Search resonates with consumers because it is an easy way to find information, answer a question, or reach a website to make a purchase. A consumer actively searching is already seeking out a product or service, and is therefore ready to click through for more information.

It’s tempting to take these same metrics and apply them to display, but why are clicks actually the wrong way to measure display ad effectiveness?

The answer comes back to the customer’s mindset while viewing the ad. In display, especially programmatic display, the goal is simply to be placed where the consumer is. But when a consumer is reading an article on Forbes, that doesn’t mean they will stop and click through on an automated placement, or that those clicks are intentional. In fact, a 2021 eMarketer study estimated that the average CTR on display ads is only 0.47%, so measuring success on CTR alone captures only a sliver of the customer audience. In addition, a 2017 CNBC article found that 50% of those clicks are accidental, so under the click model, advertisers end up valuing users who are less likely to convert and become high-value customers. There are better metrics than clicks for driving incremental sales and revenue on display.

Display Isn’t That Simple

Display advertising isn’t cut and dried. Site visitors navigate to content sites with the intent to stay, not to click away. A 2021 eMarketer study found that 55% of US adults said they were “not at all likely [to] click through on a digital ad relevant to [their] interests.” That doesn’t mean the ad had no impact; the consumer was simply engaged with the content they were viewing and not ready to click away at that moment.

Since the ultimate goal is purchase, and the incentives we set drive the outcomes we receive, we should look carefully at how we define success. The definition of conversion is quite simple; essentially, it is tracking and valuing a specific action taken:

Conversion: An action or event that can be counted, such as a purchase, sign-up, or download. Conversions drive optimal business outcomes for a performance campaign.

But when we break this down into its two types, we see that, by definition, they support very different outcomes:

Click Conversion: “Click” or “click-through” conversions are counted only when a user has clicked on the ad shown for that impression. Click conversions can be tracked only on a “last click” basis, meaning the most recent click before the conversion event.

View Conversion: “View” or “view-through” conversions are counted when a user has viewed an impression, irrespective of whether they clicked. View conversions can be attributed to numerous partners.
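To make the distinction concrete, here is a simplified sketch of how a conversion might be classified. The event structure and the 30-day lookback window are assumptions for illustration; real attribution pipelines are far more involved:

```python
# Hypothetical sketch: classifying a conversion as click-through or
# view-through. Event fields and the lookback window are assumptions.
from dataclasses import dataclass

@dataclass
class AdEvent:
    timestamp: float   # seconds since epoch
    kind: str          # "click" or "view"

def classify_conversion(events: list[AdEvent],
                        conversion_time: float,
                        lookback_secs: float = 30 * 86400) -> str:
    """Return "click" if any click preceded the conversion within the
    window (last click wins), "view" if only impressions did, else "none"."""
    window = [e for e in events
              if conversion_time - lookback_secs <= e.timestamp <= conversion_time]
    if any(e.kind == "click" for e in window):
        return "click"   # click-through conversion
    if any(e.kind == "view" for e in window):
        return "view"    # view-through conversion
    return "none"        # not attributable to this campaign

events = [AdEvent(100.0, "view"), AdEvent(200.0, "view")]
print(classify_conversion(events, conversion_time=500.0))  # -> "view"
```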

Setting the Right Incentives

This is where the simplicity of the ruler comes back in, and we can use our gut to evaluate what is truly driving the outcomes we want. If we stand back and evaluate, there are two key takeaways:

1. The clicker audience isn’t that valuable: When was the last time you clicked on an ad? 

The last time my younger daughter did, she accidentally signed up for a $5-a-month app she had no intention of using. It was a hassle, but we convinced our mobile network provider to remove the fee from our bill. Some direct response (DR) campaigns do drive clicks if the call to action (CTA) is to sign up for something valuable to the consumer, but for the most part, clicks aren’t driving revenue conversions. 

Dunnhumby studies in 2017 and 2019 visualized what we already suspected: there is no correlation between clicks and conversions. When customer spend and CTR were charted, the result was a randomized scatter plot.

Looking at Quantcast campaigns from 2021, we actually see even less correlation between CTR and conversion. In fact, the Pearson correlation coefficient for our return on ad spend (ROAS) campaigns was -0.12, meaning there is very little correlation between these variables, and what little correlation exists is an inverse relationship: as clicks go up, conversion likelihood goes down. When we chart our key campaigns as a scatter plot, there is no clear pattern, similar to what the Dunnhumby studies found in 2017 and 2019. What we do notice is that the dots sit much closer to 0% CTR in 2021 than they did during the earlier studies. This indicates that CTR is declining at a significant rate, which is further evidence that CTR should not be used as a proxy for conversion rates, especially for ROAS campaigns. Our campaigns targeting reach and CTA had Pearson correlations of -0.01 and -0.02, respectively, which additionally indicates that CTR is not a proper metric for predicting campaign success.
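If you want to run this kind of sanity check on your own campaign data, a few lines of Python will do it. The per-campaign numbers below are synthetic stand-ins, not Quantcast data:

```python
# Sketch: checking whether CTR predicts conversion rate across campaigns.
# The per-campaign numbers are synthetic stand-ins, not real data.
from statistics import correlation  # Python 3.10+

ctr_by_campaign = [0.0047, 0.0012, 0.0031, 0.0008, 0.0055, 0.0020]
conv_rate_by_campaign = [0.011, 0.009, 0.014, 0.012, 0.010, 0.013]

r = correlation(ctr_by_campaign, conv_rate_by_campaign)
print(f"Pearson r = {r:.2f}")  # ~ -0.14 for this toy data: weakly negative,
                               # so CTR tells you little about conversions
```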

2. Eyeballs are engagement: When is viewing an ad more important than a click?

I recently spoke to my teenager about how she views ads on her devices. She shared how excited she was to see a La Roche-Posay ad on YouTube. As a skin-care beauty junkie, she was eager to purchase the newest addition to the product line and immediately asked her father to buy it for her. Two things struck me. First, she liked the personalization, a shift from the “big brother” conversations I had a decade ago. Second, and more relevant to measurement, her reaction wasn’t to click on the ad or even to come back and purchase online later; viewing the ad had, in fact, aided her customer journey to purchase the product offline.

Measuring True Impact

If clicks aren’t a valuable metric in display and view-through conversions only tell part of the story, how do we truly measure success? Going beyond my teenager’s personal experience, we at Quantcast believe in incrementality:

Incrementality: Measures the impact of a single variable on an individual user’s behavior. 

To measure this, Quantcast has been running campaigns to demonstrate how well our digital media is working, irrespective of whether someone clicked on or merely viewed an ad. To do this, we set up:

Incrementality Testing: Measuring how a specific marketing outcome (e.g., a site conversion) was causally influenced by a media channel or tactic, in this case display, over a set time period and budget, in order to isolate the impact of a particular digital advertising tactic on sales. This methodology involves serving a Public Service Announcement (PSA) placement to a control group and a branded ad to the test (exposed) group, then comparing the results. If the control group is randomly selected, both groups should be behaviorally, demographically, and psychologically similar, with the only difference between them being which ad they viewed.

Test / Exposed Group: Within display incrementality testing, the audience receiving the branded ad. 

Control Group: Within display incrementality testing, the audience receiving the PSA ad. 
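In practice, one common way to get a randomly selected control group is to hash a stable user identifier into a bucket so that each user lands in the same group on every ad request. Here is a minimal sketch, assuming a hypothetical campaign salt and a 50/50 split:

```python
# Sketch: deterministic assignment of users to the test (branded ad)
# or control (PSA) group. The salt, split ratio, and user ID format
# are hypothetical.
import hashlib

def assign_group(user_id: str, salt: str = "campaign-123",
                 control_share: float = 0.5) -> str:
    """Hash the user ID so assignment is stable across ad requests."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "control" if bucket < control_share else "test"

print(assign_group("user-42"))  # the same user always lands in the same group
```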

In our Q2 2021 Quantcast incrementality studies, we are seeing incremental lifts between 5% and 36% on recent campaigns. This methodology of applying a PSA control group is also relied on by Facebook, Google, and Epsilon.
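The lift itself is just the relative difference between the exposed group’s conversion rate and the control group’s. As a back-of-the-envelope sketch (the group sizes and conversion counts are hypothetical):

```python
# Sketch: computing incremental lift from a PSA incrementality test.
# Group sizes and conversion counts are hypothetical.

def incremental_lift(test_conversions: int, test_size: int,
                     control_conversions: int, control_size: int) -> float:
    """Relative lift of the exposed group's conversion rate over the
    PSA control group's conversion rate."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    return (test_rate - control_rate) / control_rate

# Exposed group saw the branded ad; control group saw a PSA.
lift = incremental_lift(test_conversions=1_360, test_size=100_000,
                        control_conversions=1_000, control_size=100_000)
print(f"Incremental lift: {lift:.0%}")  # Incremental lift: 36%
```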

Clicks to a Customer Journey

Ultimately, digital advertising cannot be ruled by the ruler. For marketers to drive the right outcomes, they need to set the right incentives. Incrementality can act as the first step in measuring the true impact of an ad on conversions, irrespective of clicks. 

Learn More

For more on incrementality testing, see our blog post here. To understand how to integrate this measurement into your next campaign, reach out to your account manager or contact us.

This blog post is the second in a series highlighting terms from our Ad Tech Glossary of Terms, a handy reference tool for reviewing terminology on your own. If you missed the first post, The ABCDs of Ad Tech: Audience, Behavioral, Cookie, and Data, you can read it now.