Viewthrough & Incrementality Testing

A common question in digital marketing measurement is “what was the lift?” or “what is the incremental benefit?” from a particular promotional campaign. Most of us would agree that the alternative of relying on display ad clicks alone for display response measurement is just wrong. When measuring display media, incrementality testing is absolutely essential to properly gauge the baseline viewthrough effect.


In order to answer this question, a test and control methodology (or control and exposed) should be used, i.e. basic experimental design where a control group is held out that is otherwise identical to the group being tested with the marketing message. This is even more important when marketing “up the funnel,” where a last-click or even last-touch measurement from a site analytics platform will mask impact.

Email marketers, with their heritage in direct marketing techniques, have been doing this for years. It is often fairly straightforward: the marketer knows the entire audience or segment population and holds back a statistically meaningful group, which enables a general assertion about the campaign’s actual lift or incremental benefit. Control and exposed can also be done with display media if the campaign is configured properly to split audiences, eliminate overlap and show the control group a placebo ad. Often PSAs (public service ads) are used, which can be found via the AdCouncil.
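To make the split concrete, here is a minimal sketch of one common way to assign users to control vs. exposed groups without overlap, assuming each user has a stable identifier (e.g. a cookie ID). The function name and the 10% holdout default are illustrative assumptions, not from the original post; the point is that hashing the ID makes assignment deterministic, so a user always lands in the same group and the control group can consistently be served the placebo (PSA) creative.

```python
import hashlib

def assign_group(user_id: str, control_share: float = 0.1) -> str:
    """Deterministically assign a user to 'control' or 'exposed'.

    Hashing the stable user ID guarantees the same user always gets
    the same group, which eliminates overlap between the two cells.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return "control" if bucket < control_share * 100 else "exposed"

# The same user ID always maps to the same group across ad calls.
print(assign_group("user-123"))
```

Because assignment is a pure function of the ID, no shared state is needed across ad servers, which is why hash-based splits are a common pattern for this kind of test.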

This technique is routinely used for qualitative research, i.e. brand lift study services like Vizu, InsightExpress, Dynamic Logic and Dimestore. It is the best way to isolate the impact of the advertising; read more about the challenges of this kind of audience research in Prof. Paul Lavrakas’s study for the IAB.

Calculating Lift and Incrementality

Dax Hamman from Chango and Chris Brinkworth from TagMan were recently kicking around some numbers to illustrate how viewthrough can be measured; some of that TOTSB covered a while back in Standardizing the Definition of Viewthrough. For the purposes of this example, clickthrough-derived revenue will be analyzed separately and fractional attribution will not be addressed. In this example, both control and exposed groups are the same size, though this can be expensive and is usually unnecessary when statistical sampling is used.

  • Lift is the percentage improvement = (Exposed – Control)/Control
  • Incrementality is the beneficial impact = (Exposed – Control)/Exposed
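The two formulas above can be sketched as simple functions; the conversion counts below are hypothetical, used only to show the arithmetic.

```python
def lift(exposed: float, control: float) -> float:
    """Percentage improvement of the exposed group over the control."""
    return (exposed - control) / control

def incrementality(exposed: float, control: float) -> float:
    """Share of the exposed group's response attributable to the ads."""
    return (exposed - control) / exposed

# Hypothetical example: 500 conversions in the exposed group vs. 400
# in an equally sized control group.
print(lift(500, 400))            # 0.25 -> 25% lift
print(incrementality(500, 400))  # 0.20 -> 20% incremental
```

Note that the two metrics answer different questions: lift compares against what would have happened anyway, while incrementality expresses what fraction of the observed exposed-group response the campaign actually drove.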

In addition to revealing the lift in rates like viewthrough visit rate, conversion rate and yield per impression, articulating the incrementality rate also reveals the baseline percentage – it is simply the complement of incrementality (100% = incrementality % + baseline %). Incrementality, or incremental benefit, can be used to calibrate other similar campaigns’ viewthrough response – “similar” being the operative word.
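As a brief sketch of that calibration idea: given an incrementality rate measured in a prior control/exposed test, the baseline share falls out as its complement, and the rate can then discount a similar campaign’s raw viewthrough numbers. All figures here are hypothetical.

```python
# Incrementality rate measured in a prior control/exposed test
# (hypothetical figure for illustration).
incrementality_rate = 0.25

# The baseline share is the complement: 100% = incrementality % + baseline %.
baseline_rate = 1.0 - incrementality_rate

# Calibrating a similar campaign: of the raw viewthrough conversions
# observed, only the incremental share is credited to the ads.
raw_viewthrough_conversions = 1000
estimated_incremental = raw_viewthrough_conversions * incrementality_rate

print(baseline_rate)          # 0.75
print(estimated_incremental)  # 250.0
```

The caveat from the text applies: this only holds to the extent the new campaign is genuinely similar in audience, creative and media mix to the one that was tested.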

Executing an Incrementality Test

PSA studies are simple in concept but often hard to run. Some out there advocate a virtual control, which is better than no control but not recommended. This method does provide a homogeneous group from an offline standpoint, so, all things being equal, if TV and circulars are running it is safe to assume both test and control are exposed to the rest of the media mix equally. ComScore even came up with a clever zero-control predicted methodology for their SmartControl service.

Most digital media agencies have experience designing tests and setting up ad servers with the exact same audience targeting criteria across test and control. Better ad networks out there encourage incrementality testing and will embrace the opportunity to understand their impact beyond click tracking.

Was this helpful? If so, let me know and I’ll share more.

5 thoughts on “Viewthrough & Incrementality Testing”

  1. ARJUN KM

    Very knowledgeable, Thedom. You can participate in the webinar being hosted by Infosys BrandEdge on July 18, 2012 about ‘Accelerating Global Digital Marketing’, which can give you more insights about current trends in digital marketing.
    You can follow the link below to register
    http://bit.ly/PLkpoM

  2. TV James

    This was helpful, thanks. I knew the answer (from Adobe Target) but I wanted to calculate it myself. I found that to match Adobe, I had to add “0-” (zero minus) to the front of the equations – the calculation was showing negative where Adobe had positive and vice versa. Not questioning your calculation, questioning my understanding. (Can you help me learn more about why these calculations vary?) Thanks.

    1. dgtassone@mobilito.com Post author

      No idea about that…Adobe Target is normally used for landing page optimization and multivariate testing. Were you doing that or a display media test?

Leave a Reply