Category Archives: branding

DAA Chicago Symposium 2013

Connecting the Dots: Optimizing the Customer Experience in an Omnichannel World

Tuesday, September 17, 2013
10:30 – 12:00pm Student & Entry Level Primer
12:00 – 1:00pm Registration, Networking & Exhibit Browsing
1:00 – 5:30pm Symposium
5:30 – 7:00pm Cocktails & Reception

The Mid-America Club
200 E Randolph Drive, 80th Floor
Chicago, Illinois 60601

http://www.digitalanalyticsassociation.org/symposium2013-chicago

New for 2011! Standardizing the Definition of View-through

It has been some 17 years since the advent of the Netscape Web browser in 1994 and almost as many years since the first AT&T banner ads were served on HotWired.com. Back then, the Internet was heralded as the most accountable medium ever.

Fast-forward to 2010: the digital advertising industry has gone mainstream and will likely generate more than $25 billion (US only). At the same time, a subject that concerns far too many people is declining or flat clickthrough rates. Last week’s gushing “news” from a rich media vendor that clickthrough rates have supposedly leveled off after years of decline is a good example.

Definition of Insanity
In a business that obsesses over such meaningless metrics, the digital advertising industry simply cannot continue worrying about clickthrough rates. Although most will recognize that the novelty of clicking banner ads has largely worn off, this measure – which still provides almost no insight into the effectiveness of most campaigns – just won’t go away, regardless of marketing objectives.

In the no man’s land somewhere between the ad server and site tracking is an analytics oddity called viewthrough: a useful albeit tortured metric. Testament to this is that not one of the major industry trade groups – IAB, OPA, ARF or the WAA – recognizes viewthrough by including it in their standards glossaries.

Despite early work by DoubleClick, plenty of practitioner interest and ongoing research by ComScore, the digital advertising industry still somehow lacks an official definition of a viewthrough. At times, it seems we are all too often measuring what is easy or expedient. And clearly, that is not working for the industry.

Here is a sampling of viewthrough articles over the last 10 years:

  • Lilypad White Paper Response Assessment in the Web Site Promotional Mix (2/1997) A very early attempt by the author to describe the phenomenon in the context of measuring ad response; see the “Online Awareness Model of Banner Advertising” diagram under Promotional Models. At that time there was not yet a way to measure such passive behavior.
  • Conversion Measurement: Definitions and Discussions (9/2003) An early article that focused on “people that ultimately convert but did not click”. Technically, viewthroughs are not people and are probably better described as visits. Also, depending on the campaign objective, a conversion event is optional.
  • Neglecting Non-Click Conversions (11/2003) A pretty thorough piece on the subject, although the term “viewthrough” is not used and there is again an emphasis on conversion.
  • Lies, Damn Lies and Viewthroughs (8/2005) Again, the focus is exclusively on viewthrough conversion, which is clearly a trend. However, it is misleading as it misses all the non-converter traffic.
  • The Most Measurable Medium? We Still Have A Lot To Do! (9/2007) David Smith actually made a literal plea for the industry trade groups to define viewthrough. A great idea; unfortunately, it fell on deaf ears, and several years later not much has changed.
  • Why view-through is a key component of campaign ROI (9/2010) provides a more balanced look at what viewthrough is but still brings up conversion. Also, the acronym “VTR” is confusing, as most people would take that to mean a viewthrough rate, similar to how “CTR” means clickthrough rate.
  • Different Views of View-Through Tracking (10/2010) More of the same, although this article actually quotes Wikipedia (scary) and further convolutes the matter by referencing a Google Display Network definition that focuses on viewthrough conversion. Consistent with the theme, the term VTR is used to mean viewthrough conversion rate, not view-through rate – two very different measures. On the upside, the potential of viewthrough for media planning and optimization was right on.

Curiously absent from the ongoing discussion is what viewthrough inherently represents: measurable incremental value from an affirmative, self-directed post-exposure response. With just syndicated panels and qualitative market research to divine results, traditional electronic media could never quantify this.

At the same time, the advertising industry now has over 10 years of similar “directional” qualitative research focused on the familiar yet ephemeral measurements of post-exposure attitudes and intentions (notoriously unreliable). Many see these brand lift studies as rife with data collection challenges and ultimately of dubious value. Just this year, Professor Paul Lavrakas on behalf of the IAB released a critical assessment of the rampant practice.

Parsing The Metrics
It is bizarre that many digital marketers insist on defining viewthrough rates in conversion terms while clickthrough rates are always measured separately from subsequent conversion rates. Mixing metrics has confused the matter and effectively left viewthrough held to the higher standard of conversion. Ironic, since very few clickthroughs (in volume and rate terms) even result in conversions.

While “clickthrough rate” is always understood to be relative to impressions (# of clicks / # of impressions), “viewthrough rate” seems to have skipped the middle response step and gone all the way to conversion. That doesn’t make sense when there are so many other factors that influence the purchase decision after arriving on a Web site.

To be very specific, viewthrough rate (VTR) should be calculated the same way, i.e. # of (logical) viewthroughs / # of impressions. “Logical” means that the viewthrough is observed where a branded post-exposure visit is most likely to happen, analogous to the target landing page of a clickthrough; usually this means the brand.com home page.
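To make the parallel concrete, here is a minimal sketch of the two rates side by side; the counts are invented for illustration:

```python
# Minimal sketch: clickthrough rate vs. viewthrough rate, both measured
# against the same denominator (impressions). All counts are hypothetical.

impressions = 1_000_000    # ads served
clicks = 800               # immediate ad clicks
viewthroughs = 12_000      # logical post-exposure site visits, no click

ctr = clicks / impressions         # clickthrough rate
vtr = viewthroughs / impressions   # viewthrough rate, same middle "response" step

print(f"CTR: {ctr:.3%}")   # 0.080%
print(f"VTR: {vtr:.3%}")   # 1.200%
```

Note that neither rate says anything about conversion; that is a separate, downstream measure.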

Measurement Details
The real problem underlying the apparent confusion is that viewthrough measurement invokes several simultaneous, inter-related and often technical debates: branding vs. response, optimization, cookie deletion, cookie stuffing, panel recruitment bias, correlation vs. causation and last-click attribution. Any one of these arguments can cause a fight.

Nonetheless, in defining what viewthrough actually means, it helps to review the two basic ways of measuring view-through (a simple lift calculation follows the list):

  1. Cookie-based: This is a browser-server technique that relies on cookie synchronization between the ad server and the target brand site. When the user receives the ad, a cookie is set on their browser that is later recognized upon a visit to the target site, which is then matched via special page tags back to the associated campaign. There are several ways this can be done, e.g. DART for Advertisers (DFA)/Atlas/Mediaplex page tags, ad server integrations (Omniture) or ad unit ridealong pixel tracking (Coremetrics). Optionally, PSA campaigns can be run alongside a campaign for a simultaneous test-control comparison of “true” viewthrough lift; essentially you can measure a baseline amount of viewthrough traffic that would end up at the site anyway. Downside: subject to browser cookie limitations.
  2. Panel-based: Alternately, a standing Internet behavioral panel can be utilized, e.g. ComScore and Compete. In this approach, two comparable groups are observed: an exposed test group and an unexposed control group that represents the baseline viewthrough. The difference between the rates at which the test group (exposed to the ad campaign) and the control group (served PSAs or other advertisers’ ads) subsequently visit the target site reveals the lift explained by the presence of display advertising. This method may also include ad or page tracking, but does not require cookies. Downside: subject to panel bias.
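Whichever method collects the data, the test-control arithmetic is the same. Here is a minimal sketch, with hypothetical group sizes and visit counts:

```python
# Minimal sketch of "true" viewthrough lift from a test/control split.
# Group sizes and visit counts are hypothetical.

exposed_visitors, exposed_size = 3_600, 100_000   # saw the live campaign
control_visitors, control_size = 1_200, 100_000   # saw PSAs instead

exposed_rate = exposed_visitors / exposed_size    # 3.6% visited the site
baseline_rate = control_visitors / control_size   # 1.2% would have visited anyway

lift = exposed_rate - baseline_rate               # incremental visit rate: 2.4%
relative_lift = lift / baseline_rate              # 200% over baseline

print(f"Incremental visit rate: {lift:.2%}")
print(f"Relative lift vs. baseline: {relative_lift:.0%}")
```

The baseline subtraction is the whole point: it separates visits the advertising caused from visits that would have happened regardless.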

The Impact of Time
Next, an additional layer to viewthrough measurement that is worth mentioning is time, i.e. delayed response. Like traditional advertising media, display ads exhibit an asynchronous response curve where the effect of the advertising decreases over time. In our real-time data collection world, it seems the common-sense realities of human behavior are often overlooked.

Many factors can impact the viewthrough response curve, including messaging, frequency, share of voice and creative execution, to start. And one size does not fit all: a considered purchase could reasonably have a longer shopping cycle than a CPG. Depending on the method of measuring view-through, 30 days or 4 weeks is typically used as an initial “lookback window.”
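In practice, the lookback window is just a filter on the gap between last exposure and visit. A minimal sketch, with hypothetical timestamps and a 30-day window:

```python
# Minimal sketch: count a visit as a viewthrough only if it falls within
# the lookback window after the user's last ad exposure. Data is hypothetical.

from datetime import datetime, timedelta

LOOKBACK = timedelta(days=30)

last_exposure = {  # user id -> most recent ad impression
    "u1": datetime(2011, 3, 1),
    "u2": datetime(2011, 1, 15),
}
site_visits = [  # (user id, visit timestamp)
    ("u1", datetime(2011, 3, 20)),  # 19 days after exposure -> counts
    ("u2", datetime(2011, 3, 20)),  # 64 days after exposure -> expired
]

viewthroughs = [
    (uid, ts) for uid, ts in site_visits
    if uid in last_exposure and timedelta(0) <= ts - last_exposure[uid] <= LOOKBACK
]
print(len(viewthroughs))  # 1
```

A decaying weight could replace the hard cutoff, but a simple window is the common starting point.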

Et Cui Bono?
Although that was fairly straightforward, as soon as viewthrough is connected to a site conversion (through deeper page tracking), the thorny issue of attribution arises (and cookie-based measurement is implied). Viewthrough measurement often goes off on a tangent at this point because there are two layers to attribution.

  • Channel attribution is, simply put: which digital channel is assigned credit for the conversion event? Measuring display advertising happens to be more complex, and most site metrics tools punted on this capability. That means simpler response channels like paid search, natural search, affiliates, CSEs and email receive last credit by default (see the last-click sketch after this list). For many marketers, measuring conversion attribution or participation gets complex – and often political – very quickly.
  • Media attribution gets really contentious, especially for lead generation and ecommerce-oriented marketers. Performance ad networks often insist on having their own special page tag in place where the conversion event occurs; in this way they can independently measure conversions and potentially optimize their ad delivery. The problem is that there are usually multiple ad network vendor tags on the conversion event page, and all of them will count the page load as a conversion. Worse, this is an easy way for the ad network to shoehorn in a retargeting cookie pool for themselves. Unchallenged, media vendors may claim credit for everything, such that marketers end up overpaying for the same conversion. Alternately, some very Byzantine schemes have arisen to guesstimate credit.
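To see why the last-credit default buries display, here is a minimal last-click sketch over one hypothetical touchpoint path; a display view is rarely the final touch, so it gets zero credit:

```python
# Minimal sketch of last-click (last-touch) attribution: all conversion
# credit goes to the final touchpoint. The path is hypothetical.

path = [  # (channel, touch type), in chronological order
    ("display", "impression"),    # viewthrough exposure
    ("display", "impression"),
    ("paid_search", "click"),     # last touch takes 100% of the credit
]

def last_touch_credit(path):
    """Assign all conversion credit to the final touchpoint's channel."""
    credit = {channel: 0.0 for channel, _ in path}
    credit[path[-1][0]] = 1.0
    return credit

print(last_touch_credit(path))
# {'display': 0.0, 'paid_search': 1.0} -> display's contribution vanishes
```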

Despite all of the above, here is a working definition of a viewthrough for 2011:

Definition of View-through
Viewthrough is a measure of the passive but self-directed impact from a particular display ad unit (banner, rich media, video or audio). The viewthrough event follows one or more ad exposures and, when the ad unit is clickable, can be post-click (the initial click visit timed out) or post-impression (no click at all). Importantly, a viewthrough may or may not be associated with a purchase conversion event, but it must be associated with a target page load or other high-value action. VTR, or viewthrough rate, is calculated as # of viewthroughs / # of impressions.

Viewthroughs decay over time from ad exposure. In-flight viewthroughs are observed during the live ad campaign, while the post-flight “vapor trail” begins immediately after the associated ad is served.

Don’t like this definition? Come up with a better one or edit the above…and the sooner the better, or the industry might get stuck with this sketchy Wikipedia entry.

Early take on Post-impression Viewthrough: Lilypad White Paper

An oldie but goodie:

A white paper about the Streams Lilypad analytics tool that I wrote back in 1996. Oddly, the industry has become search-obsessed and has barely advanced with respect to post-impression and viewthrough tracking since then. DART for Advertisers (DFA), Atlas and Mediaplex all measure viewthroughs; ComScore routinely provides research about this passive asynchronous behavior.

Response to Commenter Jaffer Ali’s "Driving In The Rear-View Mirror"

Jaffer Ali, the semi-retired CEO of EVTV1 and a jovial industry colleague, is usually good for some creative commentary and periodic fire-starting.

Today’s piece, “Driving In The Rear-View Mirror,” published in MediaPost, required a response, as I completely disagreed with it.

http://www.mediapost.com/publications/?fa=Articles.showArticle&art_aid=113117

This notion is nothing new. In fact, I know it has been on the radar since the Web 1.0 days; check out the original 1997 Lilypad white paper discussing time-shifted response behavior and measurement:

http://www.seicheanalytics.com/consulting/lilypad.html#bapm

(See Figure #C.)

RSS Advertising Part II – The Wild West meets the Measurement Crater

Measurement…The Wild West

As prefaced in Part I, identifying and reaching an online audience can be a challenge – especially if you are after the evasive Techfluentials.
The reality is that right now, measurement is the Wild West of the online advertising industry. As marketers (gold-seekers) demand more and more accountability for their spend, software and media vendors continue the cycle of launch, failure and/or consolidation in a mad scramble to sell pick-axes to those after online gold.

Choosing to measure in-feed RSS advertising with oversold site metrics tools just makes things even wilder. The reality is that there are serious technical limitations to consider when planning how to measure success in advertising to this target audience. Without proper planning, however, marketers are left with a measurement gap of epic proportions.

First, a quick primer to frame this classic situation. As any trained marketer knows, there are basically two different objectives of advertising, and your ad creative can typically do one of the following well:

  1. Branding. In other words, making an impression and/or changing perceptions. Often very important but very difficult to measure accurately. For this reason, measuring branding is more complex and almost always requires either a custom study or syndicated (shared, usually generic) sample-based research. Right now, ad effectiveness services like Dynamic Logic, InsightExpress, Nielsen and ComScore are not yet working with RSS feed networks. This means that qualitative audience research requires a custom study that is considerate of the RSS user’s environment – and this may incur additional costs.
  2. Response. In the other corner is the quantitative measurement of tangible results such as sales, leads, impressions, clicks, time spent and the like. While it is much easier to technically measure the entire universe of such activities, in-feed RSS advertising success is a double-edged sword. Clicks (response) may be huge and the clickthrough rates great (relative to banners); with so much industry hand-wringing over declining CTRs, clearly having a bounty of clicks is a good problem to have these days!

The Measurement Crater

For the purposes of this series, let’s say that you just want to measure response as the primary success metric for an RSS ad campaign. Unfortunately, if you or your client are depending on a JavaScript-tag based landing page tool to measure consumer response, you will likely experience something akin to this:

[Image: the measurement crater]
What happened? Wondering where the clicks went? How many visits came from suchnsuch.com’s RSS feed? Did they buy? Did they come back? Curious as to why you can’t determine what they did after they landed? As am I.

Newsflash: JavaScript tag-based Site Metrics have Limitations

Online marketers primarily interested in measuring response from an RSS campaign just found one. While many enterprise site metrics vendors brag about their simple, “just add our tag to your footer” implementation (Omniture, Google Analytics, Coremetrics)…if only it were that easy to get usable information.

The harsh technical reality is that JS tag-based systems require the browser to execute their special tag when the landing page is rendered. That is very different from server-side site metrics tools, which track every access by definition. The main problem with relying exclusively on these tag-based approaches is that they cannot count accesses that originate from JS-disabled browsers or altogether JS-incompatible applications. In other words, these popular site metrics tools are essentially blind to any browser or other traffic (including robots and spiders) that does not execute JS; I’m not going to get into the cookie deletion argument either.
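To illustrate the difference, here is a minimal sketch of server-side counting from a raw access log, which registers every fetch whether or not the client runs JavaScript; the log lines and feed path are hypothetical:

```python
# Minimal sketch: server-side log counting sees every access, including
# feed readers and JS-disabled clients that page tags never record.
# Log format and feed path are hypothetical.

import re
from collections import Counter

access_log = [
    '1.2.3.4 - - [10/Oct/2008] "GET /feed.xml HTTP/1.1" 200 "NetNewsWire/3.1"',
    '5.6.7.8 - - [10/Oct/2008] "GET /feed.xml HTTP/1.1" 200 "Blazer/4.5"',
    '9.9.9.9 - - [10/Oct/2008] "GET /landing HTTP/1.1" 200 "Mozilla/5.0"',
]

pattern = re.compile(r'"GET (?P<path>\S+) HTTP[^"]*" \d+ "(?P<agent>[^"]*)"')

hits = Counter()
for line in access_log:
    m = pattern.search(line)
    if m:
        hits[(m.group("path"), m.group("agent"))] += 1

for (path, agent), n in hits.items():
    print(f"{n:>3}  {path}  {agent}")  # feed readers show up here, not in JS tags
```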

Suffice it to say that with RSS advertising to Techfluentials, tracking non-JS accesses becomes your problem. To put it in marketing terms, here’s why:

  1. Techfluentials use standalone desktop RSS Feed Readers/Aggregators (non-browser applications).
  2. Techfluentials access the Web via mobile devices in a browser environment that is even less likely to execute JS tags (my Treo 755p uses Palm Blazer 4.5, which offers the disable option).
  3. Techfluentials ALSO deliberately/religiously disable JS in their browsers (not to mention deleting cookies).

In other words, your most valuable segment is missing from the numbers. What to do about it?

To Be Continued…

RSS Advertising Part III – Solutions to this Mess

Why the Click Is the Wrong Metric for Online Ads

Story in AdAge by Abbey Klaassen. Amazing that in 2009 this is still being written about…and the comments are even more telling. Here’s mine:

Wow. It’s great that this issue is being raised. However, this is nothing new. It’s been the same story since the Web 1.0 days – this could have been written over 10 years ago…actually, I did write it, in the Lilypad white paper!

The twist today is that too many so-called online “marketers” either protest too much (they like the effect of branding but want to pay like performance) AND/OR opt for the easy way out and fall back on search – a FUD mentality. The former is solved by better media-side negotiating, and the latter by training and education.

It is called branding and it’s about way more than measuring clicks.

If a Tree Falls in a Forest…

…and they don’t see your ad. Did it impact your brand?

Back at it in SF.

The choice of tool for ad effectiveness research can help answer the age-old “if a tree falls in a forest” riddle as it relates to online media. Such online research has come a long way since studying it with Professor Stasch back at Loyola’s GSB.

A question from Cindy on the WAA forum that struck close to home:

“Can anyone suggest a product that is similar to Dynamic Logic. Dynamic Logic is a good product, but there may be certain campaigns or initiatives that would be better served by a similar vendor. Any ideas are welcome.”

At GSF, we found that recruitment for media research is an ongoing challenge spanning study design, questionnaire development, funding, media partner alignment, statistical significance, recruitment technicalities and results preparation.

Especially thorny is achieving statistical significance when recruiting a target audience that is the least amenable to being surveyed. However, clients often need this information for internal financial modeling as well as for its marketing value. Some findings:

  • After three quarters with Dynamic Logic (a Millward Brown and therefore WPP company, like GSF) on a pure-intercept basis, it was decided that we would pass on more of the same; InsightExpress offers a similar approach.
  • ComScore offers a combined panel-based and intercept service leveraging similar control vs. exposure methods (BrandMetrix & CampaignMetrix). Sounds very promising.
  • Nielsen offers a solution in this vein also but has a somewhat smaller (but growing) panel; they did offer an interesting post-exposure email option that was novel.

ComScore shows a lot of potential for a number of reasons: one interesting by-product of the panel-based approach is the potential to understand the context of what users were doing before and after being exposed to an advertising message – not just the clickers. For example, what is the likelihood of a trademarked or category search, or of visiting the target brand’s Web site, after being exposed? It turns out quite high.

For some background on what you can do with this (and handy industry benchmarks) be sure to check out their “Whither the Click” white paper from last December.

Good luck Cindy!

Study: VOD Brand Recall Impressive

Anthony Crupi writes about VOD effectiveness…it kind of ties into the MTV post. Not sure he entirely gets it, but Nielsen Research’s newly acquired IAG does:

“Viewers were 68% more likely to recall a spot seen in an on-demand context than they would an ad on linear TV…consumers were 83% more likely to identify the marque after being exposed.”

MTV Ahead of Pack Defeating DVR

The New York Times covers “podbusting”…kind of like hard-coding programming and advertising together.

Dario Spina of MTV says, “That’s the idea here; we want to blur the lines between the commercial breaks and the entertainment content.”

Considering that this makes it harder to skip entire commercial breaks via DVR – and given Nielsen’s new commercial ratings for commercial viewership (as opposed to the legacy programming-focused approach) – it makes a lot of sense!