
The Simple Truth About Marketing Measurement

In analytics, measurement is everything. Without the ability to benchmark and compare, we cannot make rational decisions about how to improve the outcomes we want to influence. However, if you think about it, there are really only two ways to measure something – you can observe or you can predict.

Measuring through observation is based on the simple premise that we should test everything and, through trial and error, learn and improve. This includes things like A|B testing or a more sophisticated design of experiments. An experimental test is simply a controlled method of observing and explaining an outcome or event. When it is not controlled, we call it an accident, but as the saying goes, “always learn from your mistakes.” ☺

Well, what really is a test? Effectively, in its simplest form, we are creating two things – a baseline and a treatment (something that we change). The premise of a test is a fundamental comparison of (at least) two things, one of which is the baseline or benchmark that provides context and meaning. For example, if you take the SAT and get a score of 1000 but are not given a benchmark, how do you know how you did? But if you are given the minimum and maximum scores and a national average, you can benchmark yourself for understanding and context.
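
To make the baseline-versus-treatment comparison concrete, here is a minimal sketch in Python. The conversion counts are hypothetical, and the two-proportion z-test is just one common way to judge whether the treatment's lift over the baseline is real or noise; it is not something prescribed above.

```python
# Minimal sketch: comparing a treatment cell against a baseline (control) cell.
# Conversion counts are hypothetical.
from math import sqrt
from statistics import NormalDist

# (conversions, visitors) for each cell
control = (200, 10_000)    # the baseline
treatment = (245, 10_000)  # the thing we changed

p_c = control[0] / control[1]
p_t = treatment[0] / treatment[1]

# Pooled two-proportion z-test: is the lift over the baseline statistically real?
p_pool = (control[0] + treatment[0]) / (control[1] + treatment[1])
se = sqrt(p_pool * (1 - p_pool) * (1 / control[1] + 1 / treatment[1]))
z = (p_t - p_c) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"baseline rate: {p_c:.2%}, treatment rate: {p_t:.2%}")
print(f"lift: {(p_t - p_c) / p_c:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

Without the control cell, the 2.45% treatment rate has no context; with it, the lift and its significance are interpretable.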

In terms of marketing, the most traditional method of testing is holdout or A|B testing. When looking at marketing performance, my typical recommendation for best practice is to always have a randomized control cell as a benchmark. To measure sales or marketing performance, we attribute sales to promotional activity, like email and display, using direct or indirect means. “Direct” simply means through a cookie, promotion code, 800 number or something else that directly ties the promotion to the purchase. With indirect attribution, we assume a connection by creating a response window after the marketing event and looking for purchases during that time period. But we really need a holdout or blackout control to use indirect attribution properly. Both of these attribution methods enable measurement through testing and observation. However, what happens when I cannot make the link at an individual level and figure out who bought what and why?
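
The sketch below illustrates indirect attribution with a response window and a randomized holdout. The customer records, field names, and the 14-day window are all hypothetical; the point is simply that the holdout tells us what would have happened anyway, so the difference is the incremental effect.

```python
# Hedged sketch: indirect attribution via a response window, benchmarked
# against a randomized holdout. Data and field names are hypothetical.
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=14)   # purchases within 14 days of the send count
campaign_date = date(2024, 3, 1)

# Hypothetical customer records: (customer_id, group, purchase_date or None)
customers = [
    ("c1", "mailed",  date(2024, 3, 5)),
    ("c2", "mailed",  None),
    ("c3", "mailed",  date(2024, 4, 20)),  # outside the window -> not attributed
    ("c4", "mailed",  date(2024, 3, 10)),
    ("c5", "holdout", date(2024, 3, 7)),
    ("c6", "holdout", None),
    ("c7", "holdout", None),
]

def responded(purchase_date):
    """Indirect attribution: any purchase inside the response window counts."""
    return (purchase_date is not None
            and campaign_date <= purchase_date <= campaign_date + RESPONSE_WINDOW)

def response_rate(group):
    members = [c for c in customers if c[1] == group]
    return sum(responded(c[2]) for c in members) / len(members)

mailed_rate = response_rate("mailed")
holdout_rate = response_rate("holdout")

# The holdout is the benchmark of what would have happened without the campaign.
print(f"mailed: {mailed_rate:.1%}, holdout: {holdout_rate:.1%}, "
      f"incremental: {mailed_rate - holdout_rate:.1%}")
```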

This is where the second method of measurement comes in – modeling. Many times in life, the problem with testing isn’t changing something; it is our inability to NOT change something, i.e., the inability to create a benchmark or control. This is where predictive modeling can help bridge the gap. Here, we are using the model’s predictive power to create a benchmark, an estimate of what the baseline should have been. The most standard marketing application of this idea is marketing mix modeling. Marketing mix modeling really came out of the CPG space, where manufacturers needed a way to measure the return on their marketing investments. Without visibility into the actual sales of their products (which are sold through third-party retailers), CPG marketers turned to mix modeling to create estimates of performance.
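
As an illustration of the modeling approach, here is a deliberately simplified, marketing-mix-style regression on hypothetical weekly aggregates. Real mix models add adstock, saturation, seasonality, and more; this sketch only shows the core idea that the model estimates a baseline and per-channel effects when individual-level linkage is impossible.

```python
# Minimal, illustrative marketing-mix-style regression on hypothetical
# weekly aggregate data (no individual-level linkage available).
import numpy as np

# Hypothetical weekly data: columns = [TV spend, email sends, display spend]
media = np.array([
    [50, 10, 20],
    [60, 12, 25],
    [40,  8, 15],
    [70, 15, 30],
    [55, 11, 22],
    [65, 14, 28],
])
sales = np.array([520, 580, 470, 660, 545, 620])

# Ordinary least squares with an intercept; the intercept plays the role of
# the "baseline" sales we could not observe directly.
X = np.column_stack([np.ones(len(media)), media])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, tv, email, display = coef

print(f"estimated baseline sales/week: {baseline:.0f}")
print(f"per-unit effects -> TV: {tv:.2f}, email: {email:.2f}, display: {display:.2f}")
```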

Well, despite the fact that there are two ways to measure marketing performance, the right approach is not one or the other; it is really both, and how we integrate the best of the two methodologies. In marketing, proper measurement should be based on a top-down, modeling-based approach that provides directional information about the performance of major media vehicles. This should be integrated with a granular, testing-based, bottom-up approach that looks at individual-level data to compare specifics. Although fundamentally simple, this is wildly complex in practice. But the truth remains that one makes the other better and vice versa. Using modeling to measure without validation through testing is simply flawed. Furthermore, more testing creates more variation in the data and makes the modeling more powerful. Therefore, there is a synergistic relationship between the two, and we should embrace both properly, understand when to use each, and know how best to integrate the two.
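
One hedged sketch of how the two can meet: use an incrementality test to calibrate what the top-down model credits to a channel. All of the numbers and variable names below are hypothetical; this is just one plausible integration pattern, not the integration method described above.

```python
# Hedged sketch: calibrating a top-down model's channel estimate with a
# bottom-up holdout test. All figures are hypothetical.

model_estimated_email_sales = 120_000  # what the mix model attributes to email overall
model_attributed_in_test = 30_000      # what the model credits to email in the tested period/region
test_measured_incremental = 21_000     # incremental sales measured against the holdout

# Calibration factor: how much the model over- or under-credits the channel
calibration = test_measured_incremental / model_attributed_in_test
calibrated_email_sales = model_estimated_email_sales * calibration

print(f"calibration factor: {calibration:.2f}")
print(f"model estimate: {model_estimated_email_sales:,} -> calibrated: {calibrated_email_sales:,.0f}")
```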

As a last point, although the focus here was on marketing measurement, I believe the principle holds true of all measurement: we can observe it through testing and we can predict it through modeling, and any self-righteous analyst (like yours truly) will tell you that using both is always the right answer.
