
Averages Lie: Part 27

Okay, it probably isn't actually part 27, but it is a recurring theme on the RKG Blog that good data analysis is the foundation of good marketing, and bad data analysis is a waste of time. Thanks to Tim Peter for inspiring this latest edition.

One source of the problem is fixating on the wrong "KPIs." For example, many folks spend time worrying about CPCs and whether they're paying more or less per click, on average, from day to day. This is wrong-headed on several levels, but let's dive into just one.

For simplicity's sake, let's reduce the world to two keywords: wagawaga and flimflam.

[Table: Day 1 clicks, CPC, and cost by keyword]

Notice that wagawaga drives much more traffic than flimflam, but flimflam has the much higher CPC. Now let's assume the bid management is rational: the reason Acme is willing to pay $5.00 per click on "flimflam" is that the traffic is actually worth that much to them. Looking at this simple two-keyword scenario in aggregate, Acme's paid search program as a whole drove 1,100 clicks for $650, for an average CPC of $0.59.

Now let's say the smart paid search marketer knows that on Day 2 the value of traffic on the term wagawaga will be higher than normal, whether because of a promotion, because it's the traditional kickoff day of wagawaga season, or whatever. Anticipating that traffic will be worth ~25% more than normal, she bids the keyword wagawaga up by 25% to capture a larger share of traffic while maintaining advertising efficiency.

[Table: Day 2 clicks, CPC, and cost by keyword]

The Director of Marketing storms into the paid search manager's office:
"Why, on a day when we were supposed to be pushing harder, did our average CPC actually fall?!? Heads must roll!"
To add to the picture, let's extend this to some other KPIs that aren't:

[Table: Day 1 vs. Day 2 conversion rate, average order size, and related metrics]

This adds fuel to the Director's fire: "CPCs are down, our conversion rate and average order size dropped... you've got some serious explaining to do!!!"

Thankfully, in this simple two-keyword campaign, the explanation is easy to see. We did in fact push harder on wagawaga; sales and costs increased appropriately; everything went great on Day 2. And not only is it easy to see, it takes no time to pull the information. Fast forward to a program with hundreds of thousands of keywords, and the answers to questions about fluctuations in conversion rate, CPC, AOV, CTR, average position, etc. become much more difficult to see and take hours upon hours to isolate.

Moreover, we don't really care what happens to any of these metrics, because they are not, in fact, key performance indicators. To my thinking, a Key Performance Indicator is something I need to care about for its own sake. Sales volume, the number of quality leads, expenses: these matter because they impact the P&L statement directly. Conversion rate, CTR, AOV, and many, many other metrics like page views, time on site, and unique visitors are useful diagnostic measures that can help us identify problems and opportunities, but we don't care about them for their own sake, or at least we shouldn't.

Success in almost any venture demands making the best use of finite resources, and time is one of those finite resources. Responding to false alarms is one of the great sources of inefficiency and distraction in an organization. Some of this can be eliminated by staying focused on the true KPIs that impact the CFO's world. If those numbers are in line with expectations and look reasonable, fluctuations in second-order statistics can be investigated or ignored depending on other priority tasks.
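The mix shift that confuses the Director is easy to verify with quick arithmetic. Here is a minimal sketch; the per-keyword split is hypothetical (the post only gives the Day 1 aggregates of 1,100 clicks and $650, which these numbers match), and the Day 2 figures are invented for illustration: wagawaga's CPC rises ~25% and it captures more clicks, flimflam is unchanged, yet the blended average CPC falls because cheap clicks now make up a larger share of the mix.

```python
# Hypothetical per-keyword data: (clicks, CPC). The Day 1 split is chosen
# to be consistent with the aggregates stated in the post.
day1 = {"wagawaga": (1000, 0.15), "flimflam": (100, 5.00)}
# Day 2: wagawaga bid up ~25% (CPC 0.15 -> 0.19) and capturing more
# traffic; flimflam unchanged.
day2 = {"wagawaga": (1400, 0.19), "flimflam": (100, 5.00)}

def avg_cpc(day):
    """Blended average CPC: total cost divided by total clicks."""
    clicks = sum(c for c, _ in day.values())
    cost = sum(c * cpc for c, cpc in day.values())
    return cost / clicks

print(f"Day 1 avg CPC: ${avg_cpc(day1):.2f}")  # $0.59
print(f"Day 2 avg CPC: ${avg_cpc(day2):.2f}")  # $0.51
```

Every keyword's CPC rose or held steady, yet the average CPC dropped from $0.59 to $0.51. That's the whole "averages lie" effect: a weighted average moves with the weights, not just the values.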
RKG's platform includes all kinds of warning flags to alert our analysts to anomalous behavior, but these warnings are calibrated to recognize some level of normal statistical variance and only throw flags when the variance is statistically meaningful. Too many 'false alarms' means the system isn't tuned properly. The more time spent on activities that drive the numbers, and the less time spent explaining variances in secondary indicators, the better the program will perform over the long term.
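The idea of only flagging statistically meaningful variance can be sketched in a few lines. This is a hypothetical z-score check, not RKG's actual system: a metric is flagged only when today's value sits more than `z_threshold` standard deviations from its recent history, so ordinary day-to-day noise never raises an alarm.

```python
import math

def flag_anomaly(history, today, z_threshold=3.0):
    """Return True only if `today` deviates from the mean of `history`
    by more than z_threshold sample standard deviations.
    (A hypothetical sketch of variance-aware alerting.)"""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / (len(history) - 1)
    std = math.sqrt(var)
    if std == 0:
        return today != mean  # no historical variance: any change is notable
    return abs(today - mean) / std > z_threshold

# A week of average CPCs hovering around $0.59:
recent = [0.58, 0.60, 0.59, 0.61, 0.57]
print(flag_anomaly(recent, 0.60))  # False: well within normal variance
print(flag_anomaly(recent, 0.51))  # True: a statistically meaningful drop
```

Raising `z_threshold` trades sensitivity for fewer false alarms, which is exactly the tuning problem described above.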