
More Isn't Always Better

Occasionally, RKG is taken to task by the engines for not toeing "the Party Line" with our clients. The Party Line is essentially this:
"Don't get too caught up in measuring ROI in your paid search efforts. There are so many factors that make it impossible to measure accurately, like phone spillover, like channel cannibalization, like retail spillover, like cookie breakage, that you'll grossly underestimate the real value of search. The value to brand building of being in the top 3 spots on high traffic keywords is huge and while the benefits may be hard to trace, they're large. Retailers should just spend more."
To bolster these claims, the engines and some agencies who toe the line with them often cite studies suggesting a massive connection between online research and offline purchasing behavior. The In-Store Marketing Institute has published a study along these lines, and someone has used slides from Forrester on this subject at every conference I've attended in the last year or so -- usually the same slides -- citing something on the order of six offline orders driven by online research for every online order placed. But what does this shopping behavior mean for marketers? For pure plays and catalog firms with few physical stores, obviously not much. But even for brick-and-mortar chains: how should you adjust your advertising given these data? Our advice is: very carefully. The problem with the research is that it doesn't answer the questions that matter regarding the influence of online ads, namely:
  • What fraction of the offline purchases went to the same company on whose website the research was done? Some of these shoppers likely already know where they're going to buy the product; they're just trying to figure out which model to buy, and any website with good product comparison functionality will do. It doesn't warm the cockles of my heart to have my advertising dollars and web design helpfully steer people toward a purchase at my competitor's retail store.
  • Of the folks who do buy at one of your retail stores after visiting your website: how did they find the website? Did they simply load the site directly, or search for your site by name? If so, your advertising dollars aren't responsible for those offline orders; the power of your brand drove them.
  • How many of the folks doing "online research" just used the website to locate the nearest store, or print off a coupon?
  • Once you've discounted the extra orders driven by the above, what fraction of the remainder found your site through organic links rather than your paid advertising? There's no sense crediting advertisements if the "free" listings deserve the credit. (A rough back-of-the-envelope version of this discounting follows the list.)
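To make that discounting concrete, here is a minimal sketch in Python. Every fraction in it is a made-up placeholder, not a figure from any study; the only point is to show how quickly a 6:1 headline ratio shrinks once each of the questions above is answered honestly.

```python
# Hypothetical back-of-the-envelope discounting of a "6 offline orders for
# every online order" claim. Every fraction below is an illustrative guess,
# not data from any study.

cited_offline_per_online_order = 6.0

# Share of the researched-online, bought-offline orders that survives each
# question in the list above (all made-up placeholders):
share_bought_from_us   = 0.50  # the rest bought at a competitor's store
share_not_brand_driven = 0.60  # the rest typed the URL or searched the brand name
share_not_locator_only = 0.70  # the rest only wanted a store address or a coupon
share_via_paid_click   = 0.40  # the rest arrived through organic listings

attributable = (cited_offline_per_online_order
                * share_bought_from_us
                * share_not_brand_driven
                * share_not_locator_only
                * share_via_paid_click)

print(f"Offline orders per online order plausibly credited to paid ads: {attributable:.2f}")
# With these made-up fractions, the 6:1 headline shrinks to roughly 0.5:1.
```

Plug in your own estimates for each share; the structure of the multiplication matters more than any particular number.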
At last summer's Internet Retailer Conference I was impudent enough to get up and ask the speaker from Forrester some of these questions, and was quickly silenced with harrumphs to the effect that the data were "directionally accurate." As a physics student I learned that vectors have two attributes: direction and magnitude. If you know the direction but not the magnitude, you don't know much. We all know that online advertising influences some offline purchasing that is helpful to the advertiser, just as offline advertising drives tremendous amounts of online activity. What we don't know is how much.

There is no question that the puzzle is complicated, with the publishers in each channel claiming credit for the orders driven in the other channels. We at RKG know that the publishers' interests and the advertiser's interests are different, and we line up on the side of our clients. We understand that the total marketing spend across all channels can't exceed a certain fraction of top-line revenue across all channels, and that the key is to measure what you can and test different mixes carefully to achieve maximum ROI.

As geo-targeting becomes more robust, testing a big push in limited geographies around retail hubs may help gauge the magnitude of that spillover. Running a collection of geographies a few weeks "on," a few weeks "off," with half toggling on the opposite schedule to help control for seasonal effects, might give a sense of what that spillover looks like (a rough sketch of such a test appears below). We know it's there, and we believe it's material, but ultimately we believe in testing the water before diving in head first.

We'd love to hear your thoughts on the subject!
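Here is a minimal sketch of that alternating geo test in Python. The cohort labels, market names, block counts, and revenue figures are all hypothetical placeholders; the point is only to show how opposite on/off schedules let you compare each market's "on" periods to its "off" periods while washing out seasonal swings.

```python
# Minimal sketch of the alternating geo test described above. All names and
# numbers are hypothetical placeholders, not client data.

# Two cohorts of retail-hub markets toggle on opposite schedules so that in
# every block half the markets are "on" (heavy paid search) and half are "off".
SCHEDULE = {
    "A": ["on", "off", "on", "off"],   # four multi-week blocks
    "B": ["off", "on", "off", "on"],
}

def estimate_spillover_lift(revenue, cohorts):
    """revenue[(geo, block)] -> offline revenue for that market in that block.

    Returns the average ratio of each market's revenue in its "on" blocks to
    its revenue in its "off" blocks. Because the cohorts run in opposite phase,
    seasonal swings hit "on" and "off" blocks roughly equally.
    """
    ratios = []
    for geo, cohort in cohorts.items():
        on = [revenue[(geo, b)] for b, state in enumerate(SCHEDULE[cohort]) if state == "on"]
        off = [revenue[(geo, b)] for b, state in enumerate(SCHEDULE[cohort]) if state == "off"]
        ratios.append((sum(on) / len(on)) / (sum(off) / len(off)))
    return sum(ratios) / len(ratios)

# Toy usage with made-up numbers: two markets per cohort, four blocks each.
cohorts = {"Richmond": "A", "Norfolk": "A", "Austin": "B", "Tucson": "B"}
revenue = {
    ("Richmond", 0): 110, ("Richmond", 1): 100, ("Richmond", 2): 112, ("Richmond", 3): 101,
    ("Norfolk", 0): 55,   ("Norfolk", 1): 50,   ("Norfolk", 2): 56,   ("Norfolk", 3): 51,
    ("Austin", 0): 80,    ("Austin", 1): 88,    ("Austin", 2): 79,    ("Austin", 3): 87,
    ("Tucson", 0): 40,    ("Tucson", 1): 44,    ("Tucson", 2): 41,    ("Tucson", 3): 45,
}
print(f"Average on/off revenue lift: {estimate_spillover_lift(revenue, cohorts):.2f}x")
```

In practice you would also want enough markets and long enough blocks to separate a real lift from noise, but the opposite-phase toggling is the piece that keeps seasonality from masquerading as spillover.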
George