4 Keys for Maximizing the Long Tail

We all know the spiel about "the long tail" of search: low-cost clicks, high conversion rates, and, while the volume of traffic on each term is low, if you glue enough of those low-traffic terms together the impact is material.

The engines hate long tails. I recently heard a Googler say that anyone who mentions the long tail at Google has to put a dollar in a jar. That's understandable: those tails eat up a huge proportion of Google's computing power for a relatively small amount of revenue. It's one of the reasons for the ever-expanding breadth of broad match: "You folks don't worry about keywords, we'll just take a peek at your site and serve your ad whenever we think it's appropriate..."

Savvy marketers know better: segmentation is the key to success in cataloging and emailing, and in search as well. The more granularly you can measure and manage the ROI of these segments, the better the performance of the program.

But how do you know whether your tail is long enough, or whether you're getting as much out of it as you could? We thought sharing some rules of thumb and benchmarks might be helpful.

One general rule is that you should have 5 to 10 keywords per SKU on your site. If the SKUs are all in a few narrow categories, that multiplier may be much lower: you might carry 100,000 different nuts and bolts, but find that 50,000 keywords give you comprehensive coverage of the ways people search for your products. Conversely, you may carry only 500 one-of-a-kind items and need 100,000 keywords to cover all the ways people might describe them.

The number by itself doesn't guarantee quality. You can easily quadruple your term list by adding "shop for...", "buy...", "...online", and "...store" prefixes and suffixes to the existing list without adding any meaningful variation. It's useful to have a smart human scan the full list periodically to look for holes, missing synonyms, etc.

The real measure of the tail is how well it performs. Does the tail drive meaningful volume?
Are the tail and the head equally efficient or close to it? Making the tail work for you is a product of having:
  1. Quality terms, not just quantity
  2. Smart classification schemes. The two-tiered hierarchy of campaign and ad group is insufficient to this task. We'll talk more about that in a subsequent post.
  3. Smart bidding algorithms that can handle low-traffic keywords correctly
  4. Wise use of match types, to prevent the engines from simply serving your high-traffic terms on every search
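On the quality-versus-quantity point above: the mechanical padding that inflates a term list without adding real variation is easy to sketch. The function and term names below are illustrative only, not any client's actual list or tooling.

```python
# Sketch of the mechanical term-list padding described above:
# bolting "shop for"/"buy" prefixes and "online"/"store" suffixes
# onto a core list multiplies its size (here roughly 5x) without
# adding any meaningful keyword variation.

def expand_terms(core_terms):
    prefixes = ["shop for", "buy"]
    suffixes = ["online", "store"]
    expanded = set(core_terms)
    for term in core_terms:
        for p in prefixes:
            expanded.add(f"{p} {term}")
        for s in suffixes:
            expanded.add(f"{term} {s}")
    return sorted(expanded)

core = ["stainless hex bolts", "brass wing nuts"]
print(expand_terms(core))  # 2 core terms become 10 "keywords"
```

A list grown this way looks five times longer but covers no new search intent, which is why a periodic human scan for genuine holes and synonyms matters more than the raw count.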
At RKG our median term list is about 43,000 keywords. Some clients have only 1,200 or so; others closer to a million.

To analyze the performance of the tail, we divided each client's list into buckets based on traffic volume over a two-month period. We then looked at what fraction of keywords each bucket represented, and how each bucket performed in terms of percentage of total sales and costs. The median values for our clients are below -- medians, Jay :-)

[Chart: Performance of the PPC Tail]

First, notice that for our average client only about 6% of the keywords even get an impression in two months (more on this later). Of "active terms" -- those keywords that received an impression -- 85% had fewer than 50 clicks. The chart shows median values, so it would be technically incorrect to sum the medians and say: for the average client, more than 25% of sales come from terms receiving fewer than 300 clicks in two months -- five clicks a day! However, looking at trimmed means rather than medians suggests this is about right.

When you study your own tail, check whether you're getting this much production from your low-traffic terms. Make sure you exclude brand phrases, as they will make your tail look thicker than it is. Also check that the cost-to-sales ratio of the tail is in line with the head. Branding objectives can skew this data: some of our clients insist that ads appear on the first page regardless of economics, so those low-traffic terms might be fairly inefficient. You might also find terms in first position that are "too" efficient but have nowhere else to go. The fact that some buckets are a bit more efficient than others does not always mean there's a problem, but it definitely warrants examination and explanation.

Are we TOO obsessed with term lists? If 94% of keywords don't get an impression in two months, why not get rid of them?
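The bucketing analysis above is straightforward to run on your own keyword data. A minimal sketch follows; the bucket boundaries and field names are illustrative assumptions, not our actual cutoffs or schema.

```python
# Sketch of the tail analysis described above: group keywords by
# two-month click volume, then report each bucket's share of terms
# and sales, plus its cost-to-sales ratio. Bucket boundaries here
# are illustrative, not the actual cutoffs used in the study.
from collections import defaultdict

BUCKETS = [(0, 0, "no clicks"), (1, 49, "1-49 clicks"),
           (50, 299, "50-299 clicks"), (300, float("inf"), "300+ clicks")]

def bucket_of(clicks):
    for lo, hi, label in BUCKETS:
        if lo <= clicks <= hi:
            return label

def tail_profile(keywords):
    """keywords: list of dicts with 'clicks', 'sales', 'cost' keys."""
    totals = defaultdict(lambda: {"terms": 0, "sales": 0.0, "cost": 0.0})
    for kw in keywords:
        b = totals[bucket_of(kw["clicks"])]
        b["terms"] += 1
        b["sales"] += kw["sales"]
        b["cost"] += kw["cost"]
    n = len(keywords)
    total_sales = sum(kw["sales"] for kw in keywords) or 1.0
    return {label: {"pct_terms": t["terms"] / n,
                    "pct_sales": t["sales"] / total_sales,
                    "cost_to_sales": (t["cost"] / t["sales"]
                                      if t["sales"] else None)}
            for label, t in totals.items()}
```

Run this per client, then take medians (or trimmed means) of each bucket's shares across clients. Comparing `cost_to_sales` between tail and head buckets is the efficiency check described above; remember to exclude brand phrases first.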
The answer is simple: because over the next two months 94% of the keywords will again be inactive, but they won't be the same keywords. We pulled data from the same time period the previous year to find out what fraction of keywords in each bucket were also active then. The results prove the point.

[Chart: Fraction of keywords also active the previous year]

We were a bit surprised that only 91% of the highest-traffic terms saw impressions the previous year, but we think this reflects hot new products, new product categories for some of our clients, and ongoing term creation by our analysts. What's truly fascinating is that only 12% of the lowest-traffic terms even got an impression the previous year. It just goes to show that the terms that don't seem to be doing anything are next month's long-tail gems.
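The year-over-year comparison above reduces to a simple set overlap per bucket. A hedged sketch, with illustrative sample data and a hypothetical `bucketer` function standing in for whatever bucket scheme you use:

```python
# Sketch of the year-over-year check described above: for each
# traffic bucket, what fraction of this period's keywords also
# received an impression in the same period last year?
from collections import defaultdict

def overlap_by_bucket(this_year, last_year_active, bucketer):
    """this_year: dict of keyword -> clicks this period.
    last_year_active: set of keywords with an impression last year.
    bucketer: function mapping a click count to a bucket label."""
    seen = defaultdict(int)
    also_active = defaultdict(int)
    for kw, clicks in this_year.items():
        label = bucketer(clicks)
        seen[label] += 1
        if kw in last_year_active:
            also_active[label] += 1
    return {label: also_active[label] / seen[label] for label in seen}
```

If the lowest-traffic bucket's overlap comes back low (12% in the study above), that is the argument for keeping "dead" keywords: the inactive set churns, so pruning it discards next period's producers.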