
Paid Search Performance by Closeness of Keyword to Query Match

A month after originally announcing the change, Google now appears likely to officially launch an update to AdWords' matching behavior for exact and phrase match keywords early next week.  By default, both match types will become misnomers, as queries that are "close variants" will be able to trigger our keywords to display.  Under the umbrella of close variants, Google includes misspellings, singular/plural forms, stemmings, accents and abbreviations.

Nothing too crazy there, but the response to this change among sophisticated search marketers has largely been negative.  At RKG, we have already taken action to opt our clients' campaigns out of this behavior when the change is made.  Thankfully, Google has given us that option.

Why such a response to a seemingly minor change?  George touched on some of the issues we have with near matches last month, but I want to focus on one in particular:

How Closely a Keyword Matches the Query Matters a Great Deal

This isn't news to anyone following our blog over the years.  Five years ago we highlighted the performance differences between broad and exact match traffic and recommended that advertisers segment and bid the two accordingly.  That's advice we had given before and have given innumerable times since.  But Google is just talking about very close variants; those can't matter all that much, right?  Let's take a look:

While RKG took part in close variant testing ahead of the launch, it was a small sample for us.  The results above come from a larger group of RKG clients and offer a view into the precise keyword-to-query matches Google made in April 2012, not necessarily the match type settings of the keywords themselves (a keyword set to broad match may be matched to a query exactly, and so on).

The segments above are mutually exclusive, as I've defined them, and exclude branded keywords.  Let's take a look at each:
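To make the segment definitions concrete, here is a rough Python sketch of how a keyword/query pair might be bucketed.  This is my own simplified illustration -- a naive trailing-"s" plurality check and difflib similarity standing in for real misspelling detection -- not Google's or RKG's actual logic:

```python
from difflib import SequenceMatcher

def classify_match(keyword: str, query: str) -> str:
    """Assign a keyword/query pair to one mutually exclusive segment.

    A simplified sketch: real plurality handling and misspelling
    detection are far more involved than what is shown here.
    """
    kw, q = keyword.lower().strip(), query.lower().strip()

    if kw == q:
        return "exact"

    # Naive plurality check: the two differ only by a trailing 's'.
    if kw.rstrip("s") == q.rstrip("s"):
        return "plurality mismatch"

    # Crude misspelling proxy: high character-level similarity.
    if SequenceMatcher(None, kw, q).ratio() > 0.85:
        return "near misspelling"

    # Phrase: the keyword appears in the query in its entirety.
    if f" {kw} " in f" {q} ":
        return "phrase"

    return "broad"
```

With these rules, "red shoe" against the query "red shoes" lands in the plurality mismatch bucket, while "red shoes" against "buy red shoes online" is a phrase match.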

Exact Match

The gold standard.  When a keyword matches the search query exactly, it generates a significantly higher sales per click (SPC) than average.  For this sample, exactly matched keywords had a 27% higher SPC than the overall average.

When keyword and query are perfectly aligned, we have a much better grasp of the user's intent -- we added the keyword, after all -- and we have tailored our copy and chosen our landing pages specifically for that search phrase.  Exact matches account for around 25-30% of all traffic and serve here as the barometer for the performance of the other segments.

Plural Keyword to Singular Query and Vice Versa

Whether the keyword and search query share the same plurality sounds trivial, but the cases where they don't match have the greatest disparity from exact match performance of any of the segments above.  Plurality mismatches have a 40-50% lower sales per click than exact matches.  This is not a group we want Google to treat as equivalent to an exact match.

Why do these mismatches perform so poorly?  Generating both the singular and plural form of each keyword we want to run is one of the easiest and most obvious steps in building out a term list.  If we do not have one form or the other in the account -- leading to a non-exact match -- chances are there is a good reason.  Either we added it and it performed poorly, or our experience suggested adding it was a bad idea to begin with.  This happens when one form -- usually the singular -- clearly has a far lower commercial intent.

If this is the case, shouldn't we be blocking any unwanted forms entirely with negatives?  Generally, yes, and the traffic levels we see here are minuscule as a result: less than 0.3% of click traffic on average.

Near Misspellings

It is beyond my abilities to reverse engineer the algorithm Google uses to determine which misspellings are close variants of the exact or phrase match terms we're running, but I adopted a proxy that I believe is reasonable.  The Soundex algorithm, which is, conveniently, a built-in SQL function, treats two strings as equivalent if they produce the same code after retaining the first letter, dropping subsequent vowels, encoding the remaining consonants, and collapsing repeated codes, among a few other steps.
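For illustration, here is a minimal Python version of the classic Soundex code -- the same basic scheme as the SQL built-in, though I've omitted the special handling of "h" and "w" between like-coded consonants, so edge cases can differ from a given database's implementation:

```python
def soundex(word: str) -> str:
    """Classic four-character Soundex code (simplified: the special
    treatment of 'h' and 'w' between like-coded consonants is omitted)."""
    codes = {
        **dict.fromkeys("bfpv", "1"),
        **dict.fromkeys("cgjkqsxz", "2"),
        **dict.fromkeys("dt", "3"),
        "l": "4",
        **dict.fromkeys("mn", "5"),
        "r": "6",
    }
    word = "".join(ch for ch in word.lower() if ch.isalpha())
    if not word:
        return ""
    first = word[0].upper()
    # Encode each remaining letter, skipping vowels (empty codes) and
    # collapsing adjacent duplicate codes; vowels reset the duplicate check.
    result = []
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            result.append(code)
        prev = code
    # Pad with zeros and truncate to the standard four characters.
    return (first + "".join(result) + "000")[:4]
```

Under this encoding, "shoes" and the misspelling "shose" both reduce to S200, which is exactly the kind of pairing I counted as a near misspelling.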

The result is that my definition of a near misspelling is probably a little broader than Google's, but both groups are dominated by an incorrect vowel here or there and small punctuation mismatches.  Near misspellings have an SPC about 10% lower than exactly matched keywords -- much better than the plurality mismatch group.

That's likely because it is far more difficult to generate, and unwieldy to maintain, keywords for every remotely likely misspelling.  In other words, if we do not have these keywords in our account, it's probably not for cause, as it is with the singular/plural mismatches.  Near misspellings account for about 3% of traffic.

Other Phrase and Broad

What remains of our data set falls into the phrase (the keyword is found in the search query in its entirety) and broad (everything else) categories.  Not surprisingly, phrase matches, which are closer than broad matches, have a higher average sales per click than broad.  Phrase SPC is about 20% lower than exact, while broad SPC is 30% lower.  Phrase match accounts for about 10% of traffic in this sample, while broad accounts for about 60%.

A Matter of Principle and Practicality

Google claims that testers adopting the new matching behavior increased search clicks by 3%.  Our own testing experience suggests that figure is probably high, but not unreasonable for some.  It's a small number to get worked up about either way, but advertisers with well-built programs should still opt out of the change as their default tactic.  Segmenting traffic more and more finely, while still accurately predicting its value, is the name of the game for paid search bidding.

Why would we take a step in the opposite direction?  Even the best performing close variant segments will dilute the performance of our existing exact match keywords, and we'd also be paying between 10% and 50% more for those close variant queries than they're worth.  Advertisers will be better served by continuing to use broad match, normal phrase match, and broad match modifiers -- which already trigger for close variants -- to capture this query segment at a lower price.
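As a back-of-the-envelope illustration of that pricing gap, value-based bidding would scale each segment's bid by its relative SPC.  The ratios below are the rough sample averages from this post, not universal constants:

```python
# Relative sales per click for each segment, indexed to exact match = 1.00.
# These are the approximate sample averages discussed above; treat them
# as illustrative, not as fixed industry-wide figures.
SPC_VS_EXACT = {
    "exact": 1.00,
    "near misspelling": 0.90,    # ~10% lower than exact
    "phrase": 0.80,              # ~20% lower
    "broad": 0.70,               # ~30% lower
    "plurality mismatch": 0.55,  # ~40-50% lower
}

def scaled_bid(exact_bid: float, segment: str) -> float:
    """Scale a keyword's exact-match bid by the segment's relative value.

    Bidding close variants at the full exact-match price overpays
    relative to the traffic's actual value.
    """
    return round(exact_bid * SPC_VS_EXACT[segment], 2)
```

So if an exactly matched click is worth a $1.00 bid, the same click reached through a plurality mismatch is only worth about $0.55 under these ratios -- which is why lumping the two together at one price is a losing trade.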

We also need to remain vigilant about scouring our query logs to discover new keywords to add, as well as new negatives.  And while segmentation is a core element of paid search, so is testing.  We shouldn't completely slam the door to this change, just as we shouldn't blithely accept it.
