
Auction Dynamics: How Much Do You Matter to Google?

On my recent SEL post on Landing Page Quality Score, a commenter argued that Google won't really penalize low quality content publishers because they spend too much money on AdWords. My initial reaction was: "I respectfully disagree." Not only does Google have an interest in protecting the quality of the user experience; each advertiser also matters less to Google than we might assume.

I've heard many frustrated advertisers over the years say something to the effect of: "I buy $X million in advertising from Google, they should [fill in the blank: 'take my calls', 'be at my beck and call', 'wash my car', etc.]" The thing is: while it may be true that you write Google and Bing checks for $4 million a year, that doesn't actually mean they'd lose $4 million in revenue if you decided to stop. Let's take a look at some auctions to find out why.

Build Your Own Auction

Spreadsheets are a great tool for exploring different scenarios, and I'm particularly fond of random number generation as a mechanism for testing. In Excel, RAND() returns a random value between 0 and 1, and RANDBETWEEN(bottom, top) returns an integer between the two ends of a specified range. A little creativity allows you to create any conditions you want. Say, for example, you'd like to create a case where the bids from N competitors are all between 50 and 75 cents. An elegant solution is: Bid = RANDBETWEEN(50,75)/100. Less elegant, but equally valid: Bid = 0.5 + RAND()/4.

Build a table of values for Bid and Quality Score using randomizing functions. You'll need at least 14 rows of data to get a clear view of a 12-bidder auction, for reasons that will become apparent later. Bid and Quality Score values allow you to calculate AdRank: AdRank = Bid * QS. Taking a guess at the range of likely Quality Scores produces AdRanks for each competitor in the mock auction. Sadly, you'll need to copy and paste values at this point.
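If you'd rather skip the copy-and-paste problem entirely, the same mock auction table can be built in a few lines of Python. This is just a sketch of the spreadsheet approach above; `build_auction` and its parameters are names I've made up for illustration, and the Quality Score range is a guess, just as it is in the spreadsheet.

```python
import random

def build_auction(n, bid_lo_cents, bid_hi_cents, qs_lo, qs_hi, seed=None):
    """Mock auction table, one row per bidder, sorted by AdRank descending.
    Bids are drawn in cents (RANDBETWEEN-style) and converted to dollars."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        bid = rng.randint(bid_lo_cents, bid_hi_cents) / 100  # bid in dollars
        qs = rng.randint(qs_lo, qs_hi)                       # guessed Quality Score
        rows.append({"bid": bid, "qs": qs, "adrank": bid * qs})
    rows.sort(key=lambda r: r["adrank"], reverse=True)       # no re-randomizing on sort
    return rows

# 14 bidders, bids between 50 and 75 cents, QS between 3 and 10
auction = build_auction(14, 50, 75, 3, 10, seed=1)
for pos, row in enumerate(auction, 1):
    print(pos, row)
```

Unlike the spreadsheet, the random draws here are made once and then sorted, so there's no need to paste values before ranking.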
In order to analyze the auction you need to sort this table in descending order of AdRank, and sorting doesn't work with active randomizing functions, because each time you make a change the random function generates a new value. If it regenerated on the fly but before the sort it would be fine, but it does it backwards, so the results come back unsorted. (Anyone at Microsoft reading this? A little help, please?)

Sorted by AdRank descending, you're now ready to calculate the actual CPCs paid by the advertiser in each slot. For position 1, the actual CPC paid is the AdRank of the ad in position 2 divided by the Quality Score of the position 1 ad; algebraically: CPC(1) = AdRank(2) / QS(1), and in general CPC(i) = AdRank(i+1) / QS(i). See the Auction Dynamics Live Toy if you want to play with one.

When an advertiser drops out of the auction, the CPCs paid by the position above the departing advertiser's slot and all the positions below it change, usually declining. (For reasons detailed in the 'Caveats' section below, Google's revenue per impression almost always falls for each impacted position, even when the CPCs rise.) Next we need to calculate what the actual CPC would be for each slot in the auction if that advertiser pulled out. Essentially, removing any individual advertiser from the auction impacts Google's AdWords revenue in two respects:
  1. The advertiser in the slot above on the page pays a different (generally lower) CPC, and
  2. Each advertiser below the vacated slot moves up one position on that page, changing (usually lowering) the CPC collected from clicks on each of those slots.
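Both effects fall out of the pricing rule above (each slot pays the AdRank of the slot below it, divided by its own QS). Here's a minimal sketch with a fixed toy auction; `actual_cpcs` and `without` are hypothetical helper names, and I'm ignoring real-world details like the $0.01 increment AdWords adds to actual CPCs.

```python
def actual_cpcs(ranked, slots=5):
    """ranked: list of (bid, qs) tuples sorted by AdRank (bid * qs) descending.
    The ad in position i pays AdRank of position i+1 divided by its own QS."""
    cpcs = []
    for i in range(min(slots, len(ranked) - 1)):
        next_adrank = ranked[i + 1][0] * ranked[i + 1][1]
        cpcs.append(next_adrank / ranked[i][1])
    return cpcs

def without(ranked, pos):
    """The same auction with the bidder in 1-indexed position `pos` removed."""
    return ranked[:pos - 1] + ranked[pos:]

# A fixed six-bidder toy auction, already sorted by AdRank descending:
# AdRanks are 16.0, 14.4, 14.0, 12.0, 11.2, 9.8.
ranked = [(2.00, 8), (1.80, 8), (2.00, 7), (1.50, 8), (1.60, 7), (1.40, 7)]

before = actual_cpcs(ranked)             # CPCs with everyone present
after = actual_cpcs(without(ranked, 1))  # CPCs once position 1 departs
```

In this toy data, every slot's CPC is the same or lower after the top bidder leaves, which is the "usually declining" pattern described above. It also shows why the spreadsheet needs extra rows: the last visible slot is priced off the AdRank of the bidder just below it.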
For example, when bidder number 5 departs the auction -- all else remaining constant -- the actual CPC paid by position 4 changes (usually downward), and Google collects (almost invariably) less revenue per impression from each slot below position 4 on the page.

So How Much Does Google Actually Lose?

It's devilishly difficult to think through, and while this analysis is assuredly wrong-headed, I'm hoping it's at least directionally valid. Because the ads at the top of the page generate higher CTR due to their position, Google's revenue is heavily weighted towards the top of the page. Because the degree of variance in CTR is itself terrifically variable, figuring out the revenue loss for an advertiser dropping out of position 5 is much harder than figuring out what happens in the "worst" case scenario for Google: the position 1 bidder leaving the auction. I decided to focus on that worst case.

My first observation was: if the auction is densely packed -- that is, the AdRanks are very close together -- the percentage loss per click Google experiences is surprisingly small. At first I thought: "probably true of super-competitive keywords, but not others." In fact, playing around with densely packed and then very loosely packed auctions turned out to be fascinating. First I ran a bunch of 14-player auctions with very closely packed QS and bids. Sure enough, the CPC percentage lost at each impacted position averaged 2%. Then I ran some more 14-player auctions with QS ranging randomly from 3 to 10 and bids ranging from $1 to $10. This produced a median per-position loss of around 14% -- big difference!
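The tight-pack versus loose-pack comparison is easy to reproduce. The sketch below is my own reconstruction of that kind of experiment, not the author's actual spreadsheet: `run_case` is a hypothetical name, bids are drawn uniformly rather than in whole cents, and the statistic reported is the median per-position fractional CPC drop when position 1 departs.

```python
import random
import statistics

def run_case(n_bidders, bid_rng, qs_rng, slots=12, trials=200, seed=0):
    """Median per-position fractional CPC drop when the position 1 bidder
    departs, across `trials` randomized auctions."""
    rng = random.Random(seed)
    per_trial = []
    for _ in range(trials):
        ranked = sorted(
            [(rng.uniform(*bid_rng), rng.randint(*qs_rng))
             for _ in range(n_bidders)],
            key=lambda r: r[0] * r[1], reverse=True)

        def cpcs(rows):
            # position i pays AdRank of position i+1 divided by its own QS
            return [rows[i + 1][0] * rows[i + 1][1] / rows[i][1]
                    for i in range(min(slots, len(rows) - 1))]

        before, after = cpcs(ranked), cpcs(ranked[1:])
        k = min(len(before), len(after))
        per_trial.append(statistics.median(
            (before[i] - after[i]) / before[i] for i in range(k)))
    return statistics.median(per_trial)

loose14 = run_case(14, (1, 10), (3, 10))    # loosely packed, 14 players
tight14 = run_case(14, (1, 2), (5, 8))      # tightly packed, 14 players
loose100 = run_case(100, (1, 10), (3, 10))  # hundreds of players, top 12 shown
```

Under these assumptions, the loosely packed 14-player case shows a much larger per-position loss than either the tightly packed case or the 100-player case, which matches the pattern described above.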
Bearing in mind that we're talking about losing between 2% and 14% of the revenue per click across the whole page, even at the high end of 14%, I suspect the amount paid by the position 1 bidder per impression is a much greater share than 14% of the whole page; hence the departing position 1 advertiser saves a great deal more money than Google loses, because everyone else spends more.

My second thought was: ALL the auctions are probably densely packed. The thing is, when we go from 14-player auctions to auctions with hundreds of players, regardless of whether the bid and QS ranges are large or small, you end up with 10 to 12 very tightly bunched advertisers winning the first-page visibility. This is demonstrably true in this toy model.

It may be the case in many auctions that there are a few companies willing and able -- profitably or unprofitably -- to bid into the stratosphere for top positioning, while most others set bids at significantly lower levels. In those cases the gap between the big dogs and everyone else creates an interesting dynamic as well, with the last-placed high-ticket bidder being significantly more important to the engines than the others.

The problem with the densely-packed auction theory is that if AdRanks really are tightly bunched, a relatively small change in bid should result in a potentially very large change in position and/or auction participation, and I'm not sure that jibes with our experience.

As we described last year, in many respects there are two Google-created robot bidders in each auction. One takes the form of a minimum AdRank to appear at all -- a minimum threshold -- and the other takes the form of a minimum AdRank to appear in 'promoted positions' above the organic listings. Google can set these threshold values any way it wants to, creating an opportunity to essentially bleed the last advertiser above the threshold to extract the maximum CPC. I don't know that they do this, but they certainly could.
I left the robot bidders out of this model because I don't know how they play... and this already hurt my head as it was.

Results of Playing with Simulated Auctions

I ran a number of simulations for each case to see how much variance was produced by random noise, and the answer seemed to be: not much. I grabbed a representative auction for each of three cases.

Case 1: Loosely packed, 14-player auction. Bids ranged from $1 to $10 and QS from 3 to 10. We expect this situation to maximize the importance of an individual player, and it does.

Case 2: Tightly packed, 14-player auction. Bids ranged from $1 to $2 and QS from 5 to 8. We expect much smaller impact from position 1 departing, and that's what we see.

Case 3: Loosely packed, 100-player auction. Bids ranged from $1 to $10 and QS from 3 to 10, but since only the top 12 AdRanks get to play, the results are similar to the tight-pack model.

Caveats: Almost too many to list.
  • There are so many phony assumptions here it's scary, but the issue of CTR is paramount among them. Different Quality Scores mean different CTRs for two different ads in the same position; on top of that, there is a huge positional dependence for CTR (particularly when we throw promoted placement above the organic listings into the picture). The right metric for us to look at is how Google's revenue per impression changes when the top bidder leaves the auction, and that's hard to model without heaping assumptions on top of assumptions. One has to assume, given the primacy of CTR in determining QS, that with very few exceptions, Google's revenue per impression normalized for position lines up very closely with AdRank.
  • We're looking at a single auction in isolation, when in fact there is also the question of qualifying for more or fewer auctions based on AdRank, the specific query, geography, personalization due to past behavior, etc.
  • Randomized AdRanks may be a really poor proxy for what actually happens in the wild. If I were a betting man, I'd wager there is more variance between bids than between Quality Scores, which impacts the math a bit as well.
Conclusions

With those caveats understood, it seems clear that how tightly packed the auctions are matters a great deal to Google's revenue stream; but even in a very loosely packed small auction, it's likely that even the top advertiser is actually worth substantially less to Google than what they pay directly. Google might lose 5 - 10% of the revenue per impression, but I have to believe the top advertiser directly pays a good bit more than that fraction.

It would be different if everyone had fixed budgets for search. If every advertiser committed to spending some fixed amount each month and spent it regardless of ROI considerations (bidding more if they had to in order to spend X), then this whole analysis goes out the window. However, folks with easily measurable and quickly realized return on investment generally don't set budgets for search, and those who do probably shouldn't. Those who bid rationally will spend more when the marketplace becomes more favorable (higher position, more traffic, same ROI), and if there are enough of those in every auction then the analysis above may hold some water.

George

PS: I owe everyone an apology for this post on several grounds: 1) it's really long; 2) it's confusing and complicated; and 3) it's probably totally off base. However, I had a BLAST thinking through it :-)