My latest post from Search Engine Land, if you missed it there.
The time delay between marketing exposure and marketing success creates tremendous opportunity for consternation for all paid search managers, but particularly for enterprise programs. Let's look at three ways that time can distort one's perspective, and consider a solution that can be helpful.

In most paid search reporting platforms, the default setting -- often the only setting -- creates a disconnect between conversion events and the marketing touches that drove them. Impressions, clicks, and costs are tied to the day on which they occurred. Conversion events are tied to the day on which they occurred. But interested customers don't always convert on the first visit, or even the first day after that visit, which means some fraction of the conversions on any given day was driven by marketing touches that occurred on earlier days.

Day-Parting
RKG has argued for years that an important element of correct day-parting
calculation is to tie the conversion events to the time of the click, not the time of the conversion. Day-parting allows sophisticated advertisers to bid more for higher-quality traffic and avoid overpaying for lower-quality traffic by measuring the impact the day of week and time of day have on traffic value. This can only be done correctly by associating the conversion with the correct click-through. Since you bid for clicks, the right way to think about this is: of the clicks occurring between 9 AM and 10 AM, what fraction converted? Multi-touch interactions within paid search and across channels add a layer of complexity, but rarely alter the conclusions of a carefully done analysis, as those effects are generally small and normally distributed. Creating time-zone-targeted campaigns may or may not be worth the additional management costs, but data should drive that decision.

The dissociated view -- how many clicks happened between 9 AM and 10 AM and how many orders happened between 9 AM and 10 AM -- creates a somewhat different picture. Here’s an example of the conversion rate by hour measured three different ways: last touch, first touch and the dissociated view.
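The difference between the two calculations can be sketched with a few made-up clicks (all hours and conversion times below are hypothetical):

```python
from collections import defaultdict

# Hypothetical click log: (click_hour, conversion_hour or None).
# A 9 AM click that converts at 2 PM should count toward 9 AM's rate.
clicks = [
    (9, 14), (9, None), (9, 9),        # three 9 AM clicks, two convert
    (14, 14), (14, None), (14, None),  # three 2 PM clicks, one converts
]

def rate_by_click_hour(clicks):
    """Conversion rate tied to the hour of the click (the correct view)."""
    totals, convs = defaultdict(int), defaultdict(int)
    for click_hr, conv_hr in clicks:
        totals[click_hr] += 1
        if conv_hr is not None:
            convs[click_hr] += 1
    return {hr: convs[hr] / totals[hr] for hr in totals}

def rate_dissociated(clicks):
    """The dissociated view: conversions credited to the hour in which
    they occurred, divided by the clicks that happened in that hour."""
    totals, convs = defaultdict(int), defaultdict(int)
    for click_hr, conv_hr in clicks:
        totals[click_hr] += 1
        if conv_hr is not None:
            convs[conv_hr] += 1
    return {hr: convs[hr] / totals[hr] for hr in totals}

print(rate_by_click_hour(clicks))  # 9 AM converts at 2/3, 2 PM at 1/3
print(rate_dissociated(clicks))    # 9 AM appears to convert at 1/3, 2 PM at 2/3
```

Note how the single lagged conversion shifts apparent value from the 9 AM hour to the 2 PM hour in the dissociated view, which is exactly the distortion that leads to mis-set day-parting bids.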
Adding Up/Down Bars highlights areas in which the dissociated view would lead to materially underbidding (white bars) and overbidding (black bars).

Difficulty reading tests and new launches
The lag effect can also make it difficult to read the results of new campaign launches. Let’s say, for a given advertiser in financial services, that half of the conversions happen within 24 hours of the click and that the overall 21-day distribution looks like this:
Further, let’s say that the advertiser is willing to spend $50 to attract a qualified lead, and let's assume that the brilliant Paid Search manager has this program dialed into the target efficiency from day 1. Even with this perfectly optimized launch the program will appear to be significantly underwater for the entire cookie window simply because of the lag between click and conversion.
The actual CPL is $50 each day, but it doesn't appear so
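This distortion is easy to simulate. The numbers below are made up for illustration: a $1,000 daily budget, a true $50 CPL from day one, and an invented 21-day lag curve with half the conversions arriving in the first 24 hours:

```python
# Hypothetical launch: $1,000/day spend at a true CPL of $50 means each
# click-day generates 20 leads, but those leads convert over 21 days.
daily_spend = 1000.0
true_cpl = 50.0
leads_per_day = daily_spend / true_cpl  # 20 leads attributable to each day's clicks

# Illustrative lag curve: fraction of a day's eventual leads converting
# 0, 1, 2, ... days after the click (50% within the first 24 hours).
lag_curve = [0.50, 0.12, 0.08, 0.06, 0.05, 0.04, 0.03, 0.02, 0.02, 0.02,
             0.01, 0.01, 0.01, 0.005, 0.005, 0.005, 0.005,
             0.0025, 0.0025, 0.0025, 0.0025]
assert abs(sum(lag_curve) - 1.0) < 1e-6

for day in range(1, 22):
    # Conversions *observed* on this calendar day come from every prior
    # click-day, each contributing its slice of the lag curve.
    observed = sum(leads_per_day * lag_curve[lag] for lag in range(day))
    apparent_cpl = daily_spend / observed
    print(f"day {day:2d}: apparent CPL ${apparent_cpl:6.2f}")
```

With these assumed numbers the program appears to run a $100 CPL on day one and only settles to the true $50 once the full 21-day window has filled in.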
Indeed, the dissociated view (tying conversions to the time of conversion) only begins to show the real ROI of the new campaign after 21 days. That’s fine, as long as the advertiser is aware of the lag and doesn’t react too quickly to the apparent underperformance.

Difficulty dealing with major events
More common in ecommerce than other verticals: a big event, whether promotional or seasonal, often changes the value
of traffic, not just the quantity of it. Absent accurate, detailed historical performance data, intra-day bidding reactions can be tricky because we can’t see the “all-in” conversion rate of the traffic in real time.

Solution...er...A Useful Analytic Approach to a Solution
An excellent “hack” solution to this is to understand what normal conversion rates appear to be over shorter windows of time, like a day, or even an hour. Determining what fraction of eventual conversions takes place in the first hour (or on the first visit) allows you to take a pretty good guess at the “eventual” conversion rate. The thinking is that if an event is expected to create a change in traffic value, and the “one-hour” conversion rate is measured to be X% higher than the normal rate, we can assume that the conversion rate over the full attribution window will also be ~X% higher. Essentially, what we're doing is assuming that the shape of the conversion curve over time will be the same as it has been historically, and extrapolating early performance to project the eventual performance.

This same technique can be useful in estimating lead valuations and establishing LTV calculations. In long sales-cycle B2B and B2C businesses, it may take a year to get a clear picture of the average lead value from a given pool of leads. Similarly, many advertisers are willing to take a loss to acquire customers based on the promise of lifetime value. Advertisers might lose money to acquire the customer even after the first “sale” because they believe they will recoup that loss and make a profit off of future business from the same customer. Marketers look at lifetime value metrics historically to gauge how much they can and should be willing to lose to attract a new customer. But how do they know that the one-year and two-year value of customers historically will be predictive of how these new customers from new sources will behave? How do we know that those new sales leads will convert over the long haul at the same rate as others we've received through different channels? Well, we don't.
But what we can do to get a reasonably good sense of the matter is look at the typical one-month conversion rate of leads, and if the new leads show a similar conversion rate in the first month since capture, then it isn't crazy to assume that they will turn out to be of similar quality in the long run. If the two-year customer value of a new customer is typically $200, it may be that $40 of that typically comes in the first month after the new customer came on board. So, with the new channel, we can't see the whole two-year value for...um...two years, but if the one-month value is ~$40 we might be reasonably confident that they are customers of equal value to the historic trends.

This isn't an exact science. The nature of the event can change the click-to-conversion pattern, too, perhaps encouraging a larger fraction of the eventual buyers to "act now". It could be that a one-hour conversion rate increase of X% leads to an eventual conversion rate increase of something less than X%. Historical data can teach us what types of events shift the curve, and by how much, and what types do not impact the click-to-conversion pattern materially. Similarly, new leads may convert at a different rate than normal, and you won't know for sure until much later. However, guessing that the historical patterns will hold up is almost always a reasonable starting point, and ignoring the challenge posed by lag time can lead to disaster.
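Both projections -- scaling a short-window conversion rate by the observed lift, and scaling one-month lead value out to two years -- boil down to the same extrapolation. A minimal sketch; the $200/$40 figures come from the discussion above, while the conversion rates and the new channel's first-month value are invented for illustration:

```python
def project_eventual(observed_short_window, normal_short_window, normal_eventual):
    """Scale the eventual metric by the lift seen in the short window,
    assuming the historical shape of the conversion curve holds."""
    lift = observed_short_window / normal_short_window
    return normal_eventual * lift

# Event example (hypothetical rates): normally 1% of clicks convert within an
# hour and 4% convert over the full attribution window. During a big promotion
# the one-hour rate jumps to 1.5% -- a 50% lift -- so we project the eventual
# conversion rate at ~6%, i.e. ~50% above normal.
projected_rate = project_eventual(0.015, 0.01, 0.04)
print(f"projected eventual conversion rate: {projected_rate:.1%}")

# Lead-value example, using the text's figures: two-year customer value is
# typically $200, with $40 of it arriving in the first month. A new channel
# showing ~$40 of first-month value projects to a similar two-year value.
projected_ltv = project_eventual(40.0, 40.0, 200.0)
print(f"projected two-year value: ${projected_ltv:.0f}")
```

The same caveat from the text applies to the code: the scaling is only as good as the assumption that the new traffic or new channel follows the historical curve.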