
Scenario Planner – Cookieless Cross-Channel Measurement and Forecasting

Industry Challenges

Privacy initiatives of the last few years, such as GDPR, browser features like ITP and ETP, and the walled gardens of the biggest martech players, have made traditional, user-journey-based attribution incomplete. These changes have left very little user-level impression data available, and even less ability to join that data with other user interactions to create complete user journeys. Without complete journeys, no matter how sophisticated your attribution models are, impression-based channels such as Display and Social will likely be undervalued.

Attribution model missing display and social impressions

What Can Scenario Planner Do?

Scenario Planner provides cross-channel attributed conversions and the ability to forecast KPIs under different budget allocation scenarios. It can also surface insights such as a channel’s diminishing-returns curve or its lagged effect. Scenario Planner can be automated so that you always have up-to-date insights, and we visualise the results in our GCP web app, which clients can access.

Scenario Planner interface screenshots (1–3)

What Does Correlation-Based Mean?

All media measurement methods try to understand the impact of media on total KPIs. For example, data-driven attribution models do this by comparing the conversion rates of journeys with or without a particular channel to determine its value.

Scenario Planner does this by looking for correlations between media spend on the various channels and the KPI. If the KPI increases every time spend on a channel increases, and decreases every time it decreases, then the channel has a positive correlation with the KPI. Different channels will have different correlation strengths with the KPI; the channels with the strongest correlations are the ones with the biggest impact on the bottom line, and these will receive the most attributed conversions.
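As a minimal illustration of the idea (not the actual Scenario Planner code), the correlation between a channel’s daily spend and the KPI can be measured with the Pearson coefficient. The data below is invented:

```python
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation coefficient between two equally long daily series.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Toy daily data: spend on two channels and the KPI (e.g. conversions).
search_spend  = [100, 120, 90, 150, 130, 110, 160]
display_spend = [50, 55, 60, 52, 58, 54, 61]
conversions   = [210, 240, 190, 300, 265, 225, 320]

print(pearson(search_spend, conversions))   # strong positive correlation
print(pearson(display_spend, conversions))  # much weaker correlation
```

In this toy example Paid Search moves closely with conversions while Display barely does, so Paid Search would receive the larger share of attributed conversions.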

To calculate these correlations, we use two machine learning packages (Prophet and XGBoost) in a proprietary combination. Understanding the relationship between each channel and the KPI allows us not only to calculate attributed conversions but also to forecast the KPI under different budget allocation scenarios, which can help brands optimise their media split.
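The exact combination is proprietary, but a common pattern for pairing a time-series model with a gradient-boosted one is: fit a baseline for trend and seasonality first, then model what the baseline could not explain using the media variables. The sketch below illustrates that two-stage pattern with deliberately simple stand-ins (weekday means instead of Prophet, a least-squares slope instead of XGBoost) on invented data:

```python
from statistics import mean

def fit_slope(xs, ys):
    # Ordinary least-squares slope and intercept of ys on xs.
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

# Hypothetical daily data: two weeks of a KPI with a weekly pattern
# plus an uplift from media spend on some days.
spend = [0, 0, 50, 0, 0, 0, 100,  0, 40, 0, 60, 0, 80, 0]
kpi   = [100, 110, 170, 130, 120, 110, 200,  100, 150, 120, 190, 120, 190, 100]

# Stage 1 (stand-in for Prophet): a seasonal baseline -- here simply the
# mean KPI for each weekday across the two weeks.
weekday_mean = [mean(kpi[d::7]) for d in range(7)]
baseline = [weekday_mean[i % 7] for i in range(len(kpi))]

# Stage 2 (stand-in for XGBoost): model the residuals -- the part of the
# KPI the baseline could not explain -- using the media variable.
residuals = [y - b for y, b in zip(kpi, baseline)]
slope, intercept = fit_slope(spend, residuals)

# Combined prediction: baseline plus estimated media effect.
pred = [b + intercept + slope * s for b, s in zip(baseline, spend)]

mae_baseline = mean(abs(r) for r in residuals)
mae_combined = mean(abs(y - p) for y, p in zip(kpi, pred))
```

Adding the media stage on top of the seasonal baseline reduces the prediction error, which is the same intuition behind combining a seasonality model with a gradient-boosted media model.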


What Goes into the Model?

As a bare minimum, we need daily spend from your digital channels and the daily KPI for the last two years. We have achieved accurate forecasts and attribution with just the three common digital channels (Paid Search, Display, Social) for ecommerce clients, but the more marketing activity we can include, the better. So, if we can also get daily spend, impressions or reach for channels such as TV, OOH, Print, Affiliates, etc., the model has a better chance of understanding what drives the KPI.

We also include external factors such as seasonality, promotions, stock availability, product pricing, etc. We try to uncover as many of these factors as possible for each client during the discovery process, because without them we may overestimate the impact of media.

We also apply transformations to the media variables, because we expect each channel to have a different lagged effect depending on where it sits in the funnel, a different diminishing-returns curve, and different halo effects on other channels. In other words, we calculate how many days a channel continues to have an impact after exposure, at what level of spend it becomes inefficient, and how different channels affect one another. Calculating these parameters not only improves model accuracy; they can also be surfaced as insights for optimising a channel’s targeting and messaging.
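Two of the most common transformations of this kind are an adstock (carry-over) transform for the lagged effect and a saturation curve for diminishing returns. A minimal sketch, assuming a geometric adstock and a simple half-saturation curve (the actual transformations in Scenario Planner may differ):

```python
def adstock(spend, decay):
    # Geometric adstock: each day's effective media pressure carries over
    # a fraction `decay` of the previous day's pressure (the lagged effect).
    carried = 0.0
    out = []
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return out

def saturate(pressure, half_sat):
    # Simple diminishing-returns curve: response grows quickly at first,
    # then flattens; `half_sat` is the pressure giving half the maximum effect.
    return [p / (p + half_sat) for p in pressure]

spend = [100, 0, 0, 50, 0, 0, 0]
pressure = adstock(spend, decay=0.5)   # a burst of spend keeps working for days
response = saturate(pressure, half_sat=50)
```

The fitted `decay` tells you how many days a channel keeps having an impact, and the shape of the saturation curve tells you at what spend level the channel becomes inefficient — exactly the insights mentioned above.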

Channel effects and lags

Is Scenario Planner an Econometric Model?

They are similar, but there are some key differences. The biggest is Scenario Planner’s focus on providing accurate forecasts, whereas most Econometric models focus on backwards-looking channel attribution only. Since the main aim of Scenario Planner is to forecast, we tend to include only external factors that can themselves be forecast or whose future values we have a way of predicting. For example, we would include promotions, because most clients have a promotion calendar: they know a year in advance when they’ll run sales or discounts and how big they will be. But something like competitor pricing, which a backwards-looking Econometric model would include, we wouldn’t, because we have no way of knowing how competitors will change their pricing in the future.

We also focus on providing granular insights for digital channels, so we designed Scenario Planner to handle relatively small channel groups as well. Unlike most Econometric models, which bucket all digital activity into a single channel or at best split it into the main mediums (Paid Search, Display and Social), Scenario Planner can split it into more granular groupings such as Paid Search Brand, Paid Search Generic, Display Prospecting, Display Remarketing, Social Prospecting, Social Remarketing, etc. We can typically include channels as small as 2-5% of total media spend.

The statistical methodology we use also differs from most Econometric models. Econometric models tend to use variations of Regression modelling that have been used for decades in media measurement, whereas we use Gradient Boosting, a machine learning technique that requires more computational power and only became widely accessible in the last few years.

Gradient Boosting handles cross-correlation between media spends better, so variable selection for the model is quicker, accelerating the whole process.

How Do We Know if the Model is Accurate?

There are three ways we validate our models. The first happens during the modelling process itself: we withhold the last 2-3 months of data (the validation set) from the model and train on the rest (the training set). Using the training set, various candidate models are created, and the one that predicts the validation set best, without ever having seen it, is chosen as the winner.
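Because this is time-series data, the split has to be chronological rather than random. A minimal sketch of the selection step, with invented data and deliberately simple stand-in "candidates" (in practice each candidate would be a full model trained only on the training window):

```python
def mape(actual, forecast):
    # Mean absolute percentage error, used to compare candidate models.
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Two years of daily KPI data; the last ~90 days are withheld as the
# validation set and never shown to the candidates during training.
kpi = [100 + 0.1 * t for t in range(730)]
train, validation = kpi[:-90], kpi[-90:]

# Hypothetical candidate forecasts for the validation window.
candidates = {
    "flat":  [train[-1]] * 90,                                # repeat last value
    "trend": [train[-1] + 0.1 * (i + 1) for i in range(90)],  # continue the trend
}
winner = min(candidates, key=lambda name: mape(validation, candidates[name]))
```

Whichever candidate forecasts the held-out window with the lowest error wins; here the trend-following candidate beats the naive one.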

The second way is to create a forecast for the next month, wait a month, and then check whether the forecast was accurate. We aim for model accuracies above 80% in order to make sound budget allocation decisions; some of our best models have reached 95-97% accuracy. Unfortunately, there is no way of knowing whether we can achieve this accuracy for a brand before we start modelling, so we usually build a proof-of-concept model for clients before we productionise and automate the process.
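Forecast accuracy can be expressed in several ways; one common convention, assumed here for illustration, is 100% minus the mean absolute percentage error:

```python
def forecast_accuracy(actual, forecast):
    # Accuracy as 100% minus the mean absolute percentage error (MAPE) --
    # one common way of expressing "the model was X% accurate".
    errors = [abs((a - f) / a) for a, f in zip(actual, forecast)]
    return 100 * (1 - sum(errors) / len(errors))

actual   = [200, 180, 220, 210, 190]   # observed daily conversions (invented)
forecast = [190, 185, 215, 220, 180]   # what the model predicted a month earlier
print(forecast_accuracy(actual, forecast))
```

With these invented numbers the forecast comes out at roughly 96% accurate, comfortably above the 80% threshold for sound budget decisions.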

The third way validates the attributed conversions by carrying out either conversion uplift tests or geo incrementality tests. No measurement solution can truly replace testing, as it is the most accurate way to measure a channel’s incremental impact. However, tests can’t be carried out concurrently for multiple channels, and each test can take 4-6 weeks, so they’re not great for ongoing measurement. They are, however, great for validating measurement methods and for calibrating them if we see any differences between the test results and Scenario Planner’s.

Get in Touch

If you think cross-channel measurement and forecasting is what your organisation needs, don’t hesitate to get in touch. We’re happy to take a look at your media and tech set-up to see how Scenario Planner could be customised for you.
