Analytics couldn’t be hotter. IBM Watson ads are all over the place. Data science is the new hot job. Silicon Valley is flooded with engineers. But is it possible that the Big Data fad has killed analytic curiosity? I believe that, in some sense, the rise of Big Data has actually diminished analytic rigor and depth within quantitative marketing applications. DMPs, DSPs, publishers, optimization platforms, ad exchanges – all of these have introduced analytics to improve digital targeting and personalization. Because of this, marketers without a quantitative background have been able to fully embrace analytics through the “technologization” of analytics (yes, I just made up a word, but hey, I’m in marketing). By that, I mean analytics has shifted away from a person actually analyzing data toward something automated through integration with technology.
This transformation has resulted in the widespread application of analytics and data within marketing. If you read AdExchanger these days, you’d be hard-pressed to find an article that doesn’t talk about the use of data in marketing. However, is it possible that we have come to value the quantity of analytic applications over their quality? From my vantage point, I see exactly that happening.

Twenty years ago, the heaviest application of analytics within marketing was in direct marketing. An example was the behemoth engines the credit card companies built to mine credit bureau data and target new card applicants via direct mail. Companies like Capital One had buildings full of statisticians working on the best possible predictive model to maximize the nine-figure media budgets spent on printing and postage. Given that cost, there was a massive premium placed on the quality of the models.
Today, predictive modeling is more widespread than ever. Virtually every digital ad tech platform leverages some type of predictive modeling methodology. For example, all DMP providers claim the ability to do real-time predictive analytics using a variety of methods, from traditional regression-based models to machine learning to proprietary algorithms. Personalization technologies do something similar with recommender systems, using techniques like collaborative filtering.
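To make the collaborative-filtering idea concrete, here is a minimal sketch in Python. All names and scores are invented for illustration: a handful of toy user–item ratings, cosine similarity between users, and a similarity-weighted prediction for an item a user hasn’t yet seen. Production recommenders work on vastly larger, sparser data, but the core mechanic is the same.

```python
from math import sqrt

# Toy user -> item interaction scores (hypothetical data for illustration).
ratings = {
    "alice": {"shoes": 5, "bags": 3, "hats": 4},
    "bob":   {"shoes": 4, "bags": 3, "hats": 5},
    "carol": {"shoes": 1, "bags": 5},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both scored."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = sqrt(sum(u[i] ** 2 for i in common))
    norm_v = sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Similarity-weighted average of other users' scores for `item`."""
    num = den = 0.0
    for other, prefs in ratings.items():
        if other == user or item not in prefs:
            continue
        sim = cosine(ratings[user], prefs)
        num += sim * prefs[item]
        den += abs(sim)
    return num / den if den else None

# Carol hasn't scored "hats"; predict it from users who look like her.
print(predict("carol", "hats"))
```

Because Carol’s neighbors both rated “hats” highly, her predicted score lands between their ratings, weighted by how similar each neighbor is to her.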
However, it appears to me that we have now confused two very separate concepts: the mere presence of analytics versus analytic effectiveness. That is, marketers now seem to assume that once an analytic methodology is built into our systems, we no longer have to validate the effectiveness of the predictions themselves. The most blatant example today is look-alike modeling in digital audience creation. The idea behind look-alikes is very simple: take a known audience and find individuals in a different population who look like that audience. But there remains the question of how strong the prediction actually is. You can build a model to predict anything; if the model is flat – if its scores barely separate likely responders from everyone else – it is basically no better than targeting at random. Today, there are simply too many conversations in which marketers assume that because we can build look-alike audiences, we should use them, even when the prediction is horrible. And this happens, in my opinion, because of a lack of understanding of the analytics itself and of how these predictions are made.
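The flat-model point can be demonstrated in a few lines. The sketch below uses simulated data (all numbers invented): it scores a toy population with two hypothetical look-alike models, one with real signal and one essentially random, and evaluates each with AUC, the probability that a randomly chosen converter outscores a randomly chosen non-converter. The flat model lands near 0.5, meaning it is no better than picking names out of a hat, no matter how sophisticated the platform that produced it claims to be.

```python
import random

random.seed(0)

def auc(scores, labels):
    """Pairwise (Mann-Whitney) estimate of AUC: the probability that a
    random positive example outscores a random negative example."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Simulated population: 10% converters, 90% non-converters.
labels = [1] * 100 + [0] * 900

# Model A: scores actually correlate with conversion (signal present).
signal_scores = [random.gauss(1.0 if y else 0.0, 1.0) for y in labels]

# Model B: a "flat" model -- scores are unrelated to the outcome.
flat_scores = [random.random() for _ in labels]

print(f"model with signal, AUC: {auc(signal_scores, labels):.2f}")
print(f"flat model, AUC:        {auc(flat_scores, labels):.2f}")
```

The takeaway is that validation like this is cheap: before activating a look-alike audience, a marketer can ask for the model’s AUC or lift on a holdout sample rather than trusting that the audience works simply because it exists.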
In summary, as someone rooted in statistics, I am obviously not opposed to the transformation of marketing into a much more disciplined and quantitative field. But every transformation brings with it the need for education and enlightenment, because we must not equate the mere presence of analytics with good analytics.