
When "Statistically Significant" Isn't

More and more online marketers are doing more and more testing. There's blogosphere buzz around testing offers, testing web page design, testing AdWords copy, etc. And all this testing is a Very Good Thing, for well-designed tests can literally transform your business.

Question: When you get a "statistically significant" uptick from a test, is it always a winner? Answer: Usually, but not always. There are three situations in which your stats software will bless a set of results as "statistically significant" when really they're not.

Huge Sample, Small Effect

The larger your test sample (impressions, clicks, catalogs mailed, whatever), the smaller the effect you can detect. It is a little-known fact that if a test is really huge, you'll nearly always find a statistically significant difference between the control and test cells. The problem is that the difference may be too small to have any practical business significance. For example, with two cells of 10,000,000 apiece, a 1.01% response rate is statistically different from a 1% response rate (t=2.24, p=0.03). However, a single basis point of difference has no business impact for the typical direct marketer.

Takeaway advice: Make sure statistically significant effects are large enough to have business significance. (A quick numerical sketch of this example appears at the end of the post.)

Appropriate Sample, Huge Outlier

Most statistical tests rest on the assumption that the noise in your test is normally distributed. This is usually a great assumption, but sometimes it isn't true. Under a normal assumption, about 95% of the data should fall within 2 standard deviations of the mean, 99.7% should fall within 3 standard deviations, and you should essentially never see data 5 or 6 standard deviations out. When a stats package sees a 5 or 10 sigma event, the software quivers with excitement and starts ringing happy bells. But if the assumptions about the error model were wrong, you could be led to make a bad decision (hopefully not as significant as Bear Stearns' recent loss of $1.6 billion).

Takeaway advice: Check your data for outliers. For direct marketers, an outlier is often a single gigantic order, making whichever test cell was lucky enough to receive it look like a grand slam. If you find atypical events are driving your significance, toss 'em out. (A simple outlier screen is sketched at the end of the post.)

Appropriate Sample, Small Time Period

Most statistical tests also rest on the assumption that the noise in your test is stationary, which is a fancy term for "not changing over time." A retailer with a high-traffic site running a multivariate (MVT) test could see a statistically significant winner in a day or two. However, if all the data came from three weekdays in the first quarter, you don't know whether those results will hold on weekends, or in Q4.

Takeaway advice: Make sure your tests run long enough to be representative. During the holiday peak, roll out early winners quickly (so as not to miss the opportunity), but keep a small holdout back-test to confirm your early results. (A day-of-week breakdown is sketched at the end of the post.)

• • •

Direct marketing testing is both art and science. The science is designing good tests and running the stats. The art is knowing what to test, how to interpret results, and how to use the findings to significantly improve your business.
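Here is a minimal numerical sketch of the huge-sample example above, in Python. The choice of SciPy and of a pooled two-proportion z-test is my assumption; the post only reports t=2.24, p=0.03.

    # Two cells of 10,000,000 apiece: 1.01% vs. 1.00% response.
    from math import sqrt
    from scipy.stats import norm

    n = 10_000_000
    p_test, p_ctrl = 0.0101, 0.0100        # 101,000 vs. 100,000 responders

    # Pooled two-proportion z-test on the difference in response rates.
    p_pool = (p_test + p_ctrl) / 2
    se = sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = (p_test - p_ctrl) / se
    p_value = 2 * norm.sf(abs(z))

    print(f"z = {z:.2f}, p = {p_value:.3f}")   # about z = 2.24, p = 0.025
    # "Significant", yet the lift is one basis point, which may be far
    # below any practical business threshold.

The practical guard is to decide up front what minimum lift would be worth acting on, and only then ask whether the test cleared it.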
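For the outlier trap, here is a rough sketch of the kind of screen the second takeaway suggests. The order values, random seed, and 5-standard-deviation threshold are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    control = rng.gamma(shape=9.0, scale=7.0, size=2_000)  # typical order values, control cell
    test = rng.gamma(shape=9.0, scale=7.0, size=2_000)     # typical order values, test cell
    test = np.append(test, 25_000.0)                       # one whale order lands in the test cell

    def flag_outliers(orders, k=5.0):
        # Crude screen: flag orders more than k standard deviations from the median.
        return np.abs(orders - np.median(orders)) > k * orders.std()

    for name, cell in [("control", control), ("test", test)]:
        keep = ~flag_outliers(cell)
        print(f"{name}: mean order = {cell.mean():.1f}, "
              f"mean without flagged orders = {cell[keep].mean():.1f}, "
              f"orders flagged = {int((~keep).sum())}")
    # If the apparent lift shrinks to nothing once the flagged orders are
    # removed, a single lucky whale (not the treatment) drove the result.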
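And for the small-time-period trap, a sketch of the sanity check the third takeaway implies: slice the results by time before believing them. The visit log, field layout, and weekday/weekend split are hypothetical.

    from collections import defaultdict
    from datetime import datetime

    # Each record: (timestamp, cell, converted), e.g. parsed from an analytics export.
    visits = [
        (datetime(2008, 4, 1, 10, 30), "test", 1),
        (datetime(2008, 4, 1, 11, 45), "control", 0),
        (datetime(2008, 4, 5, 14, 10), "test", 0),
        # ... thousands more rows in a real test
    ]

    tally = defaultdict(lambda: [0, 0])            # (slice, cell) -> [conversions, visits]
    for ts, cell, converted in visits:
        time_slice = "weekend" if ts.weekday() >= 5 else "weekday"
        tally[(time_slice, cell)][0] += converted
        tally[(time_slice, cell)][1] += 1

    for (time_slice, cell), (conv, n) in sorted(tally.items()):
        print(f"{time_slice:8s} {cell:8s} rate = {conv / n:.2%} (n = {n})")
    # A winner that only ever saw three Q1 weekdays may not hold up on
    # weekends, or in Q4.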
