What Freestyle Chess Teaches Us About Marketing Investments ... or Why Trying to Mate with Technology May Not Work

"My favorite victory is when it is not even clear where my opponent made a mistake." —  Peter Leko, the first person to earn the rank of Grandmaster before turning 15.

Marketers looking for a competitive advantage would be hard-pressed to find anyone who thinks Big Data is not a great place to find it. Given the explosive growth in readily available information, faster processors, and low storage costs, beating the other guy should be as simple as making sure your analysts and your technologies are stronger than theirs. The folks who develop chess-playing engines used to think so too.

In 1985, Garry Kasparov (probably the greatest chess player of all time) faced off against 32 of the best chess engines simultaneously. Five hours later he had a perfect 32-0-0 record; no one was surprised. Though these computers could analyze more options than Kasparov could, the result simply confirmed that they were still too weak to match him on strategy.

In the following years, the number of moves a computer could analyze grew exponentially, widening the analytical advantage chess engines held over humans by several orders of magnitude. So much so that in 1997, when Kasparov faced off against a single computer, Deep Blue (admittedly, a $10 million computer), he was beaten. The technologists rejoiced, declared it the end of an era, and predicted that chess would be solved in the very near future. That is, since the number of possible board configurations to evaluate is finite (10^121, at the extreme), it would be only a few short years until technology was powerful enough to play the game with such mathematical perfection that it would never lose, even under a tournament's time constraints. Simpler games such as Connect Four and Checkers have been solved in this sense. Their declaration was right; their prediction was wrong.

Shortly after his defeat, Kasparov started a new form of chess — one where humans played with technology instead of against it. The most radical version of this experiment is freestyle chess. The board is set up the same as it is in regular chess, the pieces move as they always have, and players have a limited time to make all of their moves. The difference is that there are no rules about who or what can play. Any grouping of humans and/or computers can form a team, and there are no limits on the skill or power allowed on a team — basically, a team can use anything and everything it wants to give it the best chance to win. The goal was to see who would win when chess is played at the highest possible level.

In June of 2005, one such freestyle tournament (played entirely over the Internet) generated a lot of buzz. For starters, the computer-only teams were eliminated pretty quickly — even the most advanced supercomputers were no match for a human with a decent laptop, a mild surprise. With only human-plus-technology teams left (there were no human-only entrants), what happened next was completely unexpected. An anonymous team beat technology-assisted grandmaster after technology-assisted grandmaster on its way to winning the tournament. The rumor mill saw this as evidence that Garry Kasparov (who had retired from competitive chess only a few months earlier) was back and likely using a military-grade supercomputer. Sure, it seemed odd that Kasparov's retirement would be so short, but he was a bit eccentric and had been a strong proponent of computer-assisted chess from the start; plus, how else could you explain unknowns routing teams led by top-tier professionals and assisted by computers?

The truth was much more fantastic. The anonymous team turned out to be a part-time soccer coach paired with a full-time database administrator (each ranked more than 1000 points lower than the grandmasters they had beaten), and their technology was nothing more than three run-of-the-mill computers with commonly available chess programs. This was the chess equivalent of your local high school team beating the Yankees ... five times in a row.

How did they do it?

The answer was simple: better process. If stronger technology mattered most, the super-computers would have won. If stronger humans were the deciding factor, the grandmasters would have won. The key to victory was the only thing left, the process used to decide which move to make.

There was no magic in the individual steps the amateurs took to decide which move to make. They studied their position, developed a list of candidate moves, graded those moves, and then chose the one with the most upside ... just like everyone else. What made their process better was its relentless efficiency in getting the right information to the right place.

By making it quick and easy to exchange information (computer to computer, computer to human, human to computer), the amateurs could distribute the work to the resource that did it best (e.g., humans are better at winnowing consideration sets and resolving conflicting recommendations, while computers can accurately forecast the impact of moves much further into the future); and because they spent comparatively little time giving each resource the information it needed, each resource had more time to do its task well. As the games developed, this tipped the scales heavily in the amateurs' favor, giving them a much deeper understanding of their current position and the long-term ramifications of each move.

As the volume, velocity, and variety of marketing data continue to grow, so will the opportunity to get a leg up on your competition. In response, some will hire the brightest analysts, others will restock their data centers with world-class technology, and there will be companies that do both. However, the smart money is on those who focus on building systems that integrate the two in a way that multiplies the strengths of each.
