Eric Anderson and Florian Zettelmeyer, professors of marketing at the Kellogg School, recount a story they call the Analytics Paradox.
“A young firm starts out making many mistakes. Eager to improve, it collects lots of data and builds cool new models,” says Eric Anderson.
“Over time, these models allow the young firm to find the best answers and implement these with great precision. The young firm becomes a mature firm that is great at analytics. Then one day the models stop working. The mistakes that fuelled the models are now gone, and the analytic models are starved.”
The paradox is that the better the firm gets at gleaning insights from analytics – and acting on those insights – the more streamlined their operations become. This, in turn, makes the data resulting from those operations more homogeneous. But over time, homogeneity becomes a problem: variable data – and, yes, mistakes – allow algorithms to continue to learn and optimise.
As the variability in the new data shrinks, the algorithms don’t have much to work with anymore. The paradox, says Florian Zettelmeyer, leads to a rather startling recommendation: Occasionally, you need to purposely mess stuff up.
“You design variation into your data in order to be able to derive long-run insight.”
Zettelmeyer and Anderson are academic directors of Kellogg’s Executive Education programme Leading with Big Data and Analytics; they are also writing a book about data science.
They offer a look at how the best firms have found a way to sidestep the Analytics Paradox.
In some sense, the value in big data lies in its messiness – in the often unexpected variation in how events play out, and the myriad ways those events reveal connections between variables that help people make better decisions.
“In theory, the best manager for analytics is the one who walks into the office every morning and flips a coin to make all decisions,” Anderson says, “because if you make all your decisions by flipping a coin, you will generate the best possible data for your analytics engine.”
The problem, he adds, is that managers flipping coins tend to get fired very quickly. “The managers who survive are the ones who are really good at implementing decisions with great precision.”
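There is a middle ground between coin-flipping and rigid precision, and it has a name in the experimentation literature: an epsilon-greedy policy, which takes the believed-best action most of the time and a random one occasionally. The professors do not prescribe this specific mechanism; the sketch below, with an invented action set and exploration rate, simply illustrates the idea in Python.

```python
import random

def choose_action(best_action, all_actions, epsilon=0.05):
    """Mostly exploit the believed-best action, but with probability
    epsilon take a random one, so the logged data keeps the variation
    that future models need in order to learn."""
    if random.random() < epsilon:
        return random.choice(all_actions)  # the deliberate "mistake"
    return best_action                     # the optimised default

# Hypothetical example: delivery promises, in days.
actions = [1, 2, 3]
log = [choose_action(best_action=2, all_actions=actions) for _ in range(1000)]
print({a: log.count(a) for a in actions})  # mostly 2s, with a few 1s and 3s
```

The resulting log is dominated by the precise, efficient choice, but it still contains enough off-policy observations to keep the analytics engine fed.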
To understand how the best teams can find their operations too optimised for their own good, Anderson offers this hypothetical example.
“Right now, your company offers two-day delivery, and someone says to you, ‘I would like you to go back and analyse the historical data. Tell me whether we should have two-day delivery or move to one-day delivery.’ Could you answer that question with your data?”
If your delivery process is being overseen by a high-performing team focused squarely on efficiency, then you probably cannot answer this question with data.
“If you are really good at delivery – if you’ve been running operations efficiently – how many days does it take?” asks Anderson. “Two days. The guy who was messing up and taking four days to deliver a package was fired. The one who was delivering in three days sometimes and one day other times got fired.
“You’re left with all of the managers who deliver in two days – you’ve built an organisation that is so good at delivering things that it almost always happens in two days.”
Hamstrung by your own success, you do not have the data to know whether a better delivery strategy exists, or how you might successfully move to a new model.
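To see the problem in miniature, consider a toy calculation (all numbers invented for illustration): if every historical order shipped in exactly two days, the delivery-time variable has zero variance, and no regression can estimate its effect on any outcome.

```python
import numpy as np

# Invented historical data: an optimised operation delivers
# every order in exactly two days.
delivery_days = np.array([2.0, 2.0, 2.0, 2.0, 2.0])
repeat_purchase = np.array([1, 0, 1, 1, 0])  # hypothetical outcome

# The OLS slope is cov(x, y) / var(x); with no variation in x,
# the denominator is zero and the effect cannot be estimated.
var_x = delivery_days.var()
cov_xy = np.cov(delivery_days, repeat_purchase)[0, 1]
print(var_x)   # 0.0
print(cov_xy)  # 0.0 -> the slope 0/0 is undefined
```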
“If I don’t occasionally do the wrong thing, I will never know whether what I think is the best actually still is the best,” says Zettelmeyer.
Of course, firms have plenty of good reasons for not wanting to reward incompetence, or promote a manager whose decision-making seems limited to coin flips.
Instead, top firms have adopted a fundamentally different strategy for thinking about big data.
“The best firms are heavily investing now in creating data, designing data,” says Anderson. “They’re purposefully injecting variability in the data.”
Whether they are experimenting with how many days it takes to deliver a package, how to set prices, or how best to maintain an ageing fleet of vehicles, these elite firms understand that experimentation and variability need to be built into the organisation’s DNA.
Just a fraction of firms, maybe five percent, are doing this, says Anderson. So what do most managers need to do differently?
“When you take a business action, you need to keep in mind what the effect is on the usefulness of the data that are going to emerge from it,” says Zettelmeyer.
That requires the foresight to understand the questions you may wish to answer in the future, as well as the discipline to work backward from those questions to ensure that you set yourself up to get data that are rich and helpful.
A company rolling out a national advertising campaign, for instance, might decide to tweak the campaign in important ways only in select markets, or to stagger the roll-out by region. While there may be short-term costs in terms of efficiency and optimisation, the resulting data have the potential to teach the company going forward.
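As a sketch of what that staggering might look like (the market names and wave size here are hypothetical), randomising the order in which regions receive the tweaked campaign keeps the roll-out from being confounded with geography:

```python
import random

# Hypothetical markets for a national campaign.
markets = ["Northeast", "Southeast", "Midwest", "Southwest", "West", "Plains"]

random.seed(42)          # reproducible assignment
random.shuffle(markets)  # randomise so waves aren't confounded with geography

# Stagger the roll-out: two markets per wave, one wave per month.
waves = [markets[i:i + 2] for i in range(0, len(markets), 2)]
for month, wave in enumerate(waves, start=1):
    print(f"Month {month}: launch in {', '.join(wave)}")
```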
Such foresight cannot be the purview of a single employee or team at an organisation, the pair stress. That’s because decisions about how to experiment should be made with specific problems in mind.
“It cuts across the whole organisation, so it has to be a cultural change in how we think about our day-to-day operations,” says Anderson.
The key, Zettelmeyer says, is “to transport yourself into the situation you’re going to find yourself in in the future.” What data would be helpful to have in order to make the next decision, and the next one? What relationship between variables do you want to demonstrate? And how could you design an experiment to demonstrate that link, given your existing capabilities and constraints?
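One concrete form that working backward can take is a standard power calculation: deciding, before acting, how much deliberately varied data the experiment would need in order to detect the smallest effect you care about. A sketch with invented numbers:

```python
from scipy.stats import norm

def sample_size_per_arm(delta, sigma, alpha=0.05, power=0.8):
    """Standard two-sample size formula: observations needed per arm
    to detect a mean difference `delta` when the outcome has
    standard deviation `sigma`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Invented numbers: detect a 0.5-point lift in a satisfaction score
# whose standard deviation is 2.0.
print(round(sample_size_per_arm(delta=0.5, sigma=2.0)))  # ~251 per arm
```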
And keep in mind that the infrastructure this requires may be quite different from what is necessary for managing much of the big data that flows through an organisation. The high-level dashboards that senior leaders are used to, for instance, may not be capable of distinguishing among subtle but important differences in when a campaign was rolled out or how a delivery route was established.
“It’s a very different thought process in terms of how you would actually build an IT system to support experimentation,” says Anderson.
Thus, rather than trying to outsource this work to a dedicated data-science team – or worse, a single piece of software – Anderson and Zettelmeyer recommend that firms train managers to think about, and ask questions of, their data.
“It requires a working knowledge of data science,” says Zettelmeyer. “This is a skill set that managers need in order to even be conscious that this is something they need to take charge of.”