I love numbers.
I first realized this around the age of ten, when I started collecting baseball cards pretty seriously. On the back of each card was a bunch of numbers, each player’s ‘stats’, important metrics of how well he played. Hits, batting average, runs batted in (RBIs), runs scored, and so on.
It later turned out that much of what we took as gospel was wrong. Not wrong, exactly; it’s just that there was a more productive way of looking at the game. I’m speaking of sabermetrics, ‘the empirical analysis of baseball’, popularized in the book and movie Moneyball.
The main implication of sabermetrics was that the numbers that everybody knew and had been following were not always the best for predicting the ultimate value goal: winning games. Winning is the means by which professional ball teams (which are businesses) fill stadiums and make money. By correlating each player’s performance with how that player contributed to that value goal, a much better set of metrics was developed.
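To make the underlying move concrete, here is a minimal, purely illustrative sketch (not the sabermetricians’ actual models) of scoring candidate metrics by how strongly each correlates with the value goal. The figures and column names below are invented placeholders.

```python
# Illustrative only: rank candidate metrics by their correlation with the
# value goal (here, team wins). The numbers are invented placeholders,
# not real sabermetric data or models.
import pandas as pd

seasons = pd.DataFrame({
    "batting_avg":  [0.271, 0.265, 0.259, 0.280, 0.262],
    "on_base_pct":  [0.340, 0.331, 0.322, 0.355, 0.327],
    "rbi_per_game": [0.68, 0.61, 0.58, 0.74, 0.60],
    "wins":         [88, 81, 76, 95, 79],   # the value goal
})

candidates = ["batting_avg", "on_base_pct", "rbi_per_game"]

# Pearson correlation of each candidate metric with wins, strongest first.
ranking = seasons[candidates].corrwith(seasons["wins"]).sort_values(ascending=False)
print(ranking)
```

The point is the shape of the exercise: name the value goal first, then let the data say which metrics actually track it.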
The team that first operationalized these insights (the early-2000s Oakland Athletics) was able to put together a winning team on a player budget that was a fraction of, say, the NY Yankees’. Once their story was written, other teams started using the same system, and the competitive advantage eroded.
Non-baseball businesses are much the same. Everyone (whether line or staff) lives and dies by their numbers — and in any given industry, most companies live and die by more or less the same set of numbers. These metrics tend not to change much year-to-year, and eventually become ‘baked in’ as part of the operating culture of the company.
Companies pride themselves on being data-driven. But I believe what they are signalling is that they’re driven by things as they are. (We’ll be seeing a lot of this phrase, so let’s abbreviate it TATA.) When TATA change, the data change, and the company (ideally) responds and adapts.
The hidden assumption here, of course, is that the data set completely and accurately represents TATA. This is true only in the most ideal of worlds. In reality, each is an approximation of the other, and the inexactness of that approximation tends to grow over time as informational entropy sets in.
The assumption that data completely represents ‘things as they are’ is misleading, sometimes dangerously so. It can have extremely unpleasant consequences, for example when TATA somehow escape being captured in our data set (as happened in the 2008 housing market bubble and subsequent meltdown).
‘Big data’ is not the solution here. No matter how large our data sets, and how great our capacity to analyze them, there will always be a significant set of things that we are not measuring. And the things we are measuring may no longer represent TATA as accurately as they once did. The obvious solution is to regularly reexamine and revise the list of things being measured.
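One illustrative way to put that re-examination on a schedule is sketched below; the column names, window size, and drop threshold are all assumptions, not prescriptions. The idea is simply to compare how well each tracked metric correlated with the outcome in an older period versus the most recent one, and to flag the metrics whose link to the outcome has weakened.

```python
# Hypothetical sketch of a periodic metric review: flag tracked metrics whose
# correlation with the outcome has decayed between an older period and a
# recent one. Window size and threshold are assumptions, not prescriptions.
import pandas as pd

def flag_stale_metrics(history: pd.DataFrame, outcome: str,
                       recent_periods: int = 8, drop_threshold: float = 0.2):
    """history: one row per period; columns are tracked metrics plus `outcome`."""
    metrics = [c for c in history.columns if c != outcome]
    older = history.iloc[:-recent_periods]
    recent = history.iloc[-recent_periods:]
    stale = []
    for m in metrics:
        old_corr = older[m].corr(older[outcome])
        new_corr = recent[m].corr(recent[outcome])
        if old_corr - new_corr > drop_threshold:
            stale.append((m, round(old_corr, 2), round(new_corr, 2)))
    return stale  # metrics whose relationship to the outcome has weakened
```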
No doubt you’re wondering what reality theorist Plato had to say on this. His famous allegory of the cave, from The Republic, is helpful here. The prisoners in Plato’s cave have grown up seeing shadows cast on the wall by a fire in the cave, and have come to accept them as TATA. To them, the shadows cast on the cave wall are reality. When they move out of the cave and for the first time see things as they actually are in the sunlight, they are temporarily blinded. They refuse to believe their eyes — and argue that the shadows in the cave are the true reality.
This story has been retold many times in many ways over the intervening 2,500 years, most recently as the foundation of the film The Matrix.
Our numbers — the interlocking sets of metrics that we follow as a representation of the reality of our organizational situation — are like the shadows on the cave wall. Those of us who produce the numbers endeavor to make these representations — these competitive signs, signals, and simulacra — as accurate and as representative as possible. But ‘the map is not the territory’, as the expression has it — and we need to remind ourselves of that, and make compensatory adjustments, on a regular basis.
Many organizations are in a similar situation. They crank out the same numbers year after year, watch them closely, report them, make decisions based on them, and even peg their incentive systems to them.
But the numbers, though they may be accurate, may no longer reflect the competitive situation as it is. And efforts to change the metrics may be met with strong resistance — as was the case in Moneyball.
When A.G. Lafley first became CEO of Procter & Gamble in 2000, he found his top managers relying entirely on their printouts and management reports for a sense of customer sentiment and competitive activity. But he sensed that those reports no longer represented the true competitive reality.
In order to break these habits of information — and thus escape the cave — he required his top lieutenants to literally go into the field and visit customers’ homes as they used products made by P&G and its rivals. Only then could they gain a ‘fingertip feel’ for the competitive landscape.
P&G produced record performance under his leadership.
The essential Moneyball lesson is not (as some say) that metrics should be used in making decisions. They should — but you already knew that.
Moneyball’s true lessons are that (1) they need to be the right metrics, as defined by, and supportive of, your enterprise value system; (2) there is plenty of room to define them non-optimally; and (3) metrics carry an inertia of their own that resists efforts to change them.
You can measure just about anything — but any given metric will create value only to the extent it reflects current (and future) competitive realities. If those realities have changed — as they continually do — it may be time to re-calibrate your KPIs and other strategic metrics.