It makes my day when I am asked a question I can’t answer completely and easily. The students in Columbia’s Information and Knowledge Strategy program rarely disappoint in this regard.
One IKNS student has been using the KVC framework in her “day job” to design program evaluations for a global NGO. Her question had several aspects: “Who designs Key Performance Indicators? Is that a job, a profession, or what? How and where do you learn how to do it? How do you know it’s right?”
My first instinct was to answer from my personal experience. Shortly after I left college — in The Digital Dark Ages — I was tasked to design a set of performance metrics for day care agencies under contract with the State of New Jersey. I was officially a “Contract Administrator”, not a metrics designer. I didn’t know what I was doing — metrics design was just part of my job. I found a book on it, and somehow managed (like the one-eyed man in the Land of the Blind) to pull it off to everyone’s satisfaction.
KPIs reached industrial strength with the development of the Balanced Scorecard in the early 1990s. People had begun to realize that financial performance metrics only took you so far; financial results were typically the outcomes of other events and activities that were operational in nature. Measuring those operational things (customer satisfaction, for example) was seen to have value in its own right. Metrics became a mini-industry: businesses were created to design and implement KPIs and, later, enterprise dashboards to monitor them.
In all this, there’s good news and bad news. The good news is that enterprise management has entered a new age of empiricism. Everyone wants to be evidence-based and metrics-driven, instead of gut-feel-and-instinct driven as previously.
The bad news is that in our quest for metrics, we are relying heavily on our ability to find the right metrics. Metrics do not grow on trees; they require resources (people, time, technologies) to develop and to implement. They themselves are investments, each having some (greater or lesser) ROI.
Metrics have major consequences — for people’s job performance, evaluations, compensation, and so on. So much so that there is a risk of “playing to the numbers”, wherein clearing the metric bar becomes as important as — or more important than — the underlying performance that is being measured.
Entire industries typically follow a standard and relatively narrow set of metrics in competing with each other. When one player bucks the trend and re-thinks the “metrics that matter”, it can have a transformative effect on that player’s strategic fortunes — and, eventually, on an entire industry.
When one major league baseball team (the under-capitalized 2002 Oakland A’s) adopted a new set of metrics (sabermetrics), it began to win more games than teams that had not yet changed. Michael Lewis famously dramatized this in his book Moneyball, which I recommend as background reading for anyone designing business metrics.
Now that this set of insights has permeated the pro baseball industry, it is less of a competitive advantage than it was then. The quest for analytic dominance should be an ongoing process, not a one-off project.
Measurement is good — IF you are measuring what matters, strategically and tactically. If you are not, measurement is at best a waste of resources — and at worst a distraction from seeing what actually does matter.
I recently discussed with my Columbia students the issue of selecting optimal metrics for Knowledge Management initiatives. I used the meme of shortening the path to value. First I drew the distinction (illustrated below) between “output” metrics and “outcome” metrics. Both are benefits-based, but the latter have the added value of measuring along the same dimensions that key stakeholders (the C-suite or investors, for example) would measure.
The other distinction I drew is between results that are more easily measured (and, therefore, more easily demonstrated) and those that are not. The matrix below shows the results for eight benefits frequently mentioned in studies of Knowledge Management projects. (I used a recent Accenture study as the primary basis, supplemented by my own professional experience.)
These benefits range from readily measurable and outcome-based (upper-right quadrant, like cost savings) to relatively problematic to measure and output-based (lower-left quadrant, like better decisions).
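The two-axis classification above can be sketched in a few lines of code. This is a minimal illustration only: the benefit names and their quadrant placements here are hypothetical examples for demonstration, not the actual entries of the matrix or the Accenture study.

```python
# Hypothetical sketch of the benefits matrix: each benefit is tagged with
# (measurability, metric_type), where measurability is "ready" or
# "problematic" and metric_type is "outcome" or "output".
benefits = {
    "cost savings":           ("ready", "outcome"),        # upper-right
    "revenue growth":         ("ready", "outcome"),        # upper-right
    "faster onboarding":      ("ready", "output"),
    "better decisions":       ("problematic", "output"),   # lower-left
    "improved collaboration": ("problematic", "output"),
}

def upper_right(benefits):
    """Return the benefits in the readily-measurable, outcome-based
    quadrant -- the 'path of least resistance' for justifying a project."""
    return sorted(
        name for name, (measurability, kind) in benefits.items()
        if measurability == "ready" and kind == "outcome"
    )

print(upper_right(benefits))  # -> ['cost savings', 'revenue growth']
```

The point of the sketch is simply that once benefits are scored on both axes, selecting the quadrant worth leading with becomes a mechanical filter rather than a debate.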
The upper-right is clearly where one finds the highest return. I argued that the most direct path to the upper right quadrant is the “path of least resistance” when it comes to justifying projects and budgets. This is where you want to be when the next business cycle-driven downturn hits, and budgets are cut back.
This principle could apply well beyond knowledge management projects, to almost any enterprise activity or initiative. You should always be “moving the needle on enterprise value” — in aspiration, if not in fact.