I had the great pleasure recently to address the students in Guy St. Clair’s class (K4301) at Columbia University’s Information and Knowledge Strategies (IKNS) program. These are tomorrow’s leaders in developing and managing knowledge-based strategies.
Of course I told them about the Knowledge Value Chain, and was gratified by their positive reactions. I also got my most frequently asked question, one that over time has become one of my most challenging.
That question is, “What is your favorite metric for knowledge?”
It’s a logical question, and one it would be useful to answer, if it could be answered definitively. But there is no single correct answer. And it’s challenging, in that it may signify that I have fallen short in my efforts to communicate my core message: integrating knowledge tightly with business processes.
In the KVC Handbook 4.0 (upcoming) I identify three classes of metric that can be applied to knowledge processes and assets: inputs, outputs, and outcomes.
Inputs are (not surprisingly) what you put into the process — resources like money, people, time, and effort.
Outputs are ‘what you get’ for the inputs: things like website pages produced, documents captured, users served, and reports delivered. The ratio of Outputs to Inputs measures the efficiency of a process.
While these are all valid measures, they each fall short of measuring the impact of our investments and efforts on our organization’s goals and strategies. These are Outcomes — benefits, value received, impact, and business results. The ratio of Outcomes to Inputs measures the effectiveness of a process.
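The two ratios can be made concrete with a minimal sketch. All figures below are hypothetical, invented purely to illustrate the efficiency and effectiveness calculations described above:

```python
# Hypothetical annual figures for a single knowledge process.
# Every number here is an assumption for illustration only.
inputs = 50_000.0     # resources invested: total cost in dollars (assumed)
outputs = 1_200       # e.g., research reports delivered (assumed)
outcomes = 180_000.0  # incremental business value attributed (assumed)

# Efficiency: how much output each unit of input buys.
efficiency = outputs / inputs

# Effectiveness: how much business value each unit of input returns
# (an ROI-style figure).
effectiveness = outcomes / inputs

print(f"Efficiency:    {efficiency:.4f} outputs per dollar")
print(f"Effectiveness: {effectiveness:.2f} dollars returned per dollar")
```

With these assumed numbers, the process delivers 0.024 outputs and returns $3.60 in outcomes per dollar invested; the second figure is the one that speaks the language of a business case.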
As we move our metrics toward the right in our diagram, we move from measuring investment toward measuring return on that investment. The farther we go to the right, the more our knowledge metric approximates the business metric that it supports — and the stronger is our business case and ROI claim.
Consequently, my advice is to — rather than create new metrics for knowledge — use those business metrics already in place. If for example a knowledge process supports sales-related decisions, its value derives from the incremental sales that result from its deployment.
Though this may seem obvious, it is rarely practiced in the real world. Why?
Claims of such benefits are easily distorted or exaggerated, and may even put the claimant in a position where his or her credibility is questioned. As a result, people tend to measure outputs rather than outcomes. Outputs are typically more tangible and therefore easier to measure than outcomes, but in most cases they fall far short of measuring impact.
In short, too often what we are measuring doesn’t matter — because what matters, we can’t measure. This is the core conundrum of knowledge metrics.
Another option is to measure proxies for outcomes — typically consisting of client/user opinions of the value added by a knowledge process or product. This is a pretty good approach — provided that it explicitly includes reference to how the knowledge contributed to business metrics and problem solutions. In other words, it’s not just a question of whether the knowledge service was good, bad, or indifferent — but rather how much (on a quantitative scale) it contributed, and what the value of that contribution was.
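A proxy metric of this kind can be sketched as a simple survey aggregation. The survey fields, scale, and responses below are all assumptions, chosen only to show how user ratings might be tied back to a quantitative contribution and a dollar value:

```python
# Hypothetical post-engagement survey responses. Each respondent rates
# how much the knowledge product contributed to solving their business
# problem (0-10 scale, assumed) and estimates the dollar value of that
# contribution. All fields and numbers are illustrative.
responses = [
    {"contribution": 8, "value_estimate": 12_000},
    {"contribution": 6, "value_estimate": 4_500},
    {"contribution": 9, "value_estimate": 20_000},
]

# Average quantitative contribution: how much the service helped,
# not merely whether it was liked.
avg_contribution = sum(r["contribution"] for r in responses) / len(responses)

# Total value attributed by users: the outcome proxy in business terms.
total_value = sum(r["value_estimate"] for r in responses)

print(f"Average contribution rating: {avg_contribution:.1f} / 10")
print(f"Value attributed by users:   ${total_value:,}")
```

The design point is that each response pairs a satisfaction-style rating with an explicit value estimate, so the proxy stays anchored to a business metric rather than drifting into a pure opinion score.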
My best answer to “What is your favorite metric?” is “The metric that my user/client uses to measure whether his or her problem has been solved, with my help.” This approach links and aligns the value and knowledge parts of the process.
When our client wins, we win.