In speaking with people about the KVC framework, I have come to realize that some may have misinterpreted it to mean that all data leads inexorably to the production of value. That is regrettably not the case! What the model says is that in order to produce value, data has to be used in making decisions and taking actions, an instrumental view observed mainly in the breach, i.e., when data is not used that way. The KVC is thus a prescriptive, not a descriptive, model. It prescribes best practice; it does not necessarily describe how things are.
Organizations typically measure nearly everything — they’re awash in data. That’s a good start. The downside is that this usually leads to “infobesity,” an overload condition wherein they’re not using most of it effectively. The key to success is not the number of metrics captured — it’s the relationship of those metrics to the production of enterprise value, however that is defined by the organization. In fact, a smaller number of key metrics that combine to create a clear picture is always preferable to a larger number that create a fog of data.
The challenge, then, is knowing what those key metrics should be. In my previous post, I described the value of identifying and using metrics that are predictive, in preference to those that describe what happened after the fact. The power of this simple principle is illustrated in a case example ripped from today’s headlines — the metrics describing the global Covid-19 pandemic.
It’s interesting that (1) we all worldwide experienced the same virus at roughly the same time, and yet (2) the responses taken in different countries varied significantly — naturally leading to very different outcomes. Though real life is not a controlled experiment, it is nonetheless a living laboratory worthy of careful observation and documentation from which we can draw hypotheses, if not conclusions.
I’ll preface my remarks by noting that, while it seems that apples are being compared to apples in various reports, this is almost certainly not the case “under the hood.” It’s been documented that different countries, different US states, and even different medical providers have somewhat different ways of counting each metric, including key metrics like the cause of death.
That said, my informal observations have led me to the hypothesis that much of the actual variation is due to systemic differences in the policies and practices of testing — the ultimate source of Covid-19 data. Compare the following selected areas as of June 15:
| GEOGRAPHY | POPULATION | C-19 CASES | C-19 DEATHS |
| --- | --- | --- | --- |
| World | 7.6 billion | 7.9 million | 434.8 thousand |
| USA | 329.8 million (4.3%) | 2.1 million (26.6%) | 115.5 thousand (26.6%) |
| South Korea | 51.6 million (0.7%) | 12.1 thousand (0.2%) | 278 (0.1%) |
| China | 1.4 billion (18.4%) | 84.8 thousand (1.1%) | 4.6 thousand (1.1%) |

SOURCES: CDC, WHO, NETEC
These results are simply staggering. In a nutshell, the US, with a little over 4% of the world’s population, has experienced over 26% of the world’s C-19 cases and deaths. By contrast, China, with over four times the US population, has reported only about 4% as many cases and deaths as the US, a per-capita difference of roughly two orders of magnitude.
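The bracketed percentages in the table are simple shares of the world totals. A quick sketch (in Python, with the table’s June 15 figures hard-coded) reproduces them:

```python
# Recompute each region's share of the world totals shown in the table.
# All figures are taken directly from the table above (June 15, 2020).
WORLD = {"pop": 7.6e9, "cases": 7.9e6, "deaths": 434.8e3}
REGIONS = {
    "USA":         {"pop": 329.8e6, "cases": 2.1e6,  "deaths": 115.5e3},
    "South Korea": {"pop": 51.6e6,  "cases": 12.1e3, "deaths": 278},
    "China":       {"pop": 1.4e9,   "cases": 84.8e3, "deaths": 4.6e3},
}

def share(region: str, key: str) -> float:
    """A region's share of the world total, as a percentage."""
    return 100 * REGIONS[region][key] / WORLD[key]

for name in REGIONS:
    print(f"{name}: pop {share(name, 'pop'):.1f}%, "
          f"cases {share(name, 'cases'):.1f}%, "
          f"deaths {share(name, 'deaths'):.1f}%")
```

Rounded to one decimal place, the computed shares match the bracketed figures in the table, including the striking 4.3% of population versus 26.6% of cases and deaths for the US.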
Consider a simplified flow model of the Covid-19 process. Most of the metrics reported worldwide are taken at one of three points: Point 1 – tested numbers of cases, Point 2 – number of hospitalizations, and Point 3 – recovery or death. In the US, most of the reported metrics reflect Points 2 and 3 — they are scary, and perhaps the fear they induce does encourage people to take preventive measures like distancing and wearing masks. Seen from a population-wide perspective, though, they are lagging indicators — coming too late to be acted upon effectively.
The Point 1 indicator (virus test results) is a leading indicator, in the sense that if someone tests positive, he or she can (1) go into quarantine to avoid infecting others, and (2) be contact-traced to determine who else may have been infected. In other words, as a leading indicator, it can change behaviors in a desired way.
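The three-point flow model can be sketched as a small data structure. This is purely illustrative: the point numbers and labels follow the text, and the "leading" flag encodes this post's argument, not any standard epidemiological classification.

```python
# A minimal sketch of the three-point flow model described above.
# Only the leading indicator (Point 1) leaves room for preventive action.
from dataclasses import dataclass, field

@dataclass
class MeasurementPoint:
    point: int
    metric: str
    leading: bool                      # can the signal still change behavior?
    actions: list = field(default_factory=list)  # responses it enables

PIPELINE = [
    MeasurementPoint(1, "positive test result", True,
                     ["quarantine the individual",
                      "contact-trace possible exposures"]),
    MeasurementPoint(2, "hospitalization", False),
    MeasurementPoint(3, "recovery or death", False),
]

for p in PIPELINE:
    kind = "leading" if p.leading else "lagging"
    print(f"Point {p.point}: {p.metric} ({kind}) -> {p.actions}")
```

The asymmetry is the whole point: Points 2 and 3 carry no `actions` list because, by the time those metrics move, the opportunity to prevent infection has already passed.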
This is especially important in the Covid-19 situation, since, though it’s highly contagious, a high proportion of people who are infected and can infect others show few or no symptoms. The US policy of waiting until someone shows symptoms to test is fundamentally flawed in that regard — it ignores the societal consequences of having asymptomatic “spreaders” walking around. Astonishingly, this policy is still in place today in much of the US, and at least partly accounts for the poor outcomes of the US relative to other countries.
Predictive data provides granular intelligence that enables as-needed, narrow-bore responses to be mounted quickly — as Beijing has just done (June 2020) in response to new flare-ups. Without testing data, we in the US are left with only aggregate, grossly disproportionate responses — analogous to using a howitzer to kill a housefly.
In the Covid-19 case, having the right metrics — the leading or predictive ones — saves lives compared to relying on just the lagging metrics. Equally importantly, it means that economic activity can be maintained throughout — or scaled back in a targeted, minimally invasive way — rather than shutting down huge swaths of the economy for several months, which was the tragic result in the US.
As Wuhan, China opened up in May 2020, they tested nearly all 11 million residents in the entire city — some more than once — within a short time. Was this program expensive to develop and administer? I’m sure it was. Was it expensive relative to keeping their economy shuttered? I’m willing to bet that this program, like the best data-intensive initiatives, showed a high ROI in both economic and human terms.
Relying on lagging indicators is like driving on the highway using only your rear-view mirror. For business enterprises, the same lesson holds true. If you don’t have the right data, it’s hard to succeed in business. Conversely, having the right data — and using it effectively — are the first two steps toward producing enterprise value.