I pay close attention to feedback I receive on the KVC and other analytic frameworks we are developing. Many times I make revisions based on this feedback — that’s why the KVC Handbook is now on its fourth major edition.
One of the things I’ve heard is that the KVC model is too idealistic. Even as I confess to being idealistic by nature, I think that’s a fair criticism. And thanks to your feedback I use this blog (and my Knowledge Clinics) to address things not in the current edition of the book.
In the private sector, examples of damaging deviations from the ideal are as close as this morning’s Wall Street Journal. Last month, for example, I outlined what happens when the Knowledge Value Chain is broken by chance, or corrupted by intention.
This month we examine a case that has been in play for a while, from public affairs in the US. (Though even our readers in South Africa and elsewhere should take note — things like this could happen there too!)
In general the issue is the reliability of intelligence in an active war theater — here the ongoing actions against ISIS. Does this sound familiar? It should — read my earlier post about General Michael Flynn’s criticism and subsequent reshaping of the intelligence effort in Afghanistan.
And those of you who (like me) are baby boomers will remember this issue as it played out in Viet Nam.
Like the private-sector examples last month, the principle at issue here is what I call cooking the chain: deciding first what the desired outcome is, then selectively gathering data that supports that “conclusion” to the exclusion of other, more plausible alternatives.
In accounting, this is known as cooking the books. In social psychology, it’s related to what’s called confirmation bias. You decide what the answer should be, then you backfill and/or selectively choose the data to support that answer. You may even need to twist, distort, recast, or spin the data — fill in your favorite variation — to meet your needs.
This is, of course, a complete perversion of the admittedly idealistic KVC model, which recommends planning your data collection process from the top down — but not actually shaping the data itself to fit a foregone conclusion.
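To make the mechanics concrete, here is a toy sketch of the difference between an honest assessment and a “cooked” one. Everything here is invented for illustration (the numbers, the function names, the scoring scheme); it bears no relation to any actual intelligence method. It simply shows how pre-filtering evidence to fit a desired conclusion can flip the answer.

```python
# Hypothetical field reports, scored so that positive numbers suggest the
# adversary is weakening and negative numbers suggest it is still strong.
field_reports = [-4, -3, -2, 1, 2, -5, 3, -1]

def honest_assessment(reports):
    # Weigh all the evidence, favorable and unfavorable alike.
    return sum(reports) / len(reports)

def cooked_assessment(reports, desired_sign=1):
    # "Cooking the chain": decide the conclusion first (desired_sign),
    # then keep only the reports that support it.
    supporting = [r for r in reports if r * desired_sign > 0]
    return sum(supporting) / len(supporting)

print(honest_assessment(field_reports))  # -1.125: adversary still strong
print(cooked_assessment(field_reports))  # 2.0: a reassuring "conclusion"
```

Same data, opposite answers. The point of the KVC’s top-down planning is to decide *what questions to ask* in advance, never *what the data must say*.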
This is what was alleged to be happening with US Central Command (CENTCOM) intelligence about ISIS. This is serious business, since intelligence is designed to support high-level policy decisions. As the report so directly states, “Analytic integrity is crucial to good intelligence, and good intelligence is crucial to making informed policy judgments.” While Congress was being formally told that ISIS had been reduced to a “defensive crouch”, the realities in the morning news indicated otherwise — the frequent made-for-cinema horrors accompanied by the captures of the Iraqi cities of Ramadi (which has since been taken back) and Mosul (which has not, as noted on the map below).
We now know that the analysts assigned to the situation knew better, too: one of them initiated a whistleblower action that resulted in the convening of a US Congressional Task Force to examine his allegations. That group issued an unclassified report on August 10, 2016. Just below the surface of the dry, bureaucratic details and gov-speak are some fascinating revelations, chief among them the following.
The Task Force finally reports being “troubled” that, despite the complaint filed in May 2015 and “alarming [internal] survey results” that followed in December 2015, nothing significant was done by those responsible to correct the situation. On the contrary, the report notes that the CENTCOM leadership downplayed the significance of these events, calling such allegations “exaggerated”, and so far has not cooperated fully with some of the information requests from the Task Force.
The report details several changes made during 2014 that, though purportedly intended to improve the intelligence process, had the opposite effect of biasing it. These changes included:
The report implies that the systemic bias resulting from these changes originated from the then-current leadership of CENTCOM intelligence — described by their own analysts as “risk-averse and unwilling to accept uncertainty in intelligence analysis”. Dissenting opinions, highly valued in the intelligence culture, were to be discouraged.
In a rare display of optimism, the report notes that, by the time of its release, these leaders had been replaced and certain of the problems addressed.
What would motivate someone to cook the books on something as important as intelligence related to our national security? These are serious allegations that could easily end careers if found to be true. Though it mentions in passing the successive organizational changes made at CENTCOM at the time, the report stops short of concrete answers, or even of speculation about such explanatory details. Perhaps a classified version contains them. To be fair, the report does imply that the investigation is ongoing, so we’ll stay tuned.
There are plenty of historical analogues for this that could be instructive. And certainly businesses are not immune to this — there is an almost cosmological force that propels “happy talk” to the top of the organizational scrum, while too often bad news is suppressed or tweaked beyond recognition.