Analytics and Forecasting

The Flood: An Analysis

17 Jan 2013  

In my previous post I told the admittedly harrowing story of my near-demise in a flood caused by ‘Superstorm’ Sandy.  I consider this a personal intelligence failure of epic proportions.

Why would I — who sell my research and advice to companies on things related to threat, early warning, and shifting trends — publicly acknowledge this personal shortcoming in my own behavior?

You could say it’s part of the confessional ‘new intimacy’ in journalism.  But frankly I sense that my experience, far from being atypical, is characteristic of the way individuals and organizations (groups of individuals) misuse ‘intelligence’.  These are mistakes most of us make much of the time.  Only if we can harvest a little insight into why they happen can we begin to cut down on them.

In other words, I had inadvertently turned my life into a field experiment in applied intelligence — albeit one completely uncontrolled.

A KVC analysis

In my monograph “The Knowledge Value Chain:  How to Fix it When It Breaks”, I put forth the idea that intelligence, far from being a ‘cycle’ as often portrayed, is actually a linear connection that leads from data up through intelligence to the production of value.  One of the axioms of the model is that it’s essentially serial in nature—each step must be adequately fulfilled in order for the whole process to work.  If a link in the chain breaks, the chain itself breaks.
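To make the serial-chain idea concrete, here is a minimal sketch in Python (my own illustration, not anything from the monograph; the stage names are simply the ones this post mentions, and the monograph’s full chain may differ):

    # A minimal sketch of the chain's serial logic. Stage names are the
    # ones mentioned in this post; any intermediate stages are omitted.
    KVC_STAGES = ["DATA", "INTELLIGENCE", "DECISION", "ACTION", "VALUE"]

    def chain_produces_value(fulfilled):
        """Value is produced only if every link holds, in order.
        The first unfulfilled link breaks everything downstream."""
        for stage in KVC_STAGES:
            if stage not in fulfilled:
                print(f"Chain broke at {stage}; no value produced.")
                return False
        print("All links held; value produced.")
        return True

    # The Sandy case described below: data and intelligence were in
    # place, but decision and action never happened.
    chain_produces_value({"DATA", "INTELLIGENCE"})

The point of the sketch is that fulfilling the early links counts for nothing by itself; the first missing link stops the whole process, which is exactly what happened here.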

In this case, the bottom of the chain, DATA, was very much in place:  I had plenty of data.  As background, I had seen the movie An Inconvenient Truth when it first came out, and will always remember the scene simulating the overflowing of the Hudson River into downtown Manhattan (which is where I live, and which is essentially what happened).  More immediately, all day on Monday, October 29, 2012, there were Twitter feeds (especially from @EricHolthaus) about where Sandy currently was, and what the effect on tidal levels in New York Harbor was likely to be.  In hindsight, these alerts were amazingly accurate.

So where was the break?

The KVC model defines ‘intelligence’ as knowledge made available to someone with the power (i.e., authority and responsibility) to act on that knowledge.  That happened in this case, too, given that I was both the intelligence gatherer and the ‘executive’ agent in charge of doing something with and about it.

The DECISION and ACTION steps that follow intelligence, and lead directly to the production of VALUE, however, did not happen.  Like many New Yorkers and other people affected by Sandy, I took forecasts that this storm would do horrendous damage, and filed them under ‘wait and see’.  When the rain that had been predicted by the National Weather Service for Sunday morning failed to materialize — even by Monday afternoon — many of us assumed that the rest of the forecast was also incorrect.  That assumption was itself incorrect.

A tragic consequence of poor intelligence

I attribute this breakdown to three major causes. The first of these I call the cocoon of consensus.

The cocoon of consensus

While some argue that crowds have wisdom, others (and I often fall more in this camp) have argued that, on the contrary, crowds breed collective madness.  One of humankind’s defining characteristics is that we are social animals—to the point where social acceptance and ‘belonging to the tribe’ sometimes trump all else.  (See Facebook et al.)

My neighbors are mostly smart, highly accomplished people with above-average wealth and other resources.  All of my neighbors, like me, had a lot to lose by doing nothing.  Yet all did virtually nothing.  Some even compared notes before they did nothing — thus ‘cocooning’ themselves into a consensus that doing nothing was OK, that it was the smart choice to make.

Why?  At least partly, we had been through this ‘worst storm in history’ drill several times before in recent memory.  Like the villagers in Aesop’s fable “The Boy Who Cried Wolf”, our receptors for messages of doom and disaster were simply fatigued.

That brings us to our second major cause:  dynamic thresholds.

Dynamic thresholds

In perceptual psychology, there is a principle that I believe can be applied to understanding phenomena related to ‘group perceptiveness’ or intelligence:  the threshold.  A threshold is a perceptual boundary that is related only loosely to a physical boundary.  In testing people’s hearing, for example, the audiometer measures the relationship between a physical audio signal (in millivolts) and a person’s perception of that signal.  When the person raises his hand to signal that he hears the tone, that’s the threshold at that particular frequency.  It’s a psychological measure, not a physical one.

The same is true in instances of group perception (= intelligence).  But here it’s confounded by the additional constraints governing social acceptability (like cocoons of consensus).

Because they are psychological, thresholds can move based on various factors — among which prior disconfirming experience ranks very high.  The people of New York City had clearly been told on several previous occasions that a storm of mammoth proportions, never seen before, blah blah blah, was approaching.  And little happened.

I remember taping the windows of my then-new office (that must have been 2008’s Hurricane Hanna), and sandbagging the downstairs windows of our apartment during 2011’s Hurricane Irene.  The damage in both cases was nonexistent to minimal.

We were de-sensitized.  Like Aesop’s hapless villagers, we subconsciously raised our thresholds for this information.  ‘They’ve always been wrong, they’ll be wrong again this time.’  A dynamic threshold implies that it would have taken even more this time to convince us that really, no REALLY, this time was IT, please ignore the previous bobbles.
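As a rough illustration of how such a threshold might drift, here is a toy model (entirely my own invention, with invented severity numbers — not a claim about the actual psychophysics):

    # A toy model of a threshold that rises after each false alarm.
    # All numbers are illustrative, not measurements of anything.
    base_threshold = 5.0   # forecast severity at which we'd normally prepare
    step = 2.5             # how much each false alarm raises the bar

    # (storm, forecast severity, actual severity)
    storms = [("Hanna 2008", 7.0, 1.0),
              ("Irene 2011", 8.0, 2.0),
              ("Sandy 2012", 9.0, 9.5)]

    threshold = base_threshold
    for name, forecast, actual in storms:
        prepared = forecast >= threshold
        print(f"{name}: forecast {forecast}, threshold {threshold}, "
              f"{'prepared' if prepared else 'ignored'}")
        if prepared and actual < base_threshold:
            threshold += step   # a false alarm: the bar quietly rises

By the third storm, an accurate and dire forecast no longer clears the bar — which is the trap we fell into.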

Our thresholds were too high.  We listened, but did not hear.  This was made easier because our alerts were mediated.

Alert intermediation

The third, perhaps weaker, cause was the absence of a LOCAL VOICE behind all of this information. Yes, I read the rapid-fire tweets, including from the NYC Mayor’s Office – but, in our neighborhood, there was no direct local voice.  All of our ‘alerts’ were intermediated, in this case by TV and the Internet.  This made them seem less relevant and ‘actionable’.

Nothing came from our building’s Board or Managing Agent (as it did in previous years).  Nothing came from the NYC police, fire, or other first responders.  Our neighbors (as I mentioned above) individually and collectively said little.

In some neighborhoods where (non-mandatory) evacuation orders were given, I’m told that sound trucks went around, and in some cases there were even door-to-door appeals.  But our neighborhood was not one of those.

My tweet of October 30, 2012 read, “I’ve paid a steep price for not trusting my instincts and the intelligence available to me.”  I humbly commit this episode to the considerable body of literature on intelligence failures, and how to avoid them, with the wish that it may help you and your organization—be it your family or your business—avoid such disasters.

Photo copyright © 2012 Tim Wood Powell.  All rights reserved.

