A reader responded to yesterday’s post, and so I’ll offer some clarification. First, the main criticism:

I’m not sure I fully agree with the dichotomy you use, however. Isn’t a known unknown (e.g., I “know” that a particular factor might influence another factor, but I have not yet tested how it might do so empirically – ergo a known “unknown”) essentially the same thing as an unknown unknown (e.g., I “know” that it’s possible that a factor I have not yet considered might influence another factor, but I have not yet tested how it might do so empirically, because I don’t yet know what the factor is, much less how to test it empirically – ergo an “unknown” unknown)?

The difference doesn’t seem to me to be one of kind, but rather degree (and ultimately the patience to map out all eventualities as best as one can) . . . . I get the gist of what you’re talking about, but wonder if the nomenclature, although intriguing on a first read, might strike some as a bit “contrived” on a second.

My point is that it’s impossible to have any understanding of the risk associated with a factor if you never think of that factor (despite the fact that you know factors exist which you haven’t thought of).

Basically, if you can’t name a factor, it won’t go into your analysis. Just considering the general class of unknown unknowns when making a decision is not enough. There are likely specific, significant instances of that class which, by definition, you’re not thinking about. What forms the “difference in kind” is that you don’t know about them.
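To make this concrete, here is a toy sketch (all factor names and magnitudes are hypothetical, invented for illustration): a risk estimate only aggregates the factors the analyst explicitly enumerated, so a factor that was never named contributes exactly nothing to the estimate, however large its real effect.

```python
# Toy illustration: a risk model only reflects factors that were named.
# All factor names and impact values below are hypothetical.

def estimated_risk(considered_factors):
    """Sum the expected impact of every factor the analyst listed."""
    return sum(considered_factors.values())

# The analyst's model: only the questions they thought to ask.
considered = {
    "cost_to_industry": 10,
    "enforcement_difficulty": 5,
}

# Reality includes a factor the analyst never named.
actual = dict(considered, unconsidered_factor=1000)

print(estimated_risk(considered))  # the model's view: 15
print(estimated_risk(actual))      # what actually happens: 1015
```

The gap between the two numbers is invisible from inside the model; no amount of care in weighing the listed factors recovers the one that was never listed.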

Consider this specific instance of an unknown unknown:

I decide to ban DDT in Africa. Suppose before I do so, I think over a few questions: How many birds/animals will be saved? How much will the ban cost industry? How hard will the ban be to enforce? These are known unknowns.

As a good risk-analyst, I understand that these questions do not fully describe the domain of possible consequences. I know that there exist unknown unknowns, things I haven’t thought of. Maybe some of these are bad. But hey, what more can I do? I impose the ban.

Then I find that millions begin to die of malaria, a consequence I hadn’t considered. That specific consequence, phrased as a question (e.g. “How might the ban affect rates of malaria?”) was an unknown unknown at the time of decision-making.

Why? Because the question was never considered in the analysis, despite the analyst’s abstract awareness that unthought-of consequences exist.

In theory, any unknown unknown can be transformed into a known unknown, but doing so exhaustively would require infinite time: literally, you’d have to think of everything.