When we speak of an inference, we usually imply a leap – a jump from one point to another. Between these points is a gap, and the inference is the bridge across that gap. In typical forms of (deductive) logic, the gap is between fact and observation:
[a fact:] All meowing cats are hungry.
[an observation:] My cat is meowing.
[an inference:] Therefore, it is hungry.
But we mean another kind of leap when we speak of inference in innovation, and we commonly refer to this type of inference as less logical: as emotional, intuitive, or feeling-based. Consider:
[a fact:] Cats sometimes meow when they are hungry.
[a fact:] Cats sometimes meow when they are hurt.
[a fact:] Cats sometimes meow when they are happy.
[a fact:] It’s 6am.
[a fact:] I usually feed my cats at 6am.
[an observation:] My cat is meowing.
Here are three potentially valid inferences:
[an inference:] My cat is meowing because it is hungry.
[an inference:] My cat is meowing because it is happy.
[an inference:] My cat is meowing because it is hurt.
All three inferences are valid, because they follow from the facts. But one is much more likely than the others to be true. This is the bread and butter of a design-based argument for an innovative new idea: the ability to identify a valid inference based on constraints and, more importantly, to then treat that valid inference as fact – and build upon it. This is abductive logic, which Charles Peirce coined and described as a form of “Retroduction.” As he describes it, and I quote in full:
“The inquiry begins with pondering these phenomena in all their aspects, in the search of some point of view whence the wonder shall be resolved. At length a conjecture arises that furnishes a possible Explanation, by which I mean a syllogism exhibiting the surprising fact as necessarily consequent upon the circumstances of its occurrence together with the truth of the credible conjecture, as premisses. On account of this Explanation, the inquirer is led to regard his conjecture, or hypothesis, with favour. As I phrase it, he provisionally holds it to be “Plausible”; this acceptance ranges in different cases — and reasonably so — from a mere expression of it in the interrogative mood, as a question meriting attention and reply, up through all appraisals of Plausibility, to uncontrollable inclination to believe. The whole series of mental performances between the notice of the wonderful phenomenon and the acceptance of the hypothesis, during which the usually docile understanding seems to hold the bit between its teeth and to have us at its mercy — the search for pertinent circumstances and the laying hold of them, sometimes without our cognisance, the scrutiny of them, the dark labouring, the bursting out of the startling conjecture, the remarking of its smooth fitting to the anomaly, as it is turned back and forth like a key in a lock, and the final estimation of its Plausibility, I reckon as composing the First Stage of Inquiry. Its characteristic formula of reasoning I term Retroduction, i.e. reasoning from consequent to antecedent. In one respect the designation seems inappropriate; for in most instances where conjecture mounts the high peaks of Plausibility — and is really most worthy of confidence — the inquirer is unable definitely to formulate just what the explained wonder is; or can only do so in the light of the hypothesis. In short, it is a form of Argument rather than of Argumentation.
Retroduction does not afford security. The hypothesis must be tested.
This testing, to be logically valid, must honestly start, not as Retroduction starts, with scrutiny of the phenomena, but with examination of the hypothesis, and a muster of all sorts of conditional experiential consequences which would follow from its truth. This constitutes the Second Stage of Inquiry. For its characteristic form of reasoning our language has, for two centuries, been happily provided with the name Deduction.”
Built into an abduction is the possibility of error – a risk of making a leap that is valid, but false. I could, based on my assessment of the situation, rush my cat to the vet only to find out she’s hungry. More likely, and more unfortunately, I might feed my cat and never take her to the vet, only to find out something’s wrong. The signals I receive help me build an assessment of the situation. When I act on that assessment, there is risk, and when I’m wrong, there are consequences. These ideas of inference, risk, and consequences are at the heart of innovation. An innovation is, simplistically, the result of abductive reasoning; I wrote about this extensively in my second book.
But I don’t take my cat to the vet every morning. That’s because, while my cat may be meowing because something’s wrong, and there will be terrible consequences if I fail to take her to the vet and something is wrong, the practical burden of acting as if all inferences are true is too large. What’s more, other signals are at play that help me build a stronger case, intuitively and automatically, that the cat is simply hungry: she walks around, she makes eye contact, her coat is shiny, and there have been no other visible indications of problems over the last few days. Peirce’s word choice was purposeful (my emphasis added): “the search for pertinent circumstances and the laying hold of them, sometimes without our cognisance, the scrutiny of them, the dark labouring, the bursting out of the startling conjecture, the remarking of its smooth fitting to the anomaly, as it is turned back and forth like a key in a lock, and the final estimation of its Plausibility.” The process of identifying the abductive insight is long, winding, thoughtful, reflective, and murky.
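One rough way to make that intuition concrete is a toy Bayesian update: start with a prior over the three explanations, then fold in each additional signal. Every number below is invented purely for illustration – this is a sketch of the shape of the reasoning, not a model of real cats.

```python
import math

# Assumed prior plausibility of each explanation for a 6am meow,
# given that I usually feed my cats at 6am. Invented numbers.
priors = {"hungry": 0.90, "happy": 0.08, "hurt": 0.02}

# Assumed probability of seeing each extra signal under each
# explanation. Also invented numbers.
likelihoods = {
    "walks_around":      {"hungry": 0.9, "happy": 0.8, "hurt": 0.2},
    "makes_eye_contact": {"hungry": 0.8, "happy": 0.7, "hurt": 0.3},
    "shiny_coat":        {"hungry": 0.9, "happy": 0.9, "hurt": 0.4},
}

observed = ["walks_around", "makes_eye_contact", "shiny_coat"]

# Naive-Bayes-style update: multiply the prior by each observed
# signal's likelihood, then normalize so the posteriors sum to 1.
unnormalized = {
    h: priors[h] * math.prod(likelihoods[s][h] for s in observed)
    for h in priors
}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.3f}")
```

With these assumed numbers, "hungry" dominates the posterior: each individually weak signal nudges the same hypothesis upward, which is the "laying hold of pertinent circumstances" doing its quiet, cumulative work.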
My reflection on inference this morning is based on the thread between Sam Harris and Bruce Schneier on airport security. (If you missed it, start here, then read this, and finally, this.)
Profiling is a form of abductive reasoning, and in airports, it’s based on facts, observations, and inferences like this:
[a fact:] Some terrorists are Muslims.
[a fact:] Some Muslims wear distinctive forms of clothing, like the hijab.
[a fact:] Some Muslims have dark skin.
[an observation:] That person, who has dark skin and is about to enter airport security, is wearing a hijab.
[an inference:] That person is going to visit her relatives.
[an inference:] That person is going to blow up a plane.
Like the example of my cat meowing, both inferences could be true. And, like the example of my cat meowing, one inference is much more likely to be true. And, like the example of my cat meowing, the consequences of failing to act on one of the inferences could be catastrophic. And, like the example of my cat meowing, we shouldn’t act on the unlikely inference simply because of the potential for a catastrophe.
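The “much more likely” point is a base-rate problem, and a few lines of arithmetic show its shape. Every number here is an assumption chosen only for illustration: even a screener who is nearly always right will produce a flood of false alarms when the thing being screened for is vanishingly rare.

```python
# Base-rate arithmetic behind "one inference is much more likely."
# All numbers are invented for illustration only.
travelers = 800_000_000     # assumed annual airport screenings
terrorists = 10             # assumed attackers in that population
sensitivity = 0.99          # assumed: flags 99% of actual attackers
false_positive_rate = 0.01  # assumed: flags 1% of innocent travelers

true_alarms = terrorists * sensitivity
false_alarms = (travelers - terrorists) * false_positive_rate

# Probability that a flagged traveler is actually an attacker.
p_attacker_given_flag = true_alarms / (true_alarms + false_alarms)

print(f"false alarms: {false_alarms:,.0f}")
print(f"P(attacker | flagged): {p_attacker_given_flag:.7f}")
```

Under these assumed numbers, a flag is wrong in all but roughly one in a million cases – which is why the likelihood of an inference, not just the severity of its consequences, has to drive how we act on it.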
Harris claims that “We should profile Muslims, or anyone who looks like he or she could conceivably be Muslim, and we should be honest about it. And, again, I wouldn’t put someone who looks like me entirely outside the bull’s-eye… But there are people who do not stand a chance of being jihadists, and TSA screeners can know this at a glance… At a minimum, wouldn’t they want a system that anti-profiles—applying the minimum of attention to people who obviously pose no threat?”
Harris is actually making two claims at once:
- We should profile people who “look Muslim”
- We should anti-profile people who “do not look Muslim”
Both are complex ideas; he never unpacks what it means to “look Muslim” (and his assumptions are probably wrong). But while that’s a flaw in his argument, I have two larger problems with his jump to profiling, based on how I understand inference-based abductive reasoning to work.
First, I don’t think you can perform an abductive leap as quickly as would be necessary, in the context of an airport, to spot a terrorist. Abductive reasoning is based on multiple signals coming together in new and interesting ways, and it takes time for the brain to process those signals. Based on my own experience, I would argue that the “bursting out of the startling conjecture” that Peirce describes can’t actually happen without sleep, where neural pathways are reconfigured and rearranged. I think it’s physiologically impossible to make the necessary leaps at the speed this work demands.
But more importantly, abductive reasoning is inherently a risk-based process: the risk is that you may make an incorrect inference, and that incorrect inference has consequences. The negative consequences of the false profiling Harris describes are nearly all social, but they are enormous. These activities will further alienate a culture that is already vehemently angry with the western way of life, and will likely produce more of the behavior they are intended to stop. Because each time the leap is wrong (which will be nearly all of the time), the consequences will be reified.
We don’t have a strong understanding of how this type of inference works. We’re starting to understand more about the plasticity of the brain and the way snap judgments work, but we’re years away from truly knowing how new knowledge is produced. Consider the crapshoot that is new product development, and our collective inability to introduce innovations into the world with any repeatable success. I’m highly skeptical of building national policy on top of such a way of thinking.