“So convenient a thing it is to be a reasonable creature, since it enables one to make a reason for everything one has a mind to do.” – Ben Franklin
In the 1930s a researcher named Norman Maier conducted a curious psychological experiment. He brought people into a room with two long ropes hanging from the ceiling and instructed them to find as many ways as possible to tie the ropes together. The ropes were separated by just enough distance that you couldn’t simply grab one and walk to the other, but the room also contained a variety of objects, like a length of clothesline and a long pole. Most people quickly discovered that they could tie the clothesline to one rope and walk it to the other, or that they could use the pole to reach out and draw one of the ropes toward them. Once they got through the obvious solutions, however, everyone was stumped.
After they had been confused for a while, Maier, who walked around the room throughout the experiment, would casually brush one of the ropes with his shoulder, causing it to sway. Within a minute, most people would then solve the puzzle by tying something heavy to one rope and swinging it like a pendulum to get to the other.
The curious bit came when Maier asked them where they got the idea for the swinging. Almost invariably, they would say something like “I just thought suddenly of a grandfather clock,” or “I remembered reading Tarzan as a child.” Not one of them mentioned Maier brushing the rope with his shoulder, even though they had the idea immediately after it started to sway.
These people had no reason to lie, and indeed had no clue that they were. The idea came to them subconsciously, and they honestly couldn’t identify its origin. The tricky part is that we seem to be hardwired to make up (and believe) plausible reasons for our behavior, even when we don’t know the real one. Psychologists have a name for this combination of our inability to articulate our internal motivations and our tendency to invent reasons: the Perils of Introspection.
So what does this have to do with evaluating design solutions? Everything.
Every time a usability tester asks a user why they clicked the left button, or a focus group facilitator asks a customer why they prefer red over blue, we have to be aware that the person quite likely doesn’t know and is almost certainly (though unintentionally) going to make something up. Even worse, researchers Wilson and Lisle have shown that if you’re asked to explain a choice before you make it, you might end up picking the option that’s easier to explain, not the one you would have chosen in the real world.
There are numerous examples of successful products (like the Aeron Chair) that almost didn’t happen due to bad reviews resulting from the perils of introspection. Malcolm Gladwell gives a good overview of the problem and some great examples in this talk from PopTech in 2004:
So now that we know there’s a problem, what do we do?
The solution can be found in one of the most important things we know from design research: you can’t just talk to people; you must also watch them in action. In research, we understand that what people say and what they do often don’t line up, and it’s in those inconsistencies that some of the most interesting insights are found.
So if we apply this same understanding to evaluating designs, then the next time we have a great idea and start to pull together a roundtable focus group, we might think twice. We might instead remember the Perils of Introspection and devise a way to sketch or prototype the idea and watch how people interact with it.
Your participants will give you much better data if you observe their interactions. Just don’t ask them why.
Story adapted from Malcolm Gladwell’s Blink. Maier’s 1931 paper: Reasoning in Humans. II. The Solution of a Problem and Its Appearance in Consciousness.
Posted from Ryan’s personal blog, Back of the Envelope and Big Ideas