Perhaps you’ve read about this experiment from the 1980s: Subjects were told, “Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.” Then they were asked to estimate which further statement was more probably true about Linda, that she “is a bank teller” or that she “is a bank teller and is active in the feminist movement.” Which do you think is more likely?
Daniel Kahneman, in his recent bestseller Thinking, Fast and Slow, explains to us that “the set of feminist bank tellers is wholly included in the set of bank tellers, as every feminist bank teller is a bank teller. Therefore the probability that Linda is a feminist bank teller must be lower than the probability of her being a bank teller.” And yet 85-90% of college undergrads said it was more probable that Linda was a feminist bank teller. Kahneman calls this the “conjunction fallacy.”
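The conjunction rule Kahneman invokes can be checked by brute counting: in any population, the people who are both bank tellers and feminists are a subset of the bank tellers, so the joint count can never exceed the marginal count. Here is a minimal sketch in Python over a simulated population; the base rates are invented purely for illustration, not taken from any data:

```python
import random

random.seed(0)

# Simulate a made-up population; both base rates are illustrative assumptions.
population = [
    {"bank_teller": random.random() < 0.05,
     "feminist": random.random() < 0.30}
    for _ in range(100_000)
]

tellers = sum(p["bank_teller"] for p in population)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in population)

# Every feminist bank teller is a bank teller, so the joint count
# can never exceed the single-condition count.
assert feminist_tellers <= tellers
print(f"P(teller) ~ {tellers / len(population):.3f}, "
      f"P(teller and feminist) ~ {feminist_tellers / len(population):.3f}")
```

Whatever numbers you plug in for the two base rates, the inequality holds; that is the whole of the logician's point.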
I confess that I’m one of those 10-15% who, when posed this kind of question, will automatically note that the set of bank tellers wholly includes the set of feminist bank tellers. I will also naturally assume that the experimenters are trying to trip me up by fooling my intuitions. This is, after all, what school teachers and college professors routinely do when making up tests. And I was always very good at taking tests. However, the fact that I know the logician’s answer to this question does not mean that I think it is the only correct answer. Or that those who don’t choose it are committing a “fallacy.”
Those who think it’s less likely that Linda is a bank teller than that she is a feminist bank teller are clearly making some assumptions that differ from Kahneman’s. And I think those differing assumptions probably have to do with trying to understand this question as a real-world scenario, not as Kahneman frames it, in a world of abstract logic. For instance, some people, when they hear this question, might think about it as if they were looking at a police line-up. They have some facts about the person they’re trying to locate in the line-up (that’s the original description of Linda), and then, about each person in the line-up, they have a very brief description, presumably giving the most salient available facts about that person. So, about one person in the line-up, we simply know that she’s a bank teller. About another, we know that she is a feminist bank teller. I think that, when the question is framed in this way, we would all choose the feminist bank teller as more likely to be Linda.
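One way to formalize this line-up framing is as a comparison of explanations rather than a conjunction: the question becomes which label best accounts for what we know about Linda. In the line-up, “bank teller” implicitly reads as “bank teller, with nothing else salient noted,” which is exactly why the subset logic stops applying. A sketch of that reading via Bayes’ rule, where every number below is an invented assumption for illustration:

```python
# Hypothetical likelihoods: how probable is Linda's description given each
# line-up label? In the line-up reading, "bank teller" means "bank teller
# with nothing else salient noted." All numbers are invented.
p_desc_given_teller = 0.05           # description fits a generic teller poorly
p_desc_given_feminist_teller = 0.60  # description fits a feminist teller well

# Assumed base rates for the two labels (also invented); the feminist-teller
# rate is smaller, as the subset logic demands.
p_teller = 0.05
p_feminist_teller = 0.015

# Unnormalized posterior weights (the numerators of Bayes' rule).
w_teller = p_desc_given_teller * p_teller
w_feminist_teller = p_desc_given_feminist_teller * p_feminist_teller

# Despite the smaller prior, the far better fit to the description can make
# "feminist bank teller" the better explanation of the evidence.
print(w_teller, w_feminist_teller)
```

Under these made-up numbers the feminist-bank-teller weight comes out larger, which is one way of saying the 85-90% are answering a different, and arguably more natural, question than the one the logician hears.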
We could go into a lot more detail examining real-world counterparts to this abstract probability question, but my point is this: I think the Linda experiment, along with many others reported by Kahneman, is less a test of whether people reason “correctly” and more a test of how they frame questions that are posed to them abstractly. Some people frame them in exactly the abstract, logical way that Kahneman regards as correct, and others frame them in ways more similar to how they would be encountered in real life, and give the answer that is more likely to be correct there. I have my suspicions that people who think in Kahneman’s abstract, strictly logical way are quite good at tests and get wonderful grades in school but that the folks who frame questions in real-world terms may have the advantage in the real world.
In any case, I think it should be better recognized that there’s a lot of room for interpretation when it comes to the results of experiments in psychology. Just because people don’t answer questions the way a logician would doesn’t mean their way of thinking is erroneous. In fact, it might work much better in real-world circumstances.
That’s one reason that I am a big fan of the work of Gerd Gigerenzer. I just learned from my friend Pete Hulme of the blog Everybody Means Something that Gigerenzer has a new book out: Risk Savvy: How to Make Good Decisions. My guess is that it’s every bit as good as his previous one, Gut Feelings: The Intelligence of the Unconscious. In any case, I was very heartened to read a review in The Guardian that contained this lovely passage:
[O]ne driving motivation behind Gigerenzer’s work is to show…that we are not idiots, chronically misled by our instincts. In fact, he argues, we would handle risk far better if we knew when to trust our guts more, and when to spurn expert advice in favour of simple rules of thumb.
“The error my dear colleagues make,” Gigerenzer says, is that they begin from the assumption that various “rational” approaches to decision-making must be the most effective ones. Then, when they discover that is not how people operate, they define that as making a mistake: “When they find that we judge differently, they blame us, instead of their models!”