Well, I’m still obsessed with the “‘hot hand fallacy’ fallacy.” Are you?
As discussed previously, the classic “‘hot hand’ fallacy” studies purported to show that people are deluded when they perceive that basketball players and other athletes enjoy temporary “hot streaks” during which they display an above-average level of proficiency.
The premise of the studies was that ordinary people are prone to detect patterns and thus to confuse chance sequences of events (e.g., a consecutive string of successful dice rolls in craps) as evidence of some non-random process (e.g., a “hot streak,” in which a craps player can be expected to defy the odds for a specified period of time).
For sure, people are disposed to see signal in noise.
But the question is whether that cognitive bias truly accounts for the perception that athletes are on a “hot streak.”
The answer, according to an amazing paper by Joshua Miller & Adam Sanjurjo, is no.
Or in any case, they show that the purported proof of the “hot hand fallacy” itself reflects an alluring but false intuition about the conditional independence of binary random events.
The “test” the “hot hand fallacy” researchers applied to determine whether a string of successes indicates a genuine “hot hand”–as opposed to the illusion associated with our over-active pattern-detection imaginations–was to examine whether basketball players were more likely to hit shots after some specified string of “hits” than they were to hit shots after an equivalent string of misses.
If the success rates for shots following strings of “hits” were not “significantly” different from the success rates for shots following strings of “misses,” then one could infer that the probability of hitting a shot after either a string of hits or misses was not significantly different from the probability of hitting a shot regardless of the outcome of previous shots. Strings of successful shots being no longer than what we should expect by chance in a random binary process, the “hot hand” could be dismissed as a product of our vulnerability to see patterns where they ain’t, the researchers famously concluded.
This analytic strategy itself reflects a cognitive bias– an understanding about the relationship of independent events that is intuitively appealing but in fact incorrect.
Basically, the mistake — which for sure should now be called the “‘hot hand fallacy’ fallacy” — is to treat the conditional probability of success following a string of successes in a past sequence of outcomes as if it were the same as the conditional probability of success following a string of successes in a future or ongoing sequence. In the latter situation, the occurrence of independent events generated by a random process is (by definition) unconstrained by the past. But in the former situation — where one is examining a past sequence of such events — that’s not so.
In the completed past sequence, there is a fixed number of each outcome. If we are talking about successful shots by a basketball player, then in a season’s worth of shots, he or she will have made a specifiable number of “hits” and “misses.”
Accordingly, if we examine the sequence of shots after the fact, the probability the next shot in the sequence will be a “hit” will be lower immediately following a specified number of “hits” for the simple reason that the proportion of “hits” in the remainder of the sequence will necessarily be lower than it was before the previous successful shot or shots.
By the same token, if we observe a string of “misses,” the proportion of “misses” in the remainder will be lower than it had been before the first shot in the string. As a result, following a string of “misses,” the probability that the next shot in the sequence will turn out to be a “hit” has gone up.
Thus, it is wrong to expect that, on average, when we examine a past sequence of random binary outcomes, P(success|specified string of successes) will be equal to P(success|specified string of failures). Instead, in that situation, we should expect P(success|specified string of successes) to be less than P(success|specified string of failures).
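You can verify this inequality by brute force even for very short sequences. Here is a minimal Python sketch (my own illustration, not from M&S’s paper; the function name is mine): it enumerates every equally likely sequence of fair-coin flips and averages, across sequences, the proportion of flips immediately following a “heads” that are themselves “heads.”

```python
from itertools import product

def avg_prop_h_after_h(n=3):
    """Average, over all equally likely H/T sequences of length n, of the
    proportion of flips immediately following an H that are H.
    Sequences in which no flip follows an H are dropped (undefined)."""
    props = []
    for seq in product("HT", repeat=n):
        follows_h = [seq[i] for i in range(1, n) if seq[i - 1] == "H"]
        if follows_h:  # the proportion is undefined if no flip follows an H
            props.append(follows_h.count("H") / len(follows_h))
    return sum(props) / len(props)

print(avg_prop_h_after_h(3))  # 5/12 ≈ 0.4167, not 1/2
```

Even with just three flips, the expected proportion of “heads” following a “heads” is 5/12, not 1/2 — the selection effect described above in miniature.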
That means the original finding of the “hot hand fallacy” researchers that P(success|specified string of successes) = P(success|specified string of failures) in their samples of basketball player performances wasn’t evidence that the “hot hand” perception is an illusion. Because chance alone would make the first quantity smaller than the second, observing that P(success|specified string of successes) = P(success|specified string of failures) within an adequate sample of sequences means we are observing a higher success rate following a string of successes than we would expect to see by chance.
In other words, the data reported by the original “hot hand fallacy” studies supported the inference that there was a hot-hand effect after all!
So goes M&S’s extremely compelling proof, which I discussed in a previous blog. The M&S paper was featured in Andrew Gelman’s Statistical Modeling, Causal Inference blog, where the comment thread quickly frayed and broke, resulting in a state of total mayhem and bedlam!
How did the “hot hand fallacy” researchers make this error? Why did it go undetected for 30 yrs, during which their studies have been celebrated as classics in the study of “bounded rationality”? Why do so many smart people find it so hard now to accept that those studies themselves rest on a mistaken understanding of the logical properties of random processes?
The answer I’d give for all of these questions is the priority of affective perception to logical inference.
Basically, we see valid inferences before we apprehend, through ratiocination, the logical cogency of the inference.
What makes people who are good at drawing valid inferences good at that is that they more quickly and reliably perceive or feel the right answer — or feel the wrongness of a seemingly correct but wrong one — than those less adept at such inferences.
This is an implication of a conception of dual process reasoning that, in contrast to the dominant “System 1/System 2” one, sees unconscious reasoning and conscious effortful reasoning as integrated and reciprocal rather than discrete and hierarchical.
The “discrete & hierarchical” position imagines that people immediately form a heuristic response (“System 1”) and then, if they are good reasoners, use conscious, effortful processing (“System 2”) to “check” and if necessary revise that judgment.
The “integrated and reciprocal” position, in contrast, says that good reasoners are more likely to experience an unconscious feeling of the incorrectness of a wrong answer, and of the need for effortful processing to determine the right answer, than are people who are poor reasoners.
The reason the former are more likely to feel that right answers are right and wrong answers wrong is that they have used their proficiency in conscious, effortful information processing to train their intuitions to alert them to the features of a problem that require the deployment of conscious, effortful processing.
Now what makes the fallacy inherent in the “‘hot hand fallacy’ fallacy” so hard to detect, I surmise, is that those who’ve acquired reliable feelings about the wrongness of treating independent random events as dependent (the most conspicuous instance of this is the “gambler’s fallacy”) will in fact have trained their intuitions to recognize as right the corrective method of analyzing such events as genuinely independent.
If the “hot hand” perception is an illusion, then it definitely stems from mistaking an independent random process for one that is generating systematically interdependent results.
So fix it — by applying a test that treats those same events as independent!
That’s the intuition that the “hot hand fallacy” researchers had, and that 1000’s & 1000’s of other smart people have shared in celebrating their studies for three decades — but it’s wrong wrong wrong wrong wrong!!!!!
But because it feels right right right right right to those who’ve trained their intuitions to avoid heuristic biases involving the treatment of independent events as interdependent, it is super hard for them to accept that the method reflected in the “hot hand fallacy” studies is indeed incorrect.
So how does one fix that problem?
Well, no amount of logical argument will work! One must simply see that the right result is right first; only then will one be open to working out the logic that supports what one is seeing.
And at that point, one has initiated the process that will eventually (probably not in too long a time!) recalibrate one’s reciprocal and integrated dual-process reasoning apparatus so as to purge it of the heuristic bias that concealed the “‘hot hand fallacy’ fallacy” from view for so long!
BTW, this is an account that draws on the brilliant exposition of the “integrated and reciprocal” conception of dual-process reasoning offered by Howard Margolis.
For Margolis, reason giving is not what it appears: a recitation of the logical operations that make an inference valid.
Rather it is a process of engaging another reasoner’s affective perception, so that he or she sees why a result is correct, at which point the “reason why” can be conjured through conscious processing. (The “Legal Realist” scholar Karl Llewellyn gave the same account of legal arguments, btw.)
To me, the way in which the “‘hot hand fallacy’ fallacy” fits Margolis’s account — and also Ellen Peters’s account of the sorts of heuristic biases that only those high in Numeracy are likely to be vulnerable to — is what makes the M&S paper so darn compelling!
If you, like me and 10^6s of others, are still having trouble believing that the analytic strategy of the original “hot hand” studies was wrong, here are some gadgets that I hope will enable you, if you play with them, to see that M&S are in fact right. Because once you see that, you’ll have vanquished the intuition that bars the path to your conscious, logical apprehension of why they are right. At which point, the rewiring of your brain to assimilate M&S’s insight, and avoid the “‘hot hand fallacy’ fallacy” can begin!
Indeed, in my last post, I offered an argument that was in the nature of helping you to imagine or see why the “‘hot hand fallacy’ fallacy” is wrong.
But here–available exclusively to the 14 billion regular subscribers to this blog (don’t share it w/ nonsubscribers; make them bear the cost of not being as smart as you are about how to use your spare time!)– are a couple of cool gadgets that can help you see the point if you haven’t already.
Gadget 1 is the “Miller-Sanjurjo Machine” (MSM). MSM is an Excel sheet that randomly generates a sequence of 100 coin tosses. It also keeps track of how each successive toss changes the probability that the next toss in the sequence will be a “heads.” By examining how that probability goes up & down in relation to strings of “heads” and “tails,” one can see why it is wrong to simply expect P(H|any specified string of Hs) – P(H|any specified string of Ts) to be zero.
MSM also keeps track of how many times “heads” occurs after three previous “heads” and how many times “heads” occurs after three previous “tails.” If you keep doing tosses, you’ll see that most of the time P(H|HHH)-P(H|TTT) < 0.
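If you’d rather tinker in code than in a spreadsheet, here is a rough Python stand-in for the MSM’s bookkeeping (my own sketch; the function name and setup are mine, not part of the actual Excel sheet): it flips 100 coins and tabulates how often “heads” follows HHH and how often it follows TTT.

```python
import random

def cond_freq(seq, pattern):
    """Return (hits, opportunities): how often the flip immediately
    following an occurrence of `pattern` in `seq` is 'H'."""
    k = len(pattern)
    opportunities = hits = 0
    for i in range(k, len(seq)):
        if seq[i - k:i] == pattern:
            opportunities += 1
            hits += seq[i] == "H"
    return hits, opportunities

# One 100-toss sequence, like a single run of the MSM spreadsheet.
seq = "".join(random.choice("HT") for _ in range(100))
for pat in ("HHH", "TTT"):
    h, n = cond_freq(seq, pat)
    print(f"P(H|{pat}) ≈ {h}/{n}" if n else f"{pat} never occurred")
```

Rerun it a bunch of times and tally how often the first frequency comes out below the second.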
Or you’ll likely think you see that.
Because you have appropriately trained yourself to feel something isn’t quite right about that way of proceeding, you’ll very sensibly wonder if what you are seeing is real or just a reflection of the tendency of you as a human (assuming you are; apologies to our robot, animal, and space alien readers) to see pattern signals in noise.
Hence, Gadget 2: the “Miller-Sanjurjo Turing Machine” (MSTM)!
MSTM is not really a “Turing machine” (& I’m conflating “Turing machine” with “Turing test”)– but who cares? It’s a cool name for what is actually just a simple statistical simulation that does 1,000 times what its baby sister MSM does only once — that is, flip 100 coins and tabulate P(H|HHH) & P(H|TTT).
MSTM then reports the average difference between the two. That way you can see that it is indeed true that P(H|HHH) – P(H|TTT) should be expected to be < 0.
Indeed, you can see exactly how much less than 0 we should expect P(H|HHH) – P(H|TTT) to be: about 8%. That amount is the bias that was built into the original “hot hand” studies against finding a “hot hand.”
(Actually, as M&S explain, the size of the bias could be more or less than that depending on the length of the sequences of shots one includes in the sample and the number of previous “hits” one treats as the threshold for a potential “hot streak”.)
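For those without Stata, a quick Monte Carlo in Python (my own sketch, not the MSTM code itself; function names and the seed are my assumptions) reproduces the point: averaging P(H|HHH) – P(H|TTT) over many 100-flip sequences lands in negative territory, around the 8-point bias described above.

```python
import random

def streak_freq(seq, pattern):
    """Proportion of flips immediately following `pattern` that are H,
    or None if the pattern never occurs in `seq`."""
    k = len(pattern)
    after = [seq[i] for i in range(k, len(seq)) if seq[i - k:i] == pattern]
    return after.count("H") / len(after) if after else None

def mean_bias(n_seqs=10_000, n_flips=100, seed=1):
    """Average P(H|HHH) - P(H|TTT) across many fair-coin sequences,
    dropping sequences where either frequency is undefined."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_seqs):
        seq = "".join(rng.choice("HT") for _ in range(n_flips))
        p_hhh = streak_freq(seq, "HHH")
        p_ttt = streak_freq(seq, "TTT")
        if p_hhh is not None and p_ttt is not None:
            diffs.append(p_hhh - p_ttt)
    return sum(diffs) / len(diffs)

print(mean_bias())  # typically around -0.08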
MSTM is written to operate in Stata. But if you don’t have Stata, you can look at the code (opening the file as a .txt document) & likely get how it works & come up with an equivalent program to run on some other application.
Have fun seeing, ratiocinating, and rewiring [all in that order!] your affective perception of valid inferences!