THE ROLE OF COGNITIVE ERRORS IN THE DRUG POLICY DEBATE
Advocates of drug policy reform frequently find themselves frustrated by their inability to convince prohibition-minded individuals of the error of their ways. This difficulty is due in large part to two well-studied cognitive errors, 'illusory correlation' and 'belief perseveration'. Illusory correlation occurs when two features or characteristics of a situation (happen to) occur together (e.g. marijuana smoking and dropping out of school) without necessarily being causally related. Belief perseveration refers to the frequent tendency to resist the force of evidence contrary to one's beliefs. The major lesson to be drawn from an examination of these cognitive errors is that drug reform advocates should not waste their time arguing with confirmed prohibitionists. Rhetorical efforts should be reserved for the truly undecided. Research is needed to identify former prohibitionists who have changed their minds, and to study the reasons behind these conversions.
People often do not think clearly. So much is well known to everyone, including drug policy reform advocates who frequently confront the muddled thinking so often characteristic of prohibition advocates. What is less well known is that an entire body of scientific literature has accumulated concerning the 'cognitive errors' that lead to the development and maintenance of erroneous judgements and beliefs. Knowledge of this literature is vital to an understanding of how otherwise (seemingly) rational people can so persistently resist the force of evidence, and for developing strategies to increase one's persuasive power.
Among the 20 or so categories of cognitive error identified so far, two are of particular relevance to the drug policy debate: 'illusory correlation' and 'belief perseveration'. The joint action of these errors explains much of the observed intransigence of prohibitionist advocates in the face of evidence. Such intransigence is generally ascribed simply to ignorance or malevolence. But this is not the whole story.
Among the key processes underlying belief formation is our innate tendency to notice (apparent) correlations between two or more factors or phenomena. Thus, we may come to believe that cannabis smoking is correlated with dropping out of school because we observe (or learn about) some pot-smoking teenagers who drop out of school.
Once correlations between attributes or events are noticed, we often go on to develop causal explanations for these correlations. The conceptual difficulties inherent in developing causal explanations are themselves the subject of a vast literature and are eminently relevant to the present debate. However, we restrict our attention here (for the most part) to the process by which simple correlations are inferred, irrespective of any subsequent causal attributions.
As it turns out, accurate assessment of correlation is difficult enough, and 'illusory correlation' is a ubiquitous problem. Defined as 'the tendency to see two things as occurring together more often than they actually do' (Chapman and Chapman, 1982), illusory correlation was first studied in the early 1960s in the setting of word association tests, in which series of strategically designed pairs of words were briefly presented to test subjects (Chapman, 1967). Invariably, subjects would report that related words (e.g. lion, tiger) appeared together much more often than they actually did.
This work was later extended to real-world applications, including several studies concerning psychologists' interpretation of projective tests. For example, based on a few paranoid people who drew large eyes during development of the 'Draw a Person' test, psychologists believed for many years that drawing large eyes was correlated with (and perhaps caused by) paranoia (Chapman and Chapman, 1969).
Illusory correlation occurs frequently because, despite its apparent simplicity, the process of judging which things are inter-correlated is fraught with hazard. In general, people tend to make far more errors than correct judgements of correlation, especially when - as is often the case - their expectations, hopes and prior theories interfere with objective coding and processing of data.
Even absent such biases, however, the process of judging whether Attribute A (e.g. cannabis smoking) is 'really' correlated with Attribute B (e.g. dropping out of school) is a challenging one. Successful judgements require construction (if only subconsciously) of '2 x 2 tables' for each possible combination of attributes - the four cells corresponding to:
1. Attribute A present, Attribute B present;
2. Attribute A present, Attribute B absent;
3. Attribute A absent, Attribute B present;
4. Attribute A absent, Attribute B absent.
For example, data in Cell (1) might correspond to people who smoked cannabis (Attribute A) and later dropped out of school (Attribute B); data in Cell (4) would then correspond to people who did not smoke cannabis and who did not drop out of school. The major factor responsible for illusory correlation is the strong tendency to focus almost exclusively on the cases found in Cell (1) (present, present) in the above schema. Focusing on these cases, which are usually more visible and impressive than cases in the other three cells, is often worse than useless in terms of judging correlation. So too is considering even two or three of the four cells to be adequate - all four cells are essential.
For example, as discussed by Jennings et al. (1982), if asked to test the theory that red-haired individuals are hot-tempered, most people would attempt (simply) to recall the red-haired individuals they have known and try to remember how many of those individuals were hot-tempered - cells (1) and (2). More sophisticated 'intuitive psychologists' would attempt to recall the hot-tempered people they have known and determine how many of these had red hair. But it would occur to very few people that the proportion of even-tempered blondes and brunettes is essential to the task at hand.
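The point that all four cells matter can be made concrete with a little arithmetic. The sketch below (in Python, using purely illustrative numbers invented for this example, not data from any study) computes the standard phi coefficient of association for a 2 x 2 table. It shows that the very same impressive Cell (1) count is compatible with both zero correlation and a substantial one, depending entirely on the cells that 'Cell 1 myopia' ignores:

```python
import math

def phi(a, b, c, d):
    """Phi coefficient of association for a 2 x 2 table.

    a: Attribute A present, B present  (Cell 1)
    b: A present, B absent             (Cell 2)
    c: A absent, B present             (Cell 3)
    d: A absent, B absent              (Cell 4)
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Both tables below contain the same 30 'present, present' cases
# (e.g. 30 pot-smoking dropouts), yet they support opposite conclusions.

# Here the dropout rate is 75% whether or not Attribute A is present:
print(phi(30, 10, 60, 20))   # 0.0 - no association at all

# Same Cell (1), but now the non-smokers rarely drop out:
print(phi(30, 10, 20, 60))   # ~0.48 - a substantial association
```

Judging only from the 30 vivid cases in Cell (1), the two situations are indistinguishable; the difference lies wholly in the unremarkable people nobody thinks to count.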
Illusory correlation also underlies much of the prejudice manifested throughout the world. ('That guy's a thief! Say, isn't he Jewish?') By focusing on 'present, present' cases (e.g. cannabis users who drop out of school, Jewish thieves) and ignoring the other three categories of people (e.g. cannabis users who do not drop out of school, non-Jewish thieves), we get ourselves - and others - into all kinds of trouble.
Once beliefs are formed, the second major kind of cognitive error relevant to the drug policy debate comes into play: belief perseveration. As described by Ross and Anderson:
'It appears that beliefs - from relatively narrow personal impressions to broader social theories - are remarkably resilient in the face of empirical challenges that seem logically devastating. Two paradigms illustrate this resilience. The first involves the capacity of belief to survive and even be strengthened by new data, which, from a normative standpoint, should lead to the moderation of such beliefs. The second involves the survival of beliefs after their original evidential bases have been negated.' (1982)
Drug policy reform advocates will immediately recognise these phenomena in their opponents. Prohibition-minded individuals seem remarkably resistant to alterations in their belief when confronted with contrary evidence. (It would be well, of course, to search for the presence of this problem also in ourselves, but with [almost] all the evidence on our side, we can be 'relatively' assured that our beliefs are not persisting in spite of evidence.)
A study conducted by Lord et al. (1979) illustrates the kind of research used to analyse the problem of belief perseveration. These investigators identified people with clear views (one way or the other) concerning the effectiveness of capital punishment as a deterrent to crime. In a counterbalanced design, subjects were presented with purportedly authentic empirical studies which either supported or refuted their position. Subjects consistently rated the studies supporting their position as 'more convincing' and 'better conducted' than the studies opposing their beliefs.
'In fact, many individual subjects who had read both the results summary and the procedural details of the study that opposed their belief ultimately became more convinced of the correctness of that belief! No such effects occurred when the same results and procedures were read by subjects whose initial views were supported.'
An even more serious challenge to one's beliefs than the introduction of new evidence is the revelation that the original bases for the beliefs were completely spurious. Even under these circumstances, beliefs often persist. Several studies have been conducted to explore this phenomenon, among the most illuminating of which is a study by Anderson et al. (1980), in which subjects were informed that a functional relationship existed between how well fire-fighters perform in their job and their scores on a test of risk preference (i.e. risk avoiding versus risk seeking). Subjects were provided with scenarios in which the job performance of certain fire-fighters was presented along with their scores on the test. The investigators found that presenting even a single pair of cases (i.e. one successful and one unsuccessful fire-fighter with appropriately discrepant scores) was sufficient for subjects to develop beliefs about the functional relationship of performance and test scores. Moreover, these beliefs
'... survived the revelation that the cases in question had been totally fictitious - and the different subjects had, in fact, received opposite pairings of riskiness scores and job outcomes. Indeed, when comparisons were made between subjects who had been debriefed and those who had not been, it appeared that over 50% of the initial effect of the "case history" information remained after debriefing.' (Anderson et al., 1980)
Ross and Anderson explore a variety of cognitive mechanisms which might underlie the unwarranted persistence of our beliefs and social theories, including the following: biased search, recollection and assimilation of information; erroneous formation of causal explanations; behavioural confirmation; and 'self-fulfilling' hypotheses, among others (Ross and Anderson, 1982). All of these mechanisms have been well studied and described, and none is subject to ready correction. The collective power of these errors is formidable.
The authors conclude that 'attributional biases and other inferential shortcomings are apt not to be corrected but instead to be compounded by subsequent experience and deliberations'. This is a discouraging situation for those who wish to change the minds of others by reference to evidence.
Note that illusory correlation and belief perseveration are mutually reinforcing phenomena. Once an apparent correlation is observed, people are remarkably adept at developing theories about why the association was observed. In the case of cannabis smoking and dropping out of school, for example, the likely hypothesis would be that cannabis produces an 'amotivational syndrome' or in some way impairs one's ability to function in school. This hypothesis might seem reasonable to many people (especially given the influence of government propaganda), and it is therefore added to one's repertoire of beliefs.
The belief then perseverates despite the discrediting of its original evidential bases because, well, the hypothesis still seems reasonable.
Drug policy reform advocates must understand and appreciate the power of illusory correlation and belief perseveration if they are to tailor and focus their persuasive efforts to do the most good. What might this entail?
Most evidently, perhaps, the phenomenon of belief perseveration would tend to indicate that the goal of the debate is 'not' (primarily) to change the minds of prohibitionists, but rather to influence those who are truly undecided. Arguing with prohibitionists is often likely to be a waste of time.
On the other hand, beliefs sometimes do change. Ross and Anderson note that 'even if logical or empirical challenges have less impact than might be warranted by normative standards, they may still get the job done' (1982). A particularly effective method for changing people's beliefs, as noted by these investigators, is the presentation of 'vivid, concrete, first-hand experience' (1982). In the drug policy arena, such experiences might come, for example, through the revelation that a respected friend, family member, or colleague uses cannabis (or another drug), or when such a person - who is blameless - is arrested and jailed for use of drugs. Revelatory experiences often occur in certain settings of powerful emotional appeal.
In this regard, Ross and Anderson note that 'the effectiveness of groups and leaders that accomplish dramatic political or religious conversions offer inviting targets of future research' (1982). The extent to which such a proselytising approach works, or has worked, in the drug policy debate is largely unknown.
This question, and others, should be addressed as part of a study of the 'epidemiology of belief revision' with respect to drug policy. Surprisingly little (if anything) is known about how often people change their minds, one way or the other, on the subject of drug policy, let alone 'why' people change their minds. This subject would thus seem to be a potentially fruitful avenue for research.
One possible approach to such a study would be to identify people who previously held prohibitionist views but who have been persuaded by one thing or another to abandon those views. Identifying such people would not be easy, however. Recruiting people through Internet newsgroups is one possible approach, although such a sample would obviously be regarded as a convenience, rather than representative.
Another implication of the foregoing analysis of cognitive errors with respect to developing a strategy of argument in the drug policy debate is that drug policy reform advocates should cite the existence and nature of these errors as recurring themes in their arguments. Real-life examples to illustrate this common theme are not hard to come by, and indeed a critical part of this strategy would entail being alert for prohibitionist arguments based on 'Cell 1 myopia'. For example, in response to the claim that using cannabis leads children to drop out of school, one might ask 'What percentage of kids who smoke pot "don't" drop out of school? Are you sure this figure isn't even higher than those who never touch the stuff? And what proportion of kids who don't smoke pot drop out? You don't know? Well, without knowing the answers to these questions it is "impossible" to say anything about the relationship, if any, between smoking pot and dropping out!'
Or words to that effect. This line of attack (or defence) is likely to be particularly effective in cases, such as the one just alluded to, where empirical data are not readily available to support or refute a particular position. Where such facts do exist, bringing them to light will often be more effective than pointing out the cognitive errors inherent in the prohibitionists' arguments. Ideally, one would like to do battle on both fronts, but this will not always be feasible. In most cases, judgement will be required to determine which approach is most likely to change one's opponent's mind.
In summary, the role of cognitive errors in the drug policy debate implies that reform advocates should strictly limit the time and effort they devote to (usually vain) efforts to convince devoted prohibitionists that they are in error. A more productive strategy is to search for the truly undecided and focus one's attention on them.
Research is needed into how frequently prohibitionists reverse their positions, and what causes them to do so. In the meantime, pointing explicitly to the existence and role of cognitive errors in the drug policy debate is probably a worthwhile tactic, and certainly all cases of 'Cell 1 myopia' should be ruthlessly exposed and debunked.
David Hadorn, MD
Anderson CA, Lepper MR, Ross L (1980). The perseverance of social theories: the role of explanation in the persistence of discredited information. Journal of Personality and Social Psychology 39: 1037-1049.
Chapman LJ (1967). Illusory correlation in observational report. Journal of Verbal Learning and Verbal Behavior 6: 151-155.
Chapman LJ, Chapman JP (1969). Illusory correlation as an obstacle to the use of valid psychodiagnostic signs. Journal of Abnormal Psychology 74: 271-280.
Chapman LJ, Chapman JP (1982). Test results are what you think they are. In Kahneman, Slovic, Tversky (Eds), Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Jennings DL, Amabile TM, Ross L (1982). Informal covariation assessment: data-based versus theory-based judgements. In Kahneman, Slovic, Tversky (Eds), Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Lord C, Lepper MR, Ross L (1979). Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology 37: 2098-2110.
Ross L, Anderson CA (1982). Shortcomings in the attribution process: on the origins and maintenance of erroneous social assessments. In Kahneman, Slovic, Tversky (Eds), Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.