
Bibliographic Essay
Books - Heroin and Politicians
Written by David Bellis   
Wednesday, 07 November 2012 00:00


A thoroughgoing review of the literature on heroinism reveals barely one or two investigations by political scientists of government-supported addiction control programs. Studies by sociologists, economists, criminologists, medical researchers and others abound, but many unexplored issues of heroin policy remain to be categorized and investigated according to current approaches in political science. In fact, political science has neglected heroin addiction and drug abuse in general.


In recent years public policy (roughly, whatever governments choose to do or not to do, and what difference it makes) has been treated analytically both as a dependent variable and, less often, as an independent variable. Two broad schools of policy research have distinguished themselves primarily on the basis of which strategy is employed: the "policymaking" school and the "policy impact" school.

The Policymaking School and Treatment Policy Formulation

Emmette S. Redford admonished each political scientist in "Reflections on a Discipline," The American Political Science Review 55: 761-72 to ". . . grapple with a policy problem." In that vein, many political scientists view public policy as the major dependent variable to explain in political life. In their view, the task of professional political science is to find and explain the independent and intervening variables that account for policy differences. A number of researchers have tried to describe policy and explain how it is made at both national and subnational levels. Three ambitious attempts are Richard Dawson and James Robinson, "Inter-Party Competition, Economic Variables, and Welfare Policies in the American States," Journal of Politics 25: 266-79; Lewis Froman, Jr., People and Politics: An Analysis of the American Political System (Englewood Cliffs, N.J.: Prentice-Hall, 1962); and Richard Hofferbert, The Study of Public Policy (Indianapolis, Ind.: Bobbs-Merrill, 1974).

Indeed, most policy research since World War II has sought to explain the processes of policy formulation—how public choices are made—rather than policy impacts or outcomes. These "process" studies have sought to understand and explain why governments choose some policies over others. Policymaking is viewed as a process of choosing.

Robert A. Dahl suggests that a policy decision is a set of actions related to and including the choice of one alternative over another. This policy-making approach taken in his article, "The Analysis of Influence in Local Communities," in Social Science and Community Action, edited by Charles Adrian (East Lansing, Mich.: Michigan State University Press, 1960), takes the decision and the events surrounding it as the basic and stable unit of analysis fundamental to all political life.

Among the leading proponents of the policymaking school is Harold D. Lasswell. In The Decision Process: Seven Categories of Functional Analysis (College Park, Md.: University of Maryland, Bureau of Governmental Research, 1956), he laid a foundation for analyzing the policymaking process. To him, decisionmaking consists of a series of phases, including intelligence, recommending, prescribing, invoking, policy application and finally, policy appraisal. Note that Lasswell's steps in the act of decision-making deal mostly with policymaking rather than policy evaluation. He pointed out that most political science publications deal with ". . . the assessment of . . . specific factors that influence the outcome of official acts." (p. 25)

Richard C. Snyder's "A Decision-Making Approach to the Study of Politics," in Approaches to the Study of Politics, edited by Roland Young (Evanston, Ill.: Northwestern University Press, 1958), like Lasswell's work, examines the process by which government decisions are made. Snyder broadly classifies this process as one involving a sequence of: (1) predecisional activities, (2) choice, and (3) implementation. Similarly, Austin Ranney's book Political Science and Public Policy (Chicago: Markham Publishing Company, 1968) focuses on policy as the dependent variable to be explained by political science. Ranney's definition of the policymaking process in Political Science and Public Policy does not speak to the question of policy impact; it ends with ". . . an implementation of intent." (p. 7)

Continuing with the policymaking approach, Charles O. Jones's book, An Introduction to the Study of Public Policy (Belmont, Calif.: Wadsworth Publishing Company, 1970) takes the processes of public policymaking as the independent variables and policy itself as the dependent variable to be explained. Although Thomas Dye urges ". . . an evaluation of the impact of public policy on society . . ." in his work Understanding Public Policy, 3rd ed. (Englewood Cliffs, N.J.: Prentice-Hall, 1978) he, too, concentrates primarily on the analytic schemata used to make sense of the policymaking process.

Richard Hofferbert's The Study of Public Policy (Indianapolis, Ind.: Bobbs-Merrill, 1974) is another invitation to a full-course banquet of public policy contents and impacts where only stale process appetizers are served. Hofferbert writes, "To focus upon policy as the subject of study is to focus on the product side of the political equation." (p. 7) He says, ". . . we are primarily interested in what comes out of the process." (p. 7) It turns out that his real interest is in what comes out (policy) as effect, not cause.

The title of James E. Anderson's book accurately reflects its content: Public Policy-Making (New York: Praeger Publishers, 1975). This is another book about policymaking, not a book about the study of policy impacts or policy evaluation. Like Dye, Anderson is aware of the opportunity to analyze public policy as cause of changes in the environment. He acknowledges that ". . . systematic evaluation of policy . . . has clearly become a more widespread and potentially significant part of the policy process." (p. 153) However, Anderson is all but mesmerized by the ". . . number of barriers or obstacles that may create problems for the evaluation of policy." (p. 138) These barriers persuade him that "It is, of course, often impossible to measure quantitatively the impact of public policies, especially social policies, with any real precision." (p. 138)

Ira Sharkansky's and Donald Van Meter's Policy and Politics in American Governments (New York: McGraw-Hill Book Company, 1975) is yet another public policy analysis text with a process "fix." Although the authors adopt the Eastonian systems framework of inputs, conversion, outputs and outcomes (David Easton, "An Approach to the Analysis of Political Systems," World Politics 9: 383-400; and Easton's A Framework for Political Analysis, Englewood Cliffs, N.J.: Prentice-Hall, 1965), Sharkansky and Van Meter fail to encourage analysis of policy outputs as independent variables.

Perhaps political scientists, chasing the holy grail of "value-free" research, are simply afraid to analyze public policy as cause for fear of falling into advocacy of one policy over another. The opportunity to investigate public policy as an independent variable is outweighed by the risk they perceive in "espousal," however informed. Continued preoccupation with the policymaking process is the resultant "fallback" position in most of the policy analysis literature.

Unfortunately many policy researchers are ivory-tower residents and lack the real-world experience necessary to tell when government policies and programs are achieving their overt goals, much less their covert ones. How many political scientists, for example, are black and in a position to analyze government-sponsored programs to aid blacks from the vantage point of a consumer? Or how many are ex-convicts who go on to analyze correctional programs and policies? How many were street heroin addicts who subsequently research policy and programs in this arena? Granted, we cannot experience everything, but most policy researchers do not have the "Ph.D. in streets" necessary to acquire a more accurate view from the bottom.


In accordance with current academic conventions, Part I of this book, "Formulating Heroin Control Policy," focuses on recent addiction control policy formulation at the federal level, emphasizing the interplay among key congressional and executive figures, scientific elites, "opinion leaders," executive branch agencies, and the rapid spread of heroin in the 1960s. This approach takes addiction treatment policy as the dependent variable. The goal is to locate systematically the generalizable and predictable factors that influence the decision-making process in the area of drug law enforcement, heroin addiction treatment and the rehabilitation of addicts. The question is: How do policymakers decide when they choose alternative addiction control strategies? Do they rely on professional and technological advice more than political and ideological suasion? Further, if they do utilize recommendations of scientists, what is the quality of scientific research in the area of addiction treatment program efficacy?

The Policy Impact School and Treatment Program Evaluation

Does political science stand to gain from the study of public policy as cause of changes in the environment? The policymaking process is undeniably important; the decisional process itself directly shapes program results. Nevertheless, a narrow focus on policy as the dependent variable neglects the wider interests and forces that might exercise a crucial, more lasting influence. The emphasis on input factors closely related to the official making of decisions often leads scholars to underplay or ignore completely policy impact. Analysis of the actual effects of policies and of their repercussions on future policy options is too often left to news media pundits.

As scholars, political scientists should therefore concern themselves more with the quality of information that supports policy decisions. In planning for policy it is better to argue from reliable data than from mere conjecture. Planning would want to take into account the likely outcomes of policy decisions—What has actually happened out there in the real world when certain programs have been implemented? Were the desired results achieved? If not, why?

Politics, most certainly, does not end with the enunciation and implementation of official policies and programs (Easton's conversion and outputs). Policies have impacts, sometimes intended, sometimes unintended; they hit real people in the real world. Therefore, Part II of this book, "Heroin Addiction Treatment and Its Outcome," examines the impact of addiction treatment policies and programs. How much difference is there between the goals of addiction treatment policy and its actual impact on heroin addicts and other features of the environment, such as crime reduction?

Robert Lineberry and Ira Sharkansky in Urban Politics and Public Policy (New York: Harper and Row, 1971) recognize the importance of impact evaluation: "There may be no question for the policy-makers that is more important than policy impact." Professor Easton's A Framework for Political Analysis draws a distinction between "outputs" (roughly, what governments do) and the "outcomes" of these outputs (roughly, what consequences follow from the outputs—pp. 351-52). Sharkansky's Policy Analysis in Political Science (Chicago: Markham Publishing Company, 1970) also distinguishes between public policy, policy outputs and policy impacts (p. 63).

Thus, as Lineberry and Sharkansky note in Urban Politics and Public Policy, any public policy can be analyzed as an independent variable in terms of its impact on the community and on subcommunity units (pp. 10-11). The policy impact school studies the effects that government actions have upon the populations they are designed to impact. Some of these effects are encompassed by the terms "outputs," "outcomes," "spillover effects," "impacts" or "feedbacks" of policy.

Feedback from evaluative research on policy and programs is relatively new, but gaining in importance. See, for example, Donald Campbell, "Reforms as Experiments," The American Psychologist 24: 409-29; Carol Weiss, Evaluation Research: Methods of Assessing Program Effectiveness (Englewood Cliffs, N.J.: Prentice-Hall, 1972); Donald Campbell, "The Social Scientist as Methodological Servant of the Experimenting Society," Policy Studies Journal 2: 72-78; Daniel Glaser, Routinizing Evaluation: Getting Feedback on Effectiveness of Crime and Delinquency Programs (Rockville, Md.: National Institute of Mental Health, Center for Studies of Crime and Delinquency, U.S. Department of HEW, U.S. Government Printing Office, 1973); and Carol Weiss, "Policy Research in the University: Practical Aid or Academic Exercise?" Policy Studies Journal 4: 224-28.

Suffice it to say that more and more government decisionmakers at all levels are coming to depend on evaluative research reports when choosing among competing courses of action to combat social ills. The United States Congress, for example, is currently being guided in heroin control policymaking by a General Accounting Office evaluation report entitled "Action Needed to Improve Management and Effectiveness of Drug Abuse Treatment" (Washington, D.C.: Comptroller General of the United States, U.S. Government Printing Office, April 1980).

Until policy impacts are studied with the same zeal and skill that policy processes have attracted, political science may contribute little to the evaluation of existing policies or the consideration of possible alternatives. In this newer tradition of policy impact analysis, Part II of this book investigates the impacts of various narcotic addiction treatment modalities implemented under federal funding. This research is necessary to ascertain whether or not heroin addiction treatment policies and programs have achieved their desired goals or have been responsible for trends that make the realization of formal goals possible.

Based on such evaluation, policymakers might decide to discontinue treatment policies and programs that have had little impact. The purpose of the evaluative research undertaken in Part II is to measure the comparative effects of addiction treatment policies and programs against the goals they set out to accomplish as a means of contributing to subsequent decisionmaking about the programs and improving future programming. Presumably, the most prominent objective of any heroin addiction treatment program is somehow to change its clients so as to eliminate the reason they came to the agency in the first place. Whether such changes are achieved is the question addressed in Part II: Are addiction treatment programs helping addicts become abstinent? Turning to the extant literature, this question becomes very difficult to answer.


The overall quality of scientific research on addiction treatment impact is poor. Serious methodological mistakes beset most studies, making it difficult to obtain objective and reliable data on the relationship of treatment to outcome. Most studies are obscure and vague rather than clear. They abound with unsystematic, unverified and unspecified judgments on program efficacy. On close review this literature reveals many impact investigations that are narcissistic; they are palpably self-serving measures of policy/program "success" intended to defuse criticism and harness government funding. Even the most respected researchers in this field fail to address the weaknesses of their research designs. A good example is Avram Goldstein, Ralph Hansteen and William Horns, "Control of Methadone Dosage by Patients," Journal of the American Medical Association 234: 734, who rigged their research design to try to prove that methadone maintenance clients actually prefer lower dosages to higher ones.


The difficulties in devising controlled evaluations on addict populations are easy to imagine. Street "hypes" are highly mobile, sometimes deceptive, nonconforming and often an uncooperative group upon whom to conduct research. Nobody even knows how many addicts there are. L. G. Richards and E. E. Carroll, in "Illicit Drug Use and Addiction in the United States," Public Health Reports 85: 1035-41, estimated 65,000 American heroin addicts in 1965. David E. Smith and David J. Bentel, in Drug Abuse Information Project: Fourth Annual Report to the Legislature (Sacramento, Calif.: Drug Abuse Information Project, 1970) estimated 500,000 heroin addicts by 1970 (p. 5). The estimate of heroin addicts apparently dropped to 450,000 in 1978, according to the General Accounting Office's "Action Needed to Improve Management and Effectiveness of Drug Abuse Treatment" (Washington, D.C.: Comptroller General of the United States, U.S. Government Printing Office, April 1980, p. 2). This same report estimated there were 550,000 addicts only two years later in 1980. Given the difficulties of even counting addicts, the best the literature can offer as to incidence and prevalence are informed "guestimates."

Beyond the difficulty in counting addicts, treatment programs themselves discourage good research. Given the nature of treatment populations, programs have their biggest job in handling the mechanics of client intake, treatment and retention; few programs are geared for long-term scientific evaluations, says Stanley Einstein in his "Methadone Treatment: Views of Program Directors," in the Fifth National Conference on Methadone Maintenance: Proceedings, 1973, edited by the National Association for the Prevention of Addiction to Narcotics (New York: National Association for the Prevention of Addiction to Narcotics, 1973), pp. 718-22. In a word, program directors could not care less about research. The costs for this type of research are often prohibitive within the usual clinic budget. Further, appropriate staffing and good evaluative research are almost never available within purely clinical treatment teams. One result is that most evaluative research reports on addiction treatment display the following methodological weaknesses.


Most studies do not construct rigorous experimental research designs to determine treatment program efficacy. A classic example is Carl Chambers and James Inciardi, "Three Years After the Split," in Developments in the Field of Drug Abuse: Proceedings of the National Drug Abuse Conference-1974, edited by Edward Senay, Vernon Shorty, and Harold Alksne (Cambridge, Mass.: Schenkman Publishing Company, 1974). Such studies employ nonexperimental or quasi-experimental methodologies with no matched controls or reference groups for comparison. The result is that typical outcome studies conveniently "find" program impact or positive treatment effects without critically attempting to exclude competing research hypotheses.

The possibilities for other explanatory variables in the area of client outcome are so numerous and the number of potentially relevant factors so large that exhaustive explanation of outcome is really impossible. In the end, it may be scientifically impossible to isolate the specific program, client and situational variables that "cause" success or failure during and after program exposure. Maybe the best explanation yet is that some addicts simply "grow up"; they get sick and tired of being sick and tired. See, for example, Charles Winick, "Maturing Out of Narcotic Addiction," United Nations Bulletin on Narcotics 14: 1-7; and M. Snow, "Maturing Out of Narcotic Addiction in New York City," International Journal of the Addictions 8: 921-38.

This problem is handled in most evaluation studies by confining explanations of client outcome to factors judged by the investigators to be most significant, including the impact of treatment itself on client behavior or standard sociodemographic classifications of clients according to sex, age, education and race/ethnic group. Such is the case with hundreds of studies. Typical among them are Thomas Henchy, Byron Erickson and Jesse Paez, "The Relationship Between Age and/or Negative Experiences and Success on a Methadone Maintenance Program," International Journal of the Addictions 9: 221-27; Vincent P. Dole and Marie E. Nyswander, "A Medical Treatment for Diacetylmorphine ('Heroin') Addiction," Journal of the American Medical Association 193: 646-50; and William Aron and Douglas Daily, "Short- and Long-Term Therapeutic Communities: A Follow-Up and Cost Effectiveness Comparison," International Journal of the Addictions 9: 619-36.

While other factors—like love—may be related to treatment outcome, they are often difficult to measure and are therefore frequently omitted from research designs. These factors include developmental history, lifestyle, drug and alcohol abuse history, criminality, work record, family relations, physical health and psychiatric history. Little is offered in most studies on these competing independent variables that influence client behavior. In other words, an extremely complex event like positive or negative treatment outcome is explained by selecting what are considered to be the most significant variables; other causal factors are neglected or ignored.

Most published studies also fail to take into account "system effects." For example, in studies that do employ an experimental research design, do the "treated" groups perform better along selected behavioral criteria because of the enthusiasm of staff and the attention lavished upon them (Hawthorne Effect)? Almost all the work of Vincent P. Dole and Marie E. Nyswander on their methadone maintenance experiments would seem to be open to this criticism. Further, many clients really "shape up" when the "feds" drop around for an on-site program evaluation. Or do some treated groups perform better because staff look the other way when they fail? Have police and other elements of the criminal justice system tended to look the other way if narcotics law violators are enrolled in a treatment program like methadone maintenance? For these possibilities, consider Wayne R. LaFave, Arrest: The Decision to Take a Suspect into Custody (Boston: Little, Brown and Company, 1964); James Q. Wilson, Varieties of Police Behavior: The Management of Law and Order in Eight Communities (New York: Atheneum, 1971), chapter 4; Donald E. Newman, Conviction: The Determination of Guilt or Innocence Without Trial (Boston: Little, Brown and Company, 1966), pp. 60-79; Frank Miller, Prosecution: The Decision to Charge a Suspect With a Crime (Boston: Little, Brown and Company, 1969), pp. 11-27; David W. Neubauer, Criminal Justice in Middle America (Morristown, N.J.: General Learning Press, 1974), pp. 117-18; and G. Hayim, "Changes in the Behavior of Addicts: A One-Year Follow-Up Study of Methadone Treatment" (mimeographed, 1972).


The literature is weak in the area of data interpretation and analysis. It is questionable, for example, whether the data presented in most studies are valid. We know, for instance, that client self-reported follow-up information and urinalysis data are notoriously unreliable. For example, Emile J. Pin, John M. Martin, and John F. Walsh interviewed program dropouts without corroborative performance data in their paper, "A Follow-Up Study of 300 Ex-Clients of a Drug-Free Narcotic Treatment Program in New York City," American Journal of Drug and Alcohol Abuse 3: 403-7. So too with George De Leon, Sherry Holland, and Mitchell Rosenthal in "Phoenix House: Criminal Activities of Dropouts," Journal of the American Medical Association 222: 636. On the serious pitfalls of the reliability of urinalysis data, see Virginia Lewis et al., "Nalline and Urine Tests in Narcotics Detection: A Critical Overview," International Journal of the Addictions 8: 169; and David Sohn, "Analysis for Drugs of Abuse: The Validity of Reported Results in Relation to Performance Testing," International Journal of the Addictions 8: 72. Finally, on the general unreliability of client self-reported data, see Thomas J. Cox and Bill Longwell, "Reliability of Interview Data Concerning Current Heroin Use from Heroin Addicts on Methadone," International Journal of the Addictions 9: 161-65.

Further, intake or "baseline" information drawn from addicts entering treatment cannot really be trusted. For example, a client's self-reported degree of heroin addiction is often unreliable, with frequent underplaying or exaggerating of its extent and duration. Nor do most investigations adequately explain different types of validity, how they are measured, the reliability of measures as a psychometric property and their methods of computing and interpreting reliability coefficients. The methodologies of most studies, therefore, are obscure.


Further, conceptual variables are not often clearly linked to operationalizations. For example, measures of "success" and client "failure" in follow-up studies do not all measure the same thing; operational definitions of success and failure differ somewhat from study to study, making it difficult to compare research reports in order to verify conclusions. Frances Gearing, for example, in "Successes and Failures in Methadone Maintenance Treatment of Heroin Addiction in New York City," U.S. Public Health Service Publication #2172 (Washington, D.C.: U.S. Government Printing Office, 1971) looked at alcohol and barbiturate use among discharged methadone maintenance patients but failed to examine heroin use, the reason they entered treatment in the first place. Success criteria in client follow-up studies of residential programs vary. Drug use, alcohol use, criminality, social functioning of some kind and health are areas in which client status on follow-up is consistently examined. Closer scrutiny, however, reveals that while the categories are similar, the definitions of these outcome measures vary wildly. Some studies examine use of licit and illicit drugs, demanding abstinence from both as the criterion of success. Others look only at illicit drug use after discharge. Compare, for example, De Leon et al., "Phoenix House"; Chambers and Inciardi, "Three Years After the Split"; and Walter Collier and Yasser Hijazi, "A Follow-Up Study of Former Residents of a Therapeutic Community," International Journal of the Addictions 9: 805-26. All of this makes outcome studies difficult to verify. A scientific conclusion is verified when it has been checked or tested by other investigators (replication) and when they all arrive at the same conclusion. The curative powers of addiction treatment modalities are difficult to verify in this manner, since their operationalizations differ so widely.

Further, in the impact literature types of outcome data presented vary widely. Actual measures used include drug abuse, illegal activities, employment, program retention, psychophysiological functioning, mortality and morbidity rates, personality and social functioning. Some studies like Dole and Nyswander's "A Medical Treatment for Addiction" use only during-treatment data; others like Collier and Hijazi, "A Follow-Up Study of Former Residents of a Therapeutic Community," rely solely on posttreatment outcome along certain selected variables. Additionally, periods of follow-up vary from study to study, ranging from just a few months to several years. George Vaillant quite correctly recommends follow-up after five years in "A 12-Year Follow-Up of New York Narcotic Addicts: III. Some Social and Psychiatric Characteristics," Archives of General Psychiatry 15: 599-609. Moreover, a number of studies compare incomparable periods of time; they juxtapose baseline addiction histories of from two to twenty years with data derived from only a few months or a few years of treatment.


Most studies give improper attention to research population selection factors, such as limiting analysis to clients retained in treatment while excluding all program dropouts from the study group. Collier and Hijazi, in "A Follow-Up Study of Former Residents of a Therapeutic Community," make this mistake. So do Pin et al. in "A Follow-Up Study of 300 Ex-Clients of a Drug-Free Narcotic Treatment Program in New York City"; and Chambers and Inciardi, in "Three Years After the Split." Virtually all of Dole and Nyswander's reported research findings eliminate dropouts from their study groups, thus neatly "cleansing" research samples.

Thus, study data usually are reported only for subjects who have remained in treatment for variable periods and not for the entire cohort ever admitted to the program. Since it is usually those very subjects with more severe personal or social problems (including continuing drug and/or severe alcohol abuse, arrest and incarceration) who leave or are kicked out of treatment programs, study samples usually "shrink" and consist increasingly of clients who are able to comply with program rules by refraining from "unacceptable" behavior. Eventually, most study samples reported in the literature consist solely of "successful" clients with the "unsuccessful" ones having been terminated and eliminated from the study group.

This kind of rigged sample selection process works to overestimate the positive effects of most narcotic addiction treatment programs. Generalizations from such reports are totally unreliable and, by the way, are policy plagues. Additionally, there is the problem (technique?) of preferentially loading study populations with older, whiter or socioeconomically better-off clients. Dole and Nyswander did this consistently. Their study samples never did reflect the characteristics of addicts listed in the New York City Narcotics Registry. This slants research results and renders them inapplicable to the total population of heroin addicts who have had treatment.


Finally, almost every study of heroin addiction is open to all the dangers of generalizing from "caught" populations of addicts in treatment to the whole universe of heroin addicts, including untreated, unarrested ones. The result is that pictures presented in the scientific (and popular) literature of "average addicts" are not necessarily representative of all addicts, especially those addicted to physician-prescribed narcotics. Thus, most study groups are confined to analyses of addicts snagged for one reason or another in criminal justice and treatment "nets." Significantly, these addicts are usually socioeconomically disadvantaged. This is why their "wheels fall off" and they either "get busted" or have to enter treatment. Drug experts rarely see heroin and other opiate addicts who are unarrested or untreated like the Hollywood producer's wife, Century City attorney or top-notch musician.


Perhaps most important, there is a theoretical thinness or frugality to most outcome studies. True, in human behavior there are no general empirical laws or universal generalizations. But theories of addiction treatment are usually collections of vague and loosely formulated generalizations and speculative hypotheses dressed up to look like theory. As such, these "theories" are not reliable, lack much explanatory power and hold little predictability. In other words, conceptual schemata are pawned off as real theories, useful perhaps as heuristic devices, but of little value in explaining or predicting. The result is that "theories" of addiction and its treatment are identified with everything from typologies, analytical schemata, confirmed experimental hypotheses and speculative hypotheses to the more traditional, normative ethical injunctions like "heroin is bad."


What, then, is the overall quality of impact research in this field—research on which policy choices must be based? It varies, of course, but most studies have not been well presented. Their purposes, methods, results, and other data are not clear, cogent or easily assimilated. The majority of them are so poorly designed that they are virtually useless in terms of accurately determining the impact of various forms of treatment on client outcome.


What is the exact conceptual realm and logical class of the so-called "heroin problem"? The phrase itself is deceptively simple. What this country actually has is a multiplicity of heroin-related problems and a surfeit of policies and programs, often conflicting and self-defeating, to deal with them. When we talk about the "heroin problem," we are referring to an exceedingly complex and often confusing set of interrelated issues affecting society as a whole and individuals in lonely isolation. The "heroin problem" embraces everything from the personality characteristics and social-political-economic conditions that may underlie heroin dependence to the psychological and physiological effects of opiates themselves on different people. It includes both the economic costs and benefits produced by heroinism. It refers, perhaps most poignantly, to the price paid by addicts under a harsh prohibitionist policy: economic hardship, crime, jail, shattered family relationships, failing health and death.

Future research in the whole area of addiction and addiction control would do well to keep in mind the following network of people, activities, organizations and values, which together constitute a kind of "heroin addiction system." Though not a system in any servo-mechanical or biological sense of the term, it does demonstrate some patterned interrelationships and interdependencies. Each element within this heroin addiction system appears to serve the survival, adjustment or maintenance of the others:

1. The active heroin addict population in the U.S.
2. Heroin suppliers, including producers and distributors.
3. The history, goals and outcomes of international, national and subnational narcotic law enforcement policies, laws, rules, regulations and programs aimed at reducing the supply of heroin through surveillance, arrest, jail, prison, civil commitment, fines, probation, parole and/or court-ordered diversion.
4. The history, goals, and relative outcomes of national and subnational heroin addiction treatment, rehabilitation and prevention/education policies, laws, rules, regulations and programs aimed at reforming heroin addicts through abstinence or chemically supported treatment and at preventing addiction by educating and giving information to nonusers.
5. The attitudes, beliefs, opinions, values, norms, economic interests, research, training and backgrounds which affect the reciprocal relationships between addicts, suppliers, law enforcement and treatment/ rehabilitation/prevention organizations and personnel.
6. Reappraisal of ongoing heroin abuse control policies in light of programmatic consequences and recommended changes in narcotic law enforcement and treatment/rehabilitation/prevention policies, laws, rules, regulations and programs.

This whole system helps define and generate propositions regarding the exact conceptual realm and logical class of heroin addiction, government response to it and what difference government efforts make. Although each group throughout the multiple layers of this system sees the causes, consequences and control of addiction somewhat differently, the interrelated, interdependent threads of people, activities, beliefs, laws and organizations intersect at various points. This conceptual schema includes all identifiable structures (values, norms, roles and statuses) functioning to fulfill: (1) the needs of addicts, and (2) the needs of those whose formal mission it is to control addiction either through law enforcement or treatment. We would want to know, for example, what would happen to heroin suppliers without addicts. Or, if heroin faded from the scene, what would be the impact on police narcotic details and other elements of the criminal justice system which are geared to focus mainly on heroin? If heroin disappeared tomorrow, what would be the impact on addiction treatment programs and personnel?

