Go Figure

Statistics are a funny thing.  You can make statistics do just about anything you want them to do.  For instance, if I give 100 people a dollar bill and ask each of them to hold it for me for the next week, I might find the following in a week’s time: 


  • 16 people will have lost it;
  • 29 will give me back a dollar bill, but not the same one I gave them;
  • 20 will return the original bill;
  • 3 will ask for change;
  • 12 will have spent it and will claim they misunderstood me; and
  • I will never be able to locate the other 20. 

From that information, some people will conclude that 77% of people can’t be trusted (everyone except the 20 who returned the original bill and the 3 who asked for change).

I can see how someone might arrive at that conclusion.  But was I looking for that outcome?  Perhaps the conclusion sought was simply to determine what people did with money when it was given to them, and nothing more.  Lately, I have noticed that a statistical conclusion often has very little to do with the facts that were accumulated.  For instance, I read about a program in which offenders who participated in a particular substance abuse program had a ten percent recidivism rate.  That sounds impressive until you begin to ask questions.  I discovered that the results were based on whether the participants were re-arrested within a six-month period – by the same law enforcement agency.

I would hope that when participants in a substance abuse program are monitored, the measure is whether they have remained straight and sober.  There are too many variables to draw recidivism rates from a substance abuse program alone.  And six months is hardly a standard period on which to build a baseline.

Drug Policy Alliance has released a report, Drug Courts Are Not the Answer: Toward a Health-Centered Approach to Drug Use*.  The Report focuses on the effectiveness, or rather the ineffectiveness, of drug courts.  However, the Report also contains a small segment that addresses the methodology of accumulating and interpreting data.  I can’t say it any better than quoting from the Report:

As one researcher testified at a congressional hearing in 2010, “Over half of the criminal justice programs designated as ‘evidence-based’ programs in the National Registry of Evidence-Based Programs include the program developer as evaluator.  The consequence is that we continue to spend large sums of money on ineffective programs (programs that do no good, and in certain circumstances actually do harm).” 

It’s troubling to realize that the developer of a program is also its evaluator.  I’m not a statistician, but it seems to me that there needs to be more independent oversight and analysis of programs before a practice can be called ‘evidence-based’.

By the way, my conclusion to the example above is that I’ve probably made a poor investment and lost $48.  Now, that would be evidence-based.  Don’t you think?
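For what it’s worth, the arithmetic behind both figures above can be checked in a few lines of Python.  This is just a sketch; the category labels, and the decision to count everyone except the original-bill returners and the change-askers as “untrustworthy,” are my own reading of the list:

```python
# A quick check of the two figures in the post: the "77% can't be
# trusted" reading and the $48 loss.  The category labels are mine.
outcomes = {
    "lost it": 16,
    "returned a different bill": 29,
    "returned the original bill": 20,
    "asked for change": 3,
    "spent it": 12,
    "never located": 20,
}

# All 100 people are accounted for.
assert sum(outcomes.values()) == 100

# The cynical reading: everyone except those who returned the original
# bill or asked for change "can't be trusted."
untrustworthy = 100 - outcomes["returned the original bill"] - outcomes["asked for change"]

# The actual loss: bills that never came back in any form.
lost = outcomes["lost it"] + outcomes["spent it"] + outcomes["never located"]

print(untrustworthy, lost)  # 77 48
```

Same data, two very different headlines, depending on which categories you choose to add up.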

*You may obtain a printed copy of the Report by contacting the Drug Policy Alliance and requesting a copy.  Tell them Marty and Stephanie sent you.  However, clicking the link above will get it to you quicker.

Ethan Nadelmann
Executive Director
Drug Policy Alliance
70 West 36th Street, 16th Floor
New York, NY 10018


This entry was posted in Criminal Justice, Issues, Substance Abuse and Alcoholism/War on Drugs. Bookmark the permalink.

1 Response to Go Figure

  1. Jorgen Rasmussen says:

    I’m inclined to say what the gun lovers always say: people, not guns, kill people.  Similarly, it ain’t the statistics, it’s the people who use them.  You always have to look at the details to know whether the people citing them have an axe to grind or are honest researchers.  To damn all statistics is to say that you want to rely on comments like “I guess that…” or “it’s likely that…” or “it stands to reason that…”  It’s not just in baseball that statistics matter.
