Chasing other links I stumbled across this:
As a proportion of the drug response, the placebo response was constant across different types of [antidepressant] medication (75%), and the correlation between placebo effect and drug effect was .90. [...] These data raise the possibility that the apparent drug effect (25% of the drug response) is actually an active placebo effect.

In English: it's possible that all reported effects of antidepressant medication are placebo effects. It's quite likely that 75% of reported effects are placebo.
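To make the quoted percentages concrete, here's a minimal sketch of the arithmetic. The improvement scores below are invented for illustration; they are not Kirsch's actual data:

```python
# Hypothetical mean improvement scores on some depression rating scale.
# These numbers are made up purely to illustrate the proportion arithmetic.
drug_response = 10.0     # mean improvement in the drug arm
placebo_response = 7.5   # mean improvement in the placebo arm

# Placebo response as a proportion of the drug response (the "75%" figure).
placebo_proportion = placebo_response / drug_response

# The "apparent drug effect" is whatever remains after the placebo share.
apparent_drug_effect = 1 - placebo_proportion

print(f"placebo proportion: {placebo_proportion:.0%}")      # 75%
print(f"apparent drug effect: {apparent_drug_effect:.0%}")  # 25%
```

The point of the meta-analytic claim is only this ratio: whatever the absolute sizes, roughly three quarters of the improvement seen in the drug arm also shows up in the placebo arm.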
More recently, these authors replicated the work using studies from the FDA database. FDA studies are done as part of drug approval processes. They're generally not published in refereed literature and there have been accusations that they're slanted in favor of the drugs. That is, the studies are done in order to speed drugs through the approval process and may overreport positive effects and underreport negatives. So in theory you'd see a larger effect for ADs if you analyzed these studies. In fact:
Kirsch et al. reported an 18% difference between drug and placebo.

The debate is actually fairly nuanced. There's been some good work recently looking (via MRI) at the neurochemical mechanisms triggered by placebos. There is some evidence that the difference between AD and placebo increases with the severity of depression, though that might be due to differences between active and inert placebos: people with serious depression are often familiar with the side effects of ADs and can recognize, when given a sugar pill, that it doesn't produce the expected side effects.
Nobody's claiming that ADs don't work. The claim is that they do work... and sugar pills work, too.
Title: "Listening to Prozac but hearing placebo: A meta-analysis of antidepressant medication."
Authors: Kirsch, Irving; Sapirstein, Guy
Publication: Prevention & Treatment, 1(1), Jun 1998
<http://content.apa.org/journals/pre/1/1/2>

Title: "The emperor's new drugs: An analysis of antidepressant medication data submitted to the U.S. Food and Drug Administration."
Authors: Kirsch, I.; Moore, T. J.; Scoboria, A.; Nicholls, S. S.
Publication: Prevention & Treatment, 5, art. 23
<http://journals.apa.org/prevention/volume5/pre0050023a.html>
no subject
Date: 2007-04-27 05:31 pm (UTC)

The study periods I saw were six weeks and six months. I believe that in under six weeks you may not get the full effect of an AD. Can't say why the six-month period was picked.
"what could be tested for in a trial of this type."
That was one of the reasons for replicating the work with the FDA database. If you wish to say (as the Feds do) that the FDA study criteria are sufficient to establish the safety and effectiveness of a drug that is to be prescribed to patients, then it's very strange to turn around and say "well, those studies are no good."
That said, you're not the first one to say this. Certainly there's a body of objection to Kirsch that says "if you measure in a properly constructed study, you do see significant differences." Fair enough, but then the onus is on the claimant both to state what a proper study would be and to explain why a study that is sufficient to get a drug approved and onto the market is somehow not sufficient to distinguish its effect from a placebo effect.
"actual lying to some and telling them they are getting a placebo when they are not would be extremely valuable to see if the efficacy is impacted."
There are two problems with that. One, as I mentioned earlier, is that you'd have a very hard time getting approval for a study protocol that bald-facedly lies to people. You can mislead or conceal, but outright lying is pretty much right out because it destroys informed consent. Two, it messes with the notion of double-blind. The point is that not only does the patient not know which pill they're getting, the administering person doesn't know either. That removes effect and observational biases. (Or at least reduces them; again, see the discussion of active vs. inert placebos.)