Could alcohol get a licence as a drug for depression? How do you test for the safety of a drug that causes the same side effects as the disease it is used to treat? These are just two of the points I didn’t have room for in my post last week on randomised controlled trials (RCTs) and why they don’t tell you what you want to know. (More on these points below.)
The post sparked quite a lot of Twitter interest, praise mixed in with less flattering comments. One tweeter called me "a quack and a crap journalist" (quickly withdrawn); another just commented "oh dear, oh dear" at the mention of my name. None of the critics addressed any of the issues, preferring to imply the problems were all known and fixable.
So I'm coming back to RCTs this week because I think their flaws need more serious attention. At first sight RCTs appear very straightforward and an obviously good thing: two groups, one gets the real thing, the other gets a pretend version (placebo), and the comparison tells you if the treatment is effective. On closer inspection, however, they turn out to be rather more slippery and open to all sorts of misleading manipulation.
By a useful coincidence, a particularly vivid example of the slipperiness of this so-called "gold standard" of evidence-based medicine arrived in my mailbox yesterday. Sales reps are supposed to accentuate the positive, but those promoting pharmaceutical drugs would make Candide look gloomy. Researchers filmed the sales pitches (how did they get permission?) made by reps to 255 doctors in America, Canada and France and then rated how accurately they represented the drugs' known side effects.
In how many of the interactions do you think the reps provided "minimally adequate safety information"? Precisely 1.7 per cent! And these weren't drugs usually described as "well tolerated": nearly half already carried formal warnings of serious side effects, yet these were mentioned in just 6 per cent of the interactions. That is bad enough; even more alarming for patient safety was that the doctors thought they were getting reliable advice. They rated the quality of the scientific evidence they were given as good or excellent in half of the presentations.
Supporters of the system, who assert fiercely that RCTs are the best way to distinguish between "real" medicine and the quack stuff, increasingly admit that, yes, there are shortcomings, that companies do hide unfavourable results and fiddle statistics (and presumably now, that pharma reps can be economical with the truth), but that all this is fixable. We just need to enforce the rules properly and punish offenders, like sorting out the banks, and then it will all work fine.
But the criticisms made by psychiatrist Dr David Healy and others that …
Read more: What is wrong with randomised trials Part 2