Or so said a story run by Reuters Health, a respected news service, on Monday (May 9, 2011). Reuters was reporting on a study published in the Journal of Clinical Oncology. To be fair, the Reuters article did explain that the association found by epidemiologists didn't exactly prove cause and effect, but that inconvenient fact didn't stand in the way of the headline writer: "Acetaminophen Tied to Blood Cancers."
If you think you can endure an explanation of the difference between an association and a cause-and-effect relationship, bear with me. If not, there is surely a basketball or hockey playoff game you could watch.
Here's what epidemiologists do. They take a big group of people and ask them about their habits - in this case, the relevant habit is their use of pain relievers. Then they divide them into subsets according to what they report about that habit. So, one subset uses acetaminophen more than a certain amount, the other less. They then compare the two subsets to see whether they differ in the proportion afflicted with certain diseases. They try to match the subsets to ensure that they are similar in other characteristics that might affect their likelihood of getting these diseases. To the extent that the subsets are not perfectly matched, they perform statistical manipulations to adjust for those differences.
It is important to understand that epidemiologists are truly expert in collecting and analyzing this sort of information and in making the statistical adjustments that enable them to discern potentially meaningful associations. So are there pitfalls that even their expertise cannot eliminate? By now you can predict the answer.
First, when they try to match the groups being compared, they have to think of all potential "confounding variables." Now, there's a problem. Since they don't really know what all the factors are that might influence a person's risk of developing a "blood cancer," they cannot properly match the two groups, can they? Nor can they adjust for whatever differences there may be between the two subsets in variables they haven't even identified. And, of course, all this assumes that what people report about the habit researchers are scrutinizing is accurate.
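To see how an unmeasured confounder can manufacture an association all by itself, here is a minimal simulation. Everything in it is made up for illustration: the hidden "chronic illness," its prevalence, and all the rates are invented numbers, and the drug has no causal effect on the disease at all. Yet the drug's users still show a higher disease rate, simply because the same hidden factor drives both drug use and disease.

```python
# A minimal sketch (invented numbers, not real data): a hidden confounder
# creates an association between a drug and a disease even though the drug
# has no causal effect whatsoever.
import random

random.seed(42)

n = 100_000
exposed_cases = exposed_total = 0
unexposed_cases = unexposed_total = 0

for _ in range(n):
    # Hidden confounder: a chronic condition the researchers never measured.
    chronic_illness = random.random() < 0.2
    # The confounder drives pain-reliever use...
    uses_drug = random.random() < (0.7 if chronic_illness else 0.3)
    # ...and it also drives disease risk; note the drug plays no role here.
    disease = random.random() < (0.02 if chronic_illness else 0.005)
    if uses_drug:
        exposed_total += 1
        exposed_cases += disease
    else:
        unexposed_total += 1
        unexposed_cases += disease

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
print(f"risk in users:     {risk_exposed:.4f}")
print(f"risk in non-users: {risk_unexposed:.4f}")
print(f"relative risk:     {risk_exposed / risk_unexposed:.2f}")
```

Run it and the relative risk comes out well above 1 - a headline-ready "association" produced entirely by a variable nobody measured, which is exactly why matching and adjustment can only handle the confounders researchers have thought of.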
If we really wanted to know whether acetaminophen has an important effect on your risk of getting a disease, we would recruit a very large group of people - say ten thousand or so - who would agree to be randomized to one of two groups. One group would be instructed to take acetaminophen whenever they wanted, for whatever they wanted, and keep a record of their use of that medicine. The other group would be instructed never to use acetaminophen at all. We would hope that everyone in the two groups would follow their instructions and that they would stay in touch with us so we could keep tabs on their state of health and see if they developed diseases at different rates over the next ten or twenty years. Even if you've never given a thought to such medical research, it is surely obvious to you that doing such a study would take a lot of work - and a very long time.
But that's what it takes to demonstrate an association that, in all probability, truly represents a cause-and-effect relationship.
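The randomized design described above can be sketched with the same invented population. The only change is that drug use is now assigned by coin flip instead of chosen by the subjects, so the hidden chronic illness ends up balanced between the two groups and the spurious association disappears.

```python
# Same made-up population as before, but exposure is now assigned at random.
# Randomization balances the hidden confounder across the two arms, so the
# (nonexistent) drug effect is estimated correctly: relative risk near 1.
import random

random.seed(42)

n = 100_000
counts = {True: [0, 0], False: [0, 0]}  # assigned_drug -> [cases, total]

for _ in range(n):
    chronic_illness = random.random() < 0.2
    assigned_drug = random.random() < 0.5  # the randomization step
    # Disease risk depends only on the confounder, never on the drug.
    disease = random.random() < (0.02 if chronic_illness else 0.005)
    counts[assigned_drug][0] += disease
    counts[assigned_drug][1] += 1

risk_drug = counts[True][0] / counts[True][1]
risk_no_drug = counts[False][0] / counts[False][1]
print(f"relative risk under randomization: {risk_drug / risk_no_drug:.2f}")
```

The coin flip does in one line what no amount of matching and statistical adjustment can guarantee: it balances the confounders you never identified, not just the ones you did.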
We could look at a population of subjects and ask them about whether they ride in pick-up trucks and whether they watch rodeo events on ESPN2 or Versus. I'm guessing there would be an association. And I think you know that doesn't prove riding in pick-up trucks causes people to watch rodeo events on TV.
I hope the people who write for Reuters Health know this, too. I really think they do. But why let that stand in the way of an eye-catching headline?