Wednesday, January 9, 2013

Depressed? Try Life Unsweetened

People who should know better insist on trying to discover links between diseases and lifestyle choices by doing studies that cannot tell us whether there is a cause-and-effect relationship.

Obviously I wish they would stop.  But they won't.  They keep doing these studies, and presenting them at medical meetings, and publishing them in medical journals.  And, not content to regale their colleagues with this nonsense, they look for a wider audience, an audience full of people who may not know enough about epidemiology or biostatistics to say, "Rubbish!"

Sometimes they don't even wait for their colleagues to say, "Rubbish!" before seeking that unsuspecting wider audience.  And so yesterday there came a press release from the American Academy of Neurology (AAN).  The message: sweetened beverages are linked to depression, and artificially sweetened beverages are even worse than the ones with sugar.

You know I'm going to tell you why this is rubbish, but instead of going right to that, let's approach this as if we were scientists interested in whether it might not be rubbish.  We would take this notion and turn it into a hypothesis to be tested using something that would actually qualify as science, in stark contrast with the subject of the AAN press release.

We would start by recruiting a large group of people.   Ten thousand would probably be enough, but there are statistical calculations you can do to figure out how many you'd need. The recruits would have to be willing to be randomized.  That means accepting assignment, generated at random by computer, to one of two groups.  The first group would be told they should drink artificially sweetened beverages as much as they want to, and they would be asked to keep a diary of their beverage-consuming behavior.  The second group would be told to keep a diary, too, but that they should really try to avoid drinking artificially sweetened beverages.
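(For the curious, here is what such a back-of-the-envelope calculation looks like in Python, using scipy.  Every number in it - the assumed depression rates, the significance level, the power - is an assumption I invented purely for illustration.)

```python
# Back-of-the-envelope sample size for comparing two proportions.
# Every number here is an illustrative assumption, not data.
from scipy.stats import norm

p1, p2 = 0.08, 0.10      # assumed 10-year depression rates in the two groups
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided test
z_beta = norm.ppf(power)            # ~0.84 for 80% power

# Classic formula for two independent proportions, equal group sizes:
n_per_group = ((z_alpha + z_beta) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

print(f"about {round(n_per_group):,} subjects per group")
```

With those made-up rates it works out to roughly 3,200 subjects per group, about 6,400 in all, so my guess of ten thousand is comfortably generous.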

Then, periodically, the subjects would be asked to come in to have their diaries reviewed and to be asked a bunch of questions.  They wouldn't know what exactly the study was about.  The questions would be sufficiently wide-ranging that they couldn't figure it out.  The interviewers wouldn't know which group the subjects were in or what information was revealed by their diaries.  They also wouldn't know the purpose of the study.

The questionnaire would, of course, include numerous questions of the sort used to screen people for depression, and the subjects would be asked if they were being treated for any physical or mental health problems.

After we'd followed the subjects for a decade or so, we could see what differences, if any, emerged between the two groups.  The information in the diaries could tell us how well the subjects did what they were asked to do (with the caveat of "self-reporting bias"), and it could enable us to exclude from a portion of the analysis, if we were so inclined, subjects who really didn't do what they were asked to do (what trialists call a per-protocol analysis, run alongside the primary intention-to-treat analysis).

This is what medical scientists call a prospective, randomized, double-blind, controlled trial.  This is toward the top of the hierarchy of what we call evidence-based medicine (EBM). (If you look at the pyramid, you'll see the only thing higher is the systematic review, in which smart people look systematically at the medical literature for all the studies that might shed light on a specific question, evaluate the studies, and tell us what the accumulation of scientific evidence reveals.)

This would not be an easy study to do.  To begin with, you can imagine the challenge of recruiting a large number of subjects willing to accept random assignment to one group or another with implications for a specific lifestyle choice that would go on for years.

So, not surprisingly, the investigators whose findings were reported in the press release from the AAN did not do that study.  Let me rephrase that, for emphasis: they did not do the only kind of study that could answer the question, rather than merely generating a hypothesis.

No, instead, they did an observational study.  They took a large number of people - more than a quarter of a million - and collected information about their beverage-consuming habits for one year, and then ten years later asked them if they'd been diagnosed with depression.

The first problem is obvious.  We don't know anything about their habits outside of that one year.  We also don't know if they were really depressed, only whether someone had made that diagnosis.  There may have been many people diagnosed with depression who didn't really meet accepted diagnostic criteria. Even more likely is that there were quite a few people who were depressed but hadn't been diagnosed.

The biggest problem, however, is that we have no way of knowing what other factors - and there could be many - might be what scientists call confounders: other influences on the likelihood of developing depression that might themselves be linked with the consumption of sweet beverages (whether naturally or artificially sweet).

The most obvious potential confounder is obesity.  It is so obvious that one would hope the researchers looked for differences in the prevalence of obesity and tried to adjust for them.  Suppose people are more likely to be depressed because in our society it is sad to be fat, and fat people are more likely to drink sugar-sweetened beverages (which help to make or keep them fat) or artificially sweetened ones (because they are trying to lose weight).  Then more of the people diagnosed with depression will also be fat, and we can "correct" for that fact and, through statistical manipulation, try to eliminate the influence of the confounder.
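If you're wondering what that statistical manipulation actually looks like, here is a toy sketch in Python (using statsmodels), with everything in it invented: obesity drives both diet-soda drinking and depression, diet soda itself does absolutely nothing, and a logistic regression that adjusts for obesity recovers the truth while the crude comparison does not.

```python
# Toy simulation: obesity drives BOTH diet-soda drinking and depression,
# while diet soda itself has no effect at all.  Entirely made-up numbers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200_000
obese = rng.random(n) < 0.30
diet_soda = rng.random(n) < np.where(obese, 0.60, 0.20)   # fat -> more diet soda
depressed = rng.random(n) < np.where(obese, 0.10, 0.05)   # fat -> more depression

df = pd.DataFrame({"obese": obese.astype(int),
                   "diet_soda": diet_soda.astype(int),
                   "depressed": depressed.astype(int)})

crude = smf.logit("depressed ~ diet_soda", data=df).fit(disp=0)
adjusted = smf.logit("depressed ~ diet_soda + obese", data=df).fit(disp=0)

print("crude odds ratio:   ", round(np.exp(crude.params["diet_soda"]), 2))
print("adjusted odds ratio:", round(np.exp(adjusted.params["diet_soda"]), 2))
# The crude OR is inflated; adjusting for obesity pulls it back toward 1.0.
```

With these made-up numbers the crude odds ratio comes out around 1.36 and the adjusted one around 1.0.  Notice that this works only because I, the simulator, knew the confounder was there - which brings us to the trouble with confounders.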

The trouble with confounders is twofold.  First, statistical manipulation to correct for them is imperfect.  Second, and more important: you can try to correct for confounders only if you identify them, and there may always be confounders you have not identified.

The beauty of a randomized controlled trial is that, with a large enough sample size, randomization is very effective at eliminating differences between the two groups (other than the one difference you wish to study) - and that includes differences due to confounders nobody has identified or even imagined, because whatever imbalance remains is just chance, and chance imbalances shrink as the sample grows.
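Don't take my word for it.  Here is a toy simulation (the "hidden trait" is an invented stand-in for any unidentified confounder) showing the chance imbalance between randomized groups melting away as the sample grows.

```python
# Sketch: randomization balances even a confounder we never measured.
import numpy as np

rng = np.random.default_rng(0)
for n in (100, 10_000, 1_000_000):
    hidden_trait = rng.random(n)      # unmeasured trait, ranges 0..1
    group = rng.integers(0, 2, n)     # coin-flip assignment to two groups
    gap = abs(hidden_trait[group == 0].mean()
              - hidden_trait[group == 1].mean())
    print(f"n={n:>9,}: between-group difference in hidden trait = {gap:.4f}")
# As n grows, the chance imbalance shrinks toward zero -- no adjustment needed.
```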

So the sweet-beverage drinkers were a bit more likely, ten years later, to say they'd been diagnosed with depression.  And the ones who drank artificially sweetened beverages were a tad more likely than the sugary-drink consumers to say so.

There are questions that might not occur to you but that do occur to me, because I practice in an academic medical center and review manuscripts submitted for possible publication in medical journals.  First, why would anyone do a study that cannot answer any important question?  Trying to put a positive spin on this, I'd say sometimes finding correlations or associations is interesting even if we know nothing about causality, and sometimes such findings can generate hypotheses worthy of being tested in a study that can get at the question of causality.

Second, why would a medical journal publish such a study?  I don't have a good answer for that one, because the results are not, I think, important enough to justify publication.  Third, although it is common for research findings to be presented at medical meetings when they haven't been (and may never be) accepted for publication, why would a medical association issue a press release drawing attention to such findings?

[Photo: My favorite beverage.  I am an entirely uncompensated spokesperson.]
Sadly, the answer is all too simple.  Results like this will capture the attention of the popular press and the legions of journalists who have no idea how to put them through the filter of intelligent skepticism.  They will then post and print articles about the findings, generating PR for the medical group.  And the unsuspecting public will think maybe they should stop drinking Pepsi One or Coke Zero.


[Image: This blog has never won any awards.  I expect that to remain true.]



But not you.  Because you read this blog.  And you will say, "Rubbish!"

4 comments:

  1. Oops! Sorry! I forgot that some readers might not understand American idiomatic expressions or abbreviations. Public relations. In other words, issuing the press release draws attention to the American Academy of Neurology. Because the meeting at which this research is to be presented will be taking place in March in San Diego, such an intriguing press release might even get more journalists to attend the conference to report on the scientific research. After all, any journalist from the northern U.S. would consider covering a meeting in San Diego in March to be a plum assignment.

  2. Thanks.
And placing systematic reviews above RCTs (randomized controlled trials) - I too can write American! :-) - is very questionable, as I understand you might somewhat agree!
    Let alone the meta-analyses...

  3. Ah, Alberto, the placement of the systematic review at the top of the hierarchy of EBM depends on the systematic review being done properly, which means the quality of the studies reviewed must be evaluated rigorously and weighted accordingly. This is essential to avoiding the "garbage in, garbage out" phenomenon, which plagues many meta-analyses.
