We all know - or at least I hope we do - that we should not believe something just because it appears in print. The Internet has been described as "The Wild West" because it has essentially no rules. It is not a source of information. Rather, it is many sources of information, some excellent, some quite useless, most in between. The Web site snopes.com exists to help us separate Internet fact from fiction.
The more discerning among us know we should apply an ample measure of skepticism to more traditional sources of information, too. The fact that something appears in The New York Times or The Washington Post does not mean it can be taken as truth. Those publications do have rules about what gets into print, and editorial oversight, but they are far from infallible.
This is especially true when news outlets and other publications for the general reader - the "popular press" - run articles on specialized subjects. Ask any physician about the accuracy of reporting on topics in medicine, and you will get an earful.
But what of the publications doctors read for the latest scientific evidence: medical journals? There are four major English-language journals that publish studies of interest to a broad range of physicians, across specialties: the Journal of the American Medical Association (JAMA), the New England Journal of Medicine, The Lancet, and the British Medical Journal (now officially called simply BMJ). Are these prestigious publications rock-solid sources of reliable information? In a word: no.
Many readers are now aware of the debacle involving The Lancet: a 1998 study linking childhood vaccines to autism was retracted last year. The lead author of that study, Dr. Andrew Wakefield, was guilty of grossly unethical conduct as a medical researcher. Wakefield's appalling actions are far beyond the scope of this entry, but I encourage you to look them up and read about them.
I like to think studies published in some journals are well worth reading. The foremost journal in my specialty, Annals of Emergency Medicine, is an excellent example. But of course I would see it that way. Annals is the official journal of the American College of Emergency Physicians (ACEP), and I have two relevant (but unrelated to each other) sources of bias. I review manuscripts for Annals, and I serve on the Board of Directors of ACEP. The fact that Annals was ranked one of the 100 most influential scientific journals of the last 100 years by the Special Libraries Association means it is noteworthy, not that it is perfect.
Doctors read medical journals that are "peer reviewed." (For Annals, and for another major emergency medicine journal, I am one of many peer reviewers.) That means manuscripts submitted for publication are assigned to an editor, who then seeks review and comment from members of a large panel of experts. Each manuscript is typically reviewed by 3-5 such experts. Most manuscripts are reviewed and rejected. Some are returned to the authors with extensive recommendations for revision, following which they may be resubmitted for further consideration. Uncommonly, manuscripts are accepted "as is" or with minor revisions.
Doesn't this process ensure that any article appearing in a peer-reviewed journal offers reliable information that doctors can use to keep their medical practice up to date? The peer-review process is valuable, but its "stamp of approval" does not mean the journal's subscribers can read an article, note the authors' conclusions about what their study proved, and take those conclusions to the bank.
So how do doctors learn to read medical journals and separate the good from the not-so-good? Journal Club.
During residency, those years of specialty training after medical school, doctors have sessions, commonly monthly, during which articles from medical journals are reviewed and discussed. At Journal Club, as these sessions are known, they learn from their faculty how to "dissect" a study and tell whether its findings are believable or important. What question(s) did the investigators set out to answer? Was their study well designed to answer those questions? Were their methods sound? How about their statistical analysis of the results? Is the article's "discussion" section a thoughtful and logical consideration of how the results should be interpreted? Do the authors identify and describe any limitations that might affect the reliability of their study or the broad applicability of its findings? Do they say they have discovered scientific truth, or do they admit further study is needed? Do their data ultimately support their conclusions?
I have a confession to make: I live for Journal Club. There is nothing in life - or at least in medicine - that gets my intellectual juices flowing like teaching residents how to take an article published in a peer-reviewed journal and discover that the peer reviewers were asleep at the switch, that the journal's editors should be ashamed of themselves for having let this one make it into print, and that their own conclusion upon finishing their reading is that they have found a good birdcage liner - although, admittedly, the lead author's mother surely thought it was a fine study. Occasionally, of course, we review a paper that is good science with robust conclusions that should influence the way we practice medicine. It is important to be able to identify those studies, too.
But the good studies are in the minority, and the essential value of Journal Club is the opportunity to learn what not to believe - and why.
Residents, I hope you are paying attention. Patients, if you didn't already know your doctor was busy during the hours not spent directly seeing patients, add critical reading of the medical literature to the seemingly endless list of tasks that fill those hours.
Excellent discussion, as always. I am about to finish my BS, and it's amazing how many fellow students I observed using sources that were not credible, let alone peer reviewed. I'm sure the residents you will begin teaching in your new job will benefit from your insights as well as Journal Club. What types of articles do you pick in particular? New procedures and treatments, or articles on refining already established medical science? Probably a combo of both. I always look at new EMS journal entries with a hint of skepticism... Maybe it's just my resistance to change, but every time the AHA shakes up my protocols, I can't help but want to look at the reasoning behind it.