The answer can be straightforward. It may be quite obvious that it's really not a heart attack but indigestion. Some acid produced by the stomach has escaped upward into the esophagus. The lining of the esophagus doesn't have the stomach's natural protection against that acid, and inflammation, with a burning pain, ensues. Reflux esophagitis. No, it's not your heart. Pain from a diseased gallbladder can also mimic a heart problem. And we can often tell the difference fairly easily. Sometimes pain in the ribcage is worrisome to the patient yet readily distinguished from cardiac pain by the astute clinician.
It would make my job so much easier if the answer were always obvious. When it really is the heart, it can be quite clear. Anyone who has spent years as a paramedic, or as a nurse or doctor caring for emergency patients, can often tell when first laying eyes on such a patient. Uh-oh. Some heart attack patients just have that look. And then the story, and the findings on physical examination, and the appearance of the electrocardiogram (ECG) just confirm the initial impression.
Sometimes the story and the examination are not clear-cut, but the test results give us a definite answer.
Yet patients with symptoms suspected of being of cardiac origin are a constant challenge because the initial evaluation in the ED - history, examination, ECG, lab tests - still leaves us uncertain about whether this is really a heart problem. And the patient will require further evaluation. Uncertainty is not a good thing when what we're uncertain about could be life-threatening. People expect - and it seems to me a very reasonable expectation - a high level of diagnostic certainty when there is concern about something that could kill them.
Just what level of certainty should we be going for? I have written before about the quest for diagnostic certainty, and about how we - and by "we," I mean both doctors and patients - should be asking ourselves whether it makes sense to devote a lot of time, effort, and money to additional testing to raise the level from 93% to 97%. But remember, if it's 93%, that means being wrong roughly one time out of fourteen. I'm OK with being wrong one time out of fourteen for some things. For example, if I think you have strep throat and want you to take penicillin, and I think it's unnecessary to waste time and money on a confirmatory test, what happens if I'm wrong? The penicillin is very unlikely to cause any harm. Serious reactions are quite rare, and I don't worry much about increasing bacterial resistance to it, because most bacteria are already resistant to it. The strep that causes throat infections just happens to be an especially dumb bug, so penicillin still works.
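If the one-out-of-fourteen figure seems to come out of nowhere, it's just the error rate flipped over. Here's the arithmetic in a few lines of Python - nothing clinical, just percentages:

```python
# How often is a given level of diagnostic certainty wrong,
# expressed as "one time out of N"?
for certainty in (0.93, 0.97):
    error_rate = 1 - certainty
    print(f"{certainty:.0%} certain -> wrong {error_rate:.0%} of the time, "
          f"about 1 time in {round(1 / error_rate)}")

# 93% certain -> wrong 7% of the time, about 1 time in 14
# 97% certain -> wrong 3% of the time, about 1 time in 33
```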
But I don't think it's OK to be wrong one time out of fourteen about whether you have something that could cause severe, permanent disability or sudden death. Neither do most of my colleagues or most of my patients. I guess some people are OK with that level of risk, but then some people jump out of airplanes, too. Yes, I realize that the frequency of the parachutes not opening is far less than one out of fourteen, but when it happens, the results are ... well, you know.
So we tend to be very cautious with ED chest pain patients. We have all sorts of "tools" at our disposal to try to figure out what's wrong with them, and in particular whether it's a heart problem. This is an area of intense clinical research: what is the best strategy for assuring that, if we send somebody home from the ED, the likelihood of something bad happening is vanishingly small? (Or at least down in the 1% range, because no strategy that involves human beings is going to be right 100% of the time.) We have a standardized name for the "something bad happening," too: MACE, which stands for Major Adverse Cardiovascular Events. Our medical journals have many published papers on strategies for evaluating ED chest pain patients to assure that the likelihood of MACE in the next 30 days (the most commonly used time period) is as low as possible. Oh, and just in case you are worried about day 31, which I am, the better studies keep track of the patients out to a year.
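To make "strategy" a little more concrete: one real and widely studied example from this literature is the HEART score, which assigns 0 to 2 points each for History, ECG, Age, Risk factors, and Troponin, then maps the total to a 30-day MACE risk category. The sketch below is my simplified illustration of how the tallying works - not a clinical tool, and not necessarily the strategy any particular paper endorses:

```python
# Simplified sketch of the HEART score for ED chest pain.
# History, ECG, and troponin points (0-2 each) require clinical
# judgment and lab criteria; they're taken here as already-scored inputs.

def heart_score(history_pts, ecg_pts, age_years, num_risk_factors, troponin_pts):
    age_pts = 0 if age_years < 45 else (1 if age_years < 65 else 2)
    rf_pts = 0 if num_risk_factors == 0 else (1 if num_risk_factors <= 2 else 2)
    return history_pts + ecg_pts + age_pts + rf_pts + troponin_pts

total = heart_score(history_pts=1, ecg_pts=0, age_years=52,
                    num_risk_factors=1, troponin_pts=0)
category = "low" if total <= 3 else ("moderate" if total <= 6 else "high")
print(total, category)  # 3 low
```

In the published validation studies, the low-risk group had 30-day MACE rates on the order of 1-2% - which is exactly the "1% range" I mentioned above, and why scores like this keep showing up in the journals I read.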
You probably won't be surprised to learn that most of the patients we worry about turn out not to have heart disease as the cause of their symptoms. That's what happens when you try to make sure you aren't missing anything. The more diligently you pursue the goal of catching every potentially life-threatening problem (thus avoiding the "false negative" result of your evaluation), the more false positives you are going to get: we were worried about your heart, but all of the tests are normal, so we're now as certain as we can be that it's not your heart. The perfect strategy for evaluating anything would have no false negatives and no false positives. Unfortunately, that doesn't exist.
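A toy calculation shows why that tradeoff is unavoidable. The numbers below are invented for illustration, but the shape is real: when most chest pain isn't cardiac, a strategy tuned to miss almost nothing will flag far more healthy hearts than diseased ones:

```python
# Invented numbers: 1,000 ED chest-pain patients, 100 with a true
# cardiac cause (10% prevalence).
patients, truly_cardiac = 1000, 100
sensitivity = 0.99   # tuned to almost never miss a cardiac case
specificity = 0.70   # the price: many non-cardiac patients screen "positive"

true_pos  = truly_cardiac * sensitivity                      # 99 caught
false_neg = truly_cardiac - true_pos                         # 1 missed
false_pos = (patients - truly_cardiac) * (1 - specificity)   # 270 extra workups

print(f"missed cardiac cases: {false_neg:.0f}")
print(f"false alarms: {false_pos:.0f} "
      f"({false_pos / (false_pos + true_pos):.0%} of all positives)")
```

Push the sensitivity closer to 100% and the false alarms - and the costs - climb further. That's the structural reason a "low-yield" workup isn't, by itself, evidence of a bad strategy.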
You probably also won't be surprised to find out that there is pushback. All of this evaluation, and especially some of the more sophisticated testing, costs money. If you are a legislator, a regulator, a bean counter, or for any other reason interested in controlling health care expenditures, you are going to take a hard look at a part of the health care system that devotes significant resources to a specific patient population and seems to have a "low yield" relative to dollars spent. An editorial in the New England Journal of Medicine last year said this:
The underlying assumption ... is that some [more definitive] diagnostic test must be performed before discharging these low-to-intermediate-risk patients from the emergency department. This assumption is unproven and probably unwarranted.

Now, I'm as much of an enthusiast as anyone I know for the cost-efficient practice of medicine. And so I'm torn between two impulses here: the first impulse is to agree with an editorialist who clearly shares my bias in favor of a cost-efficient approach. The second impulse is the one that always seizes me, making me rub my hands with glee, any time I find that one of the world's most prestigious journals has published something stupid. Remember, I'm a critic. And this statement is most assuredly stupid.
Guess who has the job of "stratifying" risk in this patient population? Yes, 'tis I, your faithful blogger. I'm the one who has to separate the high-risk patients from the rest and then decide what to do with the low-risk and the intermediate-risk patients. And so I read the journals, always looking for the best strategy. There are certainly some low-risk patients who can go home after a simple, brief, and relatively inexpensive evaluation. Everyone else needs more. And it's my responsibility to identify the cut-point. Sometimes it's not straightforward. I have a conversation with the patient about my risk analysis. Some patients are worriers and very nervous even about fairly low-risk situations. They are afraid to go home from the hospital without a higher level of diagnostic certainty. And I respect that. Other patients are skydivers. They want to stay in the hospital as much as I want to go to ... oh, pick something, because whatever I pick (say, Wagnerian opera), I'll offend somebody who likes it. "Doc, your best guess is that it's probably not my heart? That's good enough for me. I'll call my doctor Monday and then she and I will decide about further testing."
Recently one of the emergency medicine residents (doctors in training) I supervise had a patient with chest pain. The initial results came back, and we agreed the patient should stay in the hospital overnight for further evaluation. The resident called the hospitalist - an internal medicine specialist whose practice consists of taking care of hospitalized medical patients - and found that the hospitalist thought the patient should be discharged. The hospitalist did come to the ED, saw the patient, and arranged for him to stay the night and have further testing. But he also gave the EM resident a copy of a paper published in a major journal on this very subject, to support his original opinion.
Last week we had Journal Club, that monthly gathering at which we discuss important papers from medical journals that might have implications for the practice of our specialty. This particular resident and I quickly agreed that we should include this paper. By now you can tell that, after my first quick read, I was rubbing my hands with glee. It was the worst paper I'd ever read on the subject.
Let me reiterate: I am an enthusiastic proponent of the cost-efficient practice of medicine. The percentage of our Gross Domestic Product that we spend on health care is now in the high teens. At the beginning of my career it was in the low teens. The upward slope cannot go on indefinitely. We have to find ways of getting better health outcomes for less money. And I believe we can do that. But we have to be smart about it. I volunteer for the job of deciding what's smart.