Thursday, November 29, 2012

We seem always to be looking for health benefits from things that might otherwise be seen as "guilty pleasures," such as red wine, coffee, and chocolate. And we also look for additional health benefits from things we do for a specific reason. A high-fiber diet, for example, may not only help keep you "regular" (don't you love our euphemisms for digestive functions?) but may also help lower cholesterol and reduce your risk of colon cancer.
So it comes as no surprise that medical scientists are investigating whether influenza vaccination may be good for something besides making you less likely to get influenza. They probably think we need extra motivation, because many of us don't take the flu that seriously, and we're not keen on getting shots. Maybe if we think it's good for more than just protection from the flu, we'll be more likely to go for it.
In this instance, Canadian researchers in cardiology have found that vaccination seems to have beneficial cardiovascular effects: specifically, a substantial reduction in the likelihood of heart attack, stroke, or death from cardiovascular causes.
When I started reading the article reporting this research, in a popular news outlet, I thought it was describing an association that could have any number of possible causes. We see so many studies of that sort. Scientists look at people with various health problems and see that those who got a particular intervention (like a vaccine) had fewer bad things happen to them. Then the question is always whether it was the intervention of interest that conferred the benefit, or just that people who received that intervention were receiving regular medical care of all sorts, and who knows what, among all the things done for them, was really responsible. Even when you try to adjust for differences in all of those other things, you can still miss confounding influences you didn't think of.
But this study was not reporting an association in search of a possible cause-and-effect relationship. These investigators took a population of patients (with a reasonable sample size) and randomized them to influenza vaccine or placebo. That's the kind of study it takes to see whether the one thing you're interested in is responsible for observed differences in outcomes. If the sample size is large enough, and the patients are randomized to one intervention or another, or intervention versus placebo, all of the other factors that might cause different results for the two groups of patients should be very similar, thus isolating the one difference you're studying.
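If you want to see that logic in action, here is a minimal simulation sketch in Python (the cohort, the coin flips, and the "regular medical care" trait are all invented for illustration; this is not the study's data). The point is simply that a coin flip splits any background trait almost evenly between the two arms, so the arms differ, on average, only in the intervention.

```python
import random

random.seed(42)

# Hypothetical cohort: each patient has some background trait that also
# affects outcomes -- say, "gets regular medical care" (invented for illustration).
patients = [{"regular_care": random.random() < 0.5} for _ in range(10_000)]

# Randomize: a coin flip assigns each patient to vaccine or placebo.
for p in patients:
    p["arm"] = "vaccine" if random.random() < 0.5 else "placebo"

# The background trait ends up nearly identical in the two arms.
for arm in ("vaccine", "placebo"):
    group = [p for p in patients if p["arm"] == arm]
    share = sum(p["regular_care"] for p in group) / len(group)
    print(f"{arm}: n={len(group)}, share with regular care = {share:.3f}")
```

With ten thousand patients, the two shares come out within about a percentage point of each other; with a small sample, the split is much rougher, which is one reason sample size matters.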
The results, as reported, are pretty striking: a 50% reduction in heart attack or stroke and a 40% reduction in mortality. And those results seemed to apply to patients both with and without previously diagnosed cardiovascular disease.
If this is real, the question is: why? There have been many studies over the years looking at the relationship between infection, inflammation, and bad things happening in blood vessels. Some studies have suggested that patients who'd been treated, for one reason or another, with certain kinds of antibiotics over the years seemed to have fewer heart attacks. Researchers guessed that certain infections might predispose to inflammation, and to the subsequent development of vessel-narrowing plaque, in the coronary arteries.
The U.S. Centers for Disease Control and Prevention is telling us everyone should get the flu vaccine every year. Many of us are not listening. I am in the camp of the skeptics, as I am in relation to just about everything. I want to see evidence that the vaccine substantially lowers my statistical likelihood not just of getting influenza but of becoming seriously ill with influenza, and that the magnitude of this benefit greatly exceeds the magnitude of the risk of a serious adverse reaction to the vaccine. Over the years I have found the evidence of such a favorable risk-benefit calculation to be reasonably convincing for older folks and those with chronic diseases (heart and lung diseases and diabetes), but not so convincing for younger and otherwise healthy people.
And I've been less than impressed with the scientific evidence that all health care workers should get the vaccine to keep from spreading the flu from their infected patients to others who are susceptible. It makes sense, but the evidence that it really works that way just isn't compelling. So I get my flu shot to keep my employer happy, but I remain skeptical.
But the idea of other benefits has definite appeal. I often take ibuprofen for various aches and pains. The fact that there is a little bit of suggestive evidence that it reduces the likelihood of developing Alzheimer's disease suits me just fine. If I'm going to take it anyway, an unexpected benefit is welcome. Now, that is an example of an association that may or may not reflect causality. No one has done a randomized, placebo-controlled study and followed patients long term, which is what you'd have to do, because that's a disease that develops over a period of many years.
In this case, however, the causality may be real, because the study was designed in a way that can establish it. Notice I say it may be real. Why am I still skeptical? Well, to begin with, I haven't read all the details of the study. I read a report in the popular press of the presentation of the study's results at a medical meeting in Toronto. I don't know whether the study has been accepted for publication in a reputable, peer-reviewed medical journal. Once that happens, if it does, I'll be able to read the paper and draw firmer conclusions about its results. Many papers are presented at meetings and never get published. And many papers that get published don't really prove what the authors think or say they do. And then, of course, any important scientific study should be reproducible - meaning that if other scientists conduct another study in the same way, they should get similar results. Reproducibility is essential to credibility in scientific investigation.
So all I can say right now is that this is very intriguing, and if it turns out to be real, we will all have another reason to get the influenza vaccine each year.
Friday, November 23, 2012
Breast Cancer Screening: Can We Think Too Pink?
The experts at the American Cancer Society (ACS) recommend screening for breast cancer by mammography every year for women over 40. Wow. That's a lot of testing. Judging by conversations I've heard, and overheard, and innumerable cartoons I've seen, mammography is not high on any woman's list of fun things to do. So maybe the recommendations of the U.S. Preventive Services Task Force (USPSTF) seem more appealing: every other year, starting at age 50, going through age 74.
[Before I go on, allow me a momentary digression into one of my pet peeves in the use of terminology. This is not cancer prevention. Cancer screening does not prevent disease. It may detect it early and make it possible to cure it, thereby preventing a cancer death. We don't know very much about preventing cancer.]
So how are we doing with mammography? Are we saving lives?
Eighteen months ago (5-28-2011) I wrote an essay for this blog on the general subject of preventive medicine and touched briefly on screening mammography. I mentioned a book by Welch and colleagues (Overdiagnosed: Making People Sick in the Pursuit of Health). Now Dr. H. Gilbert Welch is the second author (first author Archie Bleyer) of a paper published in the New England Journal of Medicine, one of the world's leading English-language medical journals. Bleyer and Welch posed this very question. They looked at three decades of data and found a substantial increase in detection, via mammography, of early breast cancer. They did not, however, find a corresponding reduction in the diagnosis of late-stage breast cancer. Specifically, cases detected early more than doubled, while cases diagnosed late declined by about 8%.
Why is that important?
If we've been advocating screening mammography for women over 40, and we say there has been a 28% decline in breast cancer deaths in this group, we might be inclined to put two and two together and say it's working. But if early detection isn't substantially reducing cases not diagnosed until more advanced stages, the logical conclusion would be that improved treatment, not earlier diagnosis, accounts for most of the reduction in mortality.
According to the ACS, two to four out of 1,000 mammograms lead to a diagnosis of cancer. Let's take the middle number (three) and ask what happens to the other 997 patients. They all get a note saying, "Negative again, thanks for choosing Pink Mammography Services, see you next year." Right? Well, not exactly. Some of them have findings that are not so straightforward. Some of them wind up getting additional tests, like ultrasound examination of the breast or MRI. Some of them undergo surgical biopsies. They spend a lot of time worrying about whether they are harboring a life-threatening malignancy before being told, ultimately, that the conclusion is a benign one.
What about the ones who are diagnosed with early breast cancer? Well, they all get treatment (assuming they follow their doctors' advice and recommendations). And each such case represents a breast cancer death prevented. Right? Assuming, that is, that the long-term outcome is that the woman dies from something else. (After all, that is one of the facts of life in a human body: the long-term mortality rate is 100%, sometimes quoted as one per person.)
Well, to be completely honest, we don't know. It is entirely possible that some of these early cases were never going to progress to advanced disease and eventually cause death. And the uncertainty about that was what led Bleyer and Welch to examine more than thirty years' worth of data.
These studies in the realm of epidemiology, public health, and the effects of medical interventions on large populations are difficult to do and more difficult to interpret. But again, the central finding by the authors was that a large increase (137%) in the detection of cases of early breast cancer was accompanied by a decline of only 8% in the rate of detection of late-stage breast cancer. And this suggests that other factors, such as more effective treatment of cases detected at later stages, are playing a substantial role in the reduction of the mortality rate from breast cancer.
One of the things we know about the behavior of some cancers is that there are people who harbor these diseases for many years and ultimately die from something else. Thus it is reasonable to surmise that some women with breast cancer that can be detected by mammography when they have no symptoms would, if never diagnosed, live many more years and go on to die from an unrelated cause. How common is that?
The answer from Bleyer and Welch:
After excluding the transient excess incidence associated with hormone-replacement therapy and adjusting for trends in the incidence of breast cancer among women younger than 40 years of age, we estimated that breast cancer was overdiagnosed (i.e., tumors were detected on screening that would never have led to clinical symptoms) in 1.3 million U.S. women in the past 30 years. We estimated that in 2008, breast cancer was overdiagnosed in more than 70,000 women; this accounted for 31% of all breast cancers diagnosed.

Now we have some numbers to ponder. Even if the 1.3 million over three decades, the 70,000 in the year 2008, and the 31% are not exactly right, they should give us pause. At the very least, they should tell us that we need to know much more about how to figure out which early breast cancers really need treatment and which cases may not.
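Just as a quick arithmetic check on how those figures hang together (using only the numbers quoted above, nothing more): if the 70,000 overdiagnosed cases were 31% of all breast cancers diagnosed in 2008, the implied total for that year is about 226,000 diagnoses.

```python
overdiagnosed_2008 = 70_000   # from Bleyer and Welch, as quoted above
overdiagnosis_share = 0.31    # 31% of all breast cancers diagnosed

total_diagnoses_2008 = overdiagnosed_2008 / overdiagnosis_share
print(f"Implied total 2008 diagnoses: {total_diagnoses_2008:,.0f}")  # ~225,806
```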
So what do you do as an individual woman? First, and especially if you are between 40 and 50, you have to decide whether to follow the ACS or USPSTF recommendations. Talk to your doctor, and hope he or she really understands the science well enough to answer your questions. If your mammogram is abnormal, no one is going to tell you that you should just wait and see. And these data don't tell us that's a good idea, because we cannot tell which cases detected early will ultimately be a threat to life and which ones will not.
But when we look at these numbers on a population scale, we should ask ourselves what impact screening for early detection is really having on outcomes. After all, everything we do in health care costs money, and the supply is limited. We must always ask ourselves the kinds of tough questions that are answered by what policy wonks call a cost-effectiveness analysis, where the amount of money devoted to something is looked at in terms of dollars per "quality-adjusted life-year" (QALY).
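To make the QALY idea concrete, here is a stylized example with invented numbers (not an estimate for mammography or any real program): a program that costs $10 million and is credited with 500 quality-adjusted life-years works out to $20,000 per QALY, a figure that can then be compared with other uses of the same money.

```python
# Stylized cost-effectiveness arithmetic; all numbers invented for illustration.
program_cost = 10_000_000   # dollars spent on the hypothetical program
qalys_gained = 500          # quality-adjusted life-years credited to it

cost_per_qaly = program_cost / qalys_gained
print(f"${cost_per_qaly:,.0f} per QALY")  # $20,000 per QALY
```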
If the patient is you, or a loved one, no amount of money seems too much to save a life. But as a society, we must recognize that resources are finite and figure out where the money will do the most good for the most people. If we're wondering - and we always should be - about the value of screening tests, Bleyer and Welch have given us more to think about.
Saturday, November 17, 2012
Obamacare: The National Road to Where?
In the middle of the 18th century, George Washington was a young colonel in the Virginia militia, assigned to do something about the competition between the French and the British colonists for the trade with Native Americans. The exchange of manufactured goods desired by the Indians for the fur pelts they could provide was a very profitable business. The British could usually offer better trading deals, but the French had been cultivating the friendship of the Indians much longer. Each saw the other as infringing on its territory, and the Battle of Fort Necessity, in the Laurel Highlands of southwestern Pennsylvania, launched the French & Indian War, a microcosm of the global conflict between the French and British imperial powers, known in Europe as the Seven Years' War (1756-1763).
Washington's early experience in the region led him to believe that a good road from the East through the Allegheny Mountains was essential to development of Western lands and expansion of what was to become a new nation. His vision later became the National Road from Cumberland, Maryland to Wheeling, Virginia. (That part became West Virginia during the Civil War, and the National Road later carried travelers much farther west.) Very substantial funding was approved when Thomas Jefferson was president. Jefferson worried that the project would become a great sinkhole into which money would disappear, imagining that every member of Congress would be trying to secure contracts for friends. It is unclear whether Jefferson was the first to worry about profligate federal spending on a "pork barrel" project, but what he wrote at the time was to be mirrored in many criticisms of public spending over the next two centuries.
Alexander Hamilton had a vision of a powerful central government that would collect taxes and spend money, helping to expand the United States with federal subsidies for "internal improvements" such as roads and canals and a strong central bank to foster commerce. This was developed further, in the second quarter of the 19th century, into Henry Clay's "American System."
All along the way, there have been powerful dissenting voices. Just as Hamilton and Jefferson had opposing views on the merits of a strong central government, there have, ever since, been dramatic differences in political philosophy between those who believe in using the power of the federal government to tax and spend to "provide for the general welfare" (the phrase used in the Constitution's description of the powers of Congress) and those who believe most of these important functions should be carried out by the states or left in private hands.
Hamilton and Jefferson never imagined public financing of the nation's system of health care, likely at least in part because two centuries ago health care had relatively little to offer. Louis Pasteur, who deserves much credit for development of the germ theory of disease, wasn't born until 1822, and antibiotics would have to wait another century to begin to cure our ills. In the run-up to the enactment of Obamacare, some students of history pointed to a 1798 law providing for public funding of hospitals to care for sick and disabled sailors, paid for through a monthly deduction from seamen's wages.
That 1798 law created what became the Marine Hospital Service, an early system of government-run care for a specific group; its closest modern analogue is the health care system run by the Department of Veterans Affairs (VA). Whether one can - or should - extrapolate from a system of health care for veterans to a national health service (like the one in Britain) that takes care of everyone is very much an open question. For each veteran of our armed services who praises the VA health care system and relies upon it exclusively, it is easy to find another who is glad to have access to the private system and sees it as vastly superior in quality.
Obamacare does not create a National Health Service.
But there are many among both its proponents and its enemies who see it as the first steps down that National Road.
Some, including your faithful essayist, see it as a move - welcome, but insufficient - toward universal coverage. As you know if you are a regular reader, I regard our status as the only nation in the industrialized West that fails to provide universal coverage for, and universal access to, health care as a national disgrace.
The new law requires everyone to have health insurance. This is to be accomplished through a hodgepodge of mechanisms, from mandated purchasing (with subsidies for the needy) through health insurance exchanges to expansion of the publicly funded Medicaid system for the more severely needy. But the enforcement mechanism for the mandate is weak, the subsidies are likely to prove inadequate, and the states have been told by the Supreme Court that Congress cannot make them expand Medicaid. So patients, doctors, and hospitals are waiting, none too optimistically, to see how this all plays out.
I may be among the least optimistic. I believe the number of uninsured, now standing at about 50 million, will drop by no more than half in the next decade unless we do much more to change the way health care is financed in this country.
The question remains how we should go about it. Should we have a mix of public and private mechanisms for financing purchase of health care services, such as we have now? If so, how will we cover everyone, when so many will continue to find private insurance unaffordable? If we address that problem by subsidizing the purchase of private insurance very generously, how can we avoid enriching the health insurance industry (and its "fat cat" CEOs and stockholders)? To carry that a step further, is it even possible to expand health insurance to cover everyone without either enriching or eliminating the private health insurance industry? Either we subsidize the purchase of private health insurance so generously that everyone can afford it (and it becomes even more profitable than it is now), or we fail to do so, in which case we must expand public financing so greatly that everyone who can move into the less-costly public system will do so, and the private health insurance industry will serve only the most affluent.
I do not claim to have the answers to these questions. I have opinions about what would work, what would be efficient, and what it would take to ensure high quality. But there are powerful interests opposing change, and vast swaths of the general public stand opposed to change, because they are satisfied with what they have in the current system. If you are not suffering, it is more difficult to see how things could be so much better.
Remember Washington, looking at the precursor to the National Road. Soldiers, though they might curse it, could march along that road. Horses could negotiate it, if not without many a stumble and an occasional fall. Wagons could make it through in good weather, though they were likely to get mired down if it had rained recently. Washington thought it should be wide and smooth. Think about your road to readily accessible, high-quality health care. Have you gotten mired down in that muck? Should we not build a road that is wide and smooth?
We should. And we must.
Friday, November 9, 2012
Legalization of Marijuana
Earlier this week voters in Colorado and Washington state approved ballot measures legalizing recreational use of marijuana. While similar proposals have fallen short of approval in California, it is reasonable to surmise that other states will follow, and a trend will emerge. Supporters are applauding the end of "prohibition," likening this development to the adoption of the 21st Amendment to the United States Constitution in 1933, repealing the 18th Amendment (1920).
As you know if you read my essay "Temperance and the Addict" last June, I have an academic interest in the use of mood-altering substances, and the effects of changing the status of marijuana from illegal to legal will surely be fascinating to observe.
["Medical marijuana," by the way, is a subject to which I haven't paid much attention, because I think the scientific literature on that is more confusing than enlightening, and I have been inclined to agree with those who perceived the movement to legalize the drug for medicinal purposes to be nothing more than a smokescreen (pun intended) for legitimizing recreational use.]
In the practice of emergency medicine I frequently see the consequences of the choices people make about the use and abuse of tobacco and alcohol. Much has been written comparing marijuana with those two legal drugs. Even an overview of those comparisons would take a book chapter rather than an essay. For example, the long-term health effects of tobacco and alcohol are well understood. Marijuana, not so much.
A column ("The End of the War on Marijuana") published at CNN.com caught my eye early this morning. The writer, Roger Roffman, is a professor emeritus of social work at the University of Washington and a supporter of the legalization measure in that state. Roffman made two observations that I found especially noteworthy:
Far too many teens think smoking pot is "no big deal," greatly underestimating the risk of being derailed from social, psychological and educational attainment. Far too many adults don't take seriously enough the risk of marijuana dependence that accompanies very frequent use.

Roffman has written books on medicinal marijuana and the treatment of marijuana dependence, and his next book (titled A Marijuana Memoir) is sure to be interesting. It's my sense that he knows his subject well, and so his observations worry me more than a little.
One of the aspects of substance use that I often think about is direct cost to the consumer. While the price of a pack of cigarettes varies with state taxes, not to mention choice of brand versus generic or purchase by the pack or carton, I have a pretty good idea what people are spending on tobacco when they tell me how much they smoke. I cannot help being amazed that some people whose resources are very limited choose to spend thousands of dollars a year on cigarettes.
It was my sense that prices for marijuana are quite variable, and a Web search for information proved that to be correct. I found a wide range of reported "retail prices," mostly between $200 and $1,000 per ounce. Knowing next to nothing about smoking the stuff, I did more research to discover that people typically roll joints containing anywhere from one half to one gram each, which means one might get anywhere from 30-60 joints from an ounce. At $300/ounce, a joint might cost $5 to $10. This was an eye opener for me. I had figured marijuana was much more expensive than tobacco per cigarette, but I didn't realize one joint cost as much as a pack of cigarettes.
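Here is that back-of-the-envelope calculation spelled out as a little Python sketch, using the mid-range $300/ounce figure from my search (the prices and joint sizes are the rough, self-reported numbers above, not authoritative data):

```python
GRAMS_PER_OUNCE = 28.35

price_per_ounce = 300.0    # mid-range of the retail prices I found
joint_sizes = (0.5, 1.0)   # grams per joint, per the typical reports

for grams in joint_sizes:
    joints_per_ounce = GRAMS_PER_OUNCE / grams
    cost_per_joint = price_per_ounce / joints_per_ounce
    print(f"{grams} g joints: ~{joints_per_ounce:.0f} per ounce, "
          f"~${cost_per_joint:.2f} each")

# A heavy user at 5 grams/day (see the usage patterns below):
print(f"5 g/day costs ~${5 * price_per_ounce / GRAMS_PER_OUNCE:.0f}/day")
```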
Of course, users don't typically smoke 20 joints per day. As I browsed the Web, the self-reported patterns I found for regular users were mostly in the range of 1-5 grams per day, with one person claiming 15 grams per day when he had large amounts of money (and could afford to spend large amounts of time completely disconnected from reality).
So the bottom line on the expense for regular users seems to range from roughly what a pack-a-day cigarette smoker spends, at the low end (one joint per day), to much more than that (perhaps $50/day) for heavy users. I'm guessing the heavier users are able to get better prices by going up the supply chain and paying wholesale prices. Smoke what you want and sell the rest.
We know a lot about the damaging effects of smoking tobacco on health: heart attack, stroke, peripheral arterial disease, COPD, and lung cancer, just to name the most common problems. We know far less about what regular heavy smoking of marijuana does to the lungs, but I think we are likely to have much more data in the years to come.
We also know a good deal about the harmful effects of alcohol, including disease of the liver and other organs in the digestive system as well as the brain. In emergency medicine, while we see plenty of that, we see a great deal of trouble caused by acute intoxication, especially motor vehicle crashes.
So I wanted to find out how smoking a reefer might compare with alcohol consumption for getting behind the wheel. As you might imagine, the data are not abundant (yet), but here is what the National Highway Traffic Safety Administration (NHTSA) says:
Marijuana has been shown to impair performance on driving simulator tasks and on open and closed driving courses for up to approximately 3 hours. Decreased car handling performance, increased reaction times, impaired time and distance estimation, inability to maintain headway, lateral travel, subjective sleepiness, motor incoordination, and impaired sustained vigilance have all been reported. Some drivers may actually be able to improve performance for brief periods by overcompensating for self-perceived impairment. The greater the demands placed on the driver, however, the more critical the likely impairment. Marijuana may particularly impair monotonous and prolonged driving. Decision times to evaluate situations and determine appropriate responses increase. Mixing alcohol and marijuana may dramatically produce effects greater than either drug on its own.

I must admit I find this more than a little scary if legalization makes smoking and driving anywhere near as common as drinking and driving is now. And drinking and driving would likely be much more common if it were not for fairly strict enforcement of laws against that. Will we have laws against driving under the influence of marijuana in all the states where recreational use is legalized? How will such laws be enforced? There is no recognized field sobriety test or breath test, and the correlation between blood levels and clinical effects is very uncertain.
One news story about the new law in Colorado said the state expected, through the regulation of sale and the imposition of taxes, to bring about $60 million a year into the state treasury, while saving about $75 million in costs to the penal system associated with the criminalization of sale, possession, and use of marijuana. That's a lot of money going to the state's bottom line.
The obvious question: will it be worth it? The answer: we don't know, because we have no way to calculate the cost of much more widespread use resulting from its effects on health and behavior. If the record of tobacco and alcohol is even modestly instructive, we may be in for some rude surprises. As my regular readers know, I think we do an awful lot of things in our society as if we have never heard of the Law of Unintended Consequences.
Friday, November 2, 2012
The Electoral College: Why or Why Not?
Every four years when we go to the polls to vote for a presidential candidate, most of us are dimly aware that we are really voting for presidential electors and that weeks later they will meet as the Electoral College and cast their ballots. We don't think too much about it, because the Electoral College usually reflects the will of the American people as expressed in the popular vote totals.
But there is always the possibility that the popular vote and the Electoral College vote will go in opposite directions. Many say that was the case in 2000, when by most accounts Al Gore received more nationwide popular votes than did George W. Bush, who won the Electoral College (after the dispute over Florida's vote was settled by the U.S. Supreme Court). There are enough doubts about the popular vote totals, including such questions as counting of absentee ballots, that the picture for 2000 is not entirely clear. Suffice it to say that it was a very close popular vote, and by that measure the winner may well have been Mr. Gore.
But that isn't what determines the winner. Instead, the popular vote of each state determines the votes of its electors (except Nebraska and Maine, which split their votes if the statewide winner does not also win each congressional district). And that's why it is possible to win the nationwide popular vote but not the Electoral College vote (or the reverse).
There are 538 votes in the Electoral College, and a candidate needs a majority (meaning 270) to win. So, imagine that the vote is 270-268. Imagine further that the candidate with 270 votes won his states with slim majorities, but the candidate with 268 votes won his states by landslide votes. Obviously, the "losing" candidate would then have a hefty popular vote majority. While that sort of thing has never happened, it is quite common for popular vote and electoral vote majorities to be widely disparate. For example, in 1984, Ronald Reagan won 58.8% of the popular vote and 97.6% of the electoral votes. In 1968, Richard Nixon defeated Hubert Humphrey by a mere 0.7% in the popular vote, while the electoral vote was 301-191 (with 46 for third-party candidate George Wallace).
If you are a serious numbers cruncher (like the folks at - where else? - MIT), you could do the math and find out that two candidates could achieve an approximate tie in the popular vote, while achieving an exact tie in the electoral vote, or one could win 538-0, or anywhere in between.
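You don't need MIT-level math to see how the two votes can diverge, though. Here is a toy computation (three hypothetical, equal-population states; every number invented for illustration): a candidate who wins the decisive states narrowly and loses the rest in a landslide takes two thirds of the electors with barely more than a third of the popular vote.

```python
# (state, electoral_votes, candidate_A_votes, candidate_B_votes)
# Three hypothetical equal-population states; all numbers invented.
states = [
    ("X", 10, 51, 49),   # A wins narrowly
    ("Y", 10, 51, 49),   # A wins narrowly
    ("Z", 10, 10, 90),   # A loses in a landslide
]

a_ev = sum(ev for _, ev, a, b in states if a > b)
b_ev = sum(ev for _, ev, a, b in states if b > a)
a_pop = sum(a for _, _, a, _ in states)
b_pop = sum(b for _, _, _, b in states)

print(f"Electoral votes: A {a_ev}, B {b_ev}")   # A 20, B 10
print(f"Popular vote: A {a_pop} ({a_pop / (a_pop + b_pop):.0%}), B {b_pop}")
```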
This year both the electoral vote and the popular vote could be very close, and they could easily go in opposite directions. So that tells us the first objection to the Electoral College: why should we have a system in which the nationwide popular vote does not determine the winner? That's how it is for the United States House and Senate - at least since we adopted the 17th Amendment to the Constitution and decided to elect senators directly instead of through our state legislatures.
The origin of the Electoral College is quite simple. Like the composition of the Congress, it was based on a compromise between the more populous and less populous states. Delegates to the Constitutional Convention from the less populous states were afraid that the new Congress would be controlled by representatives from the larger states. The compromise was that representation in the House would be proportional to population, while in the Senate, all states would be equal, with two senators each. The Electoral College is a blend, the number of electors being equal to the number of representatives plus the number of senators. Thus the least populous states (Wyoming, for example), with only one representative in the House, have three electors (because they have two senators, like all states).
Looking at it this way, small (in population) states are "overrepresented" in the Electoral College. Wyoming has 3 electors; if California, the most populous state, had a number of electors corresponding to population ratio (relative to Wyoming), it would have 199 instead of 55. Of course that is the extreme spread, and most of the overrepresentation for states with fewer people is not so dramatic. But this overrepresentation is perceived as a violation of the "one person, one vote" principle and therefore antidemocratic.
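That 199 figure is easy to reproduce. Here is the arithmetic, assuming (my assumption, since the source isn't stated) that the comparison uses 2010 census populations; slightly different population estimates land on 198 or 199.

```python
wyoming_pop = 563_626        # 2010 census
california_pop = 37_253_956  # 2010 census
wyoming_electors = 3

# Electors California would have at Wyoming's people-per-elector rate:
proportional_ca = california_pop / wyoming_pop * wyoming_electors
print(f"California at Wyoming's rate: ~{proportional_ca:.0f} electors (actual: 55)")
```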
The Electoral College has other antidemocratic effects. Some would say that elections bring the opportunity to connect the candidates with the voters, and that connection is important to the expression of the will of the people. Ask anyone who has been in Iowa or New Hampshire at the beginning of the primary season about a sense of direct connection. (For most of us, there is little sense of such connection. The only time I met a presidential candidate in person, he wasn't even a candidate yet. It was 1964, at the Democratic National Convention in Atlantic City, and the candidate (for 1968) was Bobby Kennedy.) But what is the effect of the Electoral College?
Do you think Obama and Romney are spending much time in California or New York, where the polls show Obama with huge leads? No, they are in "battleground states," where the polling is close. You want to see these candidates? Live in a 50-50 state with plenty of electoral votes. (My personal view is thanks but no thanks, because it really fouls up traffic when they are in town.) So big states and small states can be equally disadvantaged. The candidates ignore New York and California, not to mention Texas and Illinois, every bit as much as they ignore Wyoming or the Dakotas.
A recent article on CNN.com noted that people in Hawaii don't vote in presidential elections - at least not as much as people in the other 49 states. In 2008, voter turnout there was 48.8%, compared to Minnesota, at the top of the heap, with 77.8%. Say what you will about the "beach bum" mentality some residents of the Aloha State may have, I think the reason is very simple. It's difficult to find the motivation to vote when it doesn't matter. Hawaii has few electoral votes, they almost always go to the more liberal of our major parties, and because of the time zone, the race has very often been projected before the polls close there.
How likely are you to vote if you think it really doesn't matter? I live in Pennsylvania, where the polls are close this year. But I can tell you it's really tough to find the motivation to vote in the primary, because the races for the presidential nominations have almost always been settled by late spring, when ours is scheduled. If you need to feel like it matters, are you more likely to vote if you live in Ohio, and you keep hearing about how close the race is and how no Republican has ever won the White House without winning Ohio? Or New York, where polling shows Obama with a 25-point lead?
If the Electoral College discourages people who don't live in "battleground states" from going to the polls, that is a bad thing. If abolishing it would have the opposite effect, maybe that's worth some serious consideration.
Remember, that would require a constitutional amendment. After passing both houses of Congress by a two-thirds vote, it would have to be ratified by 38 state legislatures - three fourths of the 50 states. Won't 13 or more of the least populous states vote against it to preserve their overrepresentation? Yep. It's a long shot.