Last month, in the final weeks, and then days, preceding the presidential election, the pundits were talking about how close the race seemed to be. They were intently focused on the "battleground states." They worried (or were they really rubbing their hands together in eager anticipation?) that some of the state vote tallies might be so close as to trigger automatic recounts. And that could lead to legal challenges. There were echoes of Florida in 2000. It didn't turn out that way, probably to the disappointment of some journalists (and maybe lawyers) and the relief of everyone else.
But it brought to mind conversations I'd had during the six weeks or so of the contested election of 2000, that period between Election Day and the U.S. Supreme Court's ruling. Perhaps, I said to the medical students and residents I was supervising in the emergency department, neither Bush nor Gore should want to be the winner, because of the tragic history of presidents elected in years ending in zero.
I noted that the last president elected in a year ending in zero, Ronald Reagan (1980), had been shot soon after taking office and come perilously close to death. Who, I asked them, was the last president before Reagan to be elected in a year ending in zero and not die in office?
I didn't expect anyone to know the answer straight away. I wanted to see how they would approach the question and what they knew of presidential history, which I would be able to tell if they tried to work their way back through the years that were multiples of 20 - the presidential election years ending in zero.
Nearly all (I was aghast that it was not all) knew that John F. Kennedy was elected in 1960 and assassinated. But things went downhill quickly from there. Most did not know that Franklin Roosevelt was elected in 1940 (and, of course, also 1932, 1936, and 1944) and died in office (in the spring of 1945, from a cerebral hemorrhage), to be succeeded by Harry Truman.
As I recall, exactly no one knew that Warren Harding was elected in 1920, died in office, and was succeeded by Vice President Calvin Coolidge. I believe one person knew William McKinley was elected in 1900, assassinated by an anarchist, and succeeded by Teddy Roosevelt.
1880: James Garfield, elected that year, was shot in 1881 by a psychotic man who believed Garfield should have recognized the work he'd done on behalf of the presidential campaign (which was trivial) and appointed him to an important job. The bullet wound was actually not that bad - nowhere near as serious as the one that almost took the life of Ronald Reagan. Garfield was more a victim of the terrible medical care he received. No one knew that. I can't say I was surprised, but I was disappointed just the same, if only because the medical part of the story was so important.
Truly appalling, however, was that not everyone knew 1860 was the year of Lincoln's first election, although of course everyone knew he'd been assassinated.
1840 offers yet another especially interesting piece of presidential history. William Henry Harrison was elected. In those days inauguration was on March 4th. (It didn't move into January until FDR was president. The nation decided that the wait until nearly spring for Roosevelt's inauguration had been too long, given the urgency of addressing the economic woes of the Great Depression. So 1933 was the last year it was March 4th, and it was then moved to January 20th). Harrison gave his inaugural address outdoors. The weather in early March in Washington, D.C. is usually mild, but in 1841 it was not. Harrison did not wear an overcoat. A month later he died of pneumonia. Causality, of course, is open to question, but the result was that Harrison became the first U.S. president to die in office.
Did anyone know any of this? No. Not a single one of the young medical trainees knew anything about Harrison or his vice president, John Tyler. What a shame, because it's a fascinating story.
Harrison, you see, was not a politician, but a war hero, best remembered for the Battle of Tippecanoe. (Remember the campaign slogan, "Tippecanoe and Tyler, too?") He was recruited by the Whig Party, which was desperate to win the White House. The Whig party was opposed to just about everything Andrew Jackson (Democratic president elected in 1828 and 1832) stood for, but they had learned from dealing with the hero of the Battle of New Orleans (1815) just how popular war heroes can be. Harrison was perfectly willing to run on the Whig party platform, which was very short on detail, as he had no fixed political principles of his own. The Whigs then needed a candidate for the #2 spot on the ticket. Harrison was from Ohio, so they looked south of the Mason-Dixon line for geographic balance and asked John Tyler of Virginia. Tyler was a Democrat, but apparently not a loyal Democrat, as he agreed to run for VP as a Whig.
Then Harrison died, and Tyler assumed the office. This had never happened before, and the Constitution was not entirely clear on how it should work. The Constitution said that in the event of the death of the president, the "powers and duties" of the office "shall devolve on the Vice President." What wasn't clear, however, was whether the VP actually became president, with all of the accoutrements of the office, or whether he was just the acting president.
It probably wouldn't have been a big deal if Tyler had been a loyal Whig. But not only was he really, after all, a Virginia Democrat who differed with the Whig party's principles in important ways, but he started doing something earlier presidents had generally not done: vetoing bills passed by Congress because he didn't like them.
[Before Tyler, presidents typically vetoed a bill only if they could plausibly contend it was unconstitutional. Nowadays we think of it as the Supreme Court's job to address such questions (which it does only if a challenge is brought before it), established very early in the 1800s by Chief Justice John Marshall as the doctrine of judicial review, and presidents routinely veto bills with which they disagree. But the president is sworn to uphold the constitution and certainly shouldn't sign into law a bill he thinks is unconstitutional.]
Congressional Whigs were livid, and came very close to mustering the votes to impeach Tyler. They refused to call him the president, referring to him as the acting president - or "His Accidency."
No one I asked knew any of this. Nor did they know that our fifth president, James Monroe, had been elected in 1816 and 1820 and served his two terms in full, the last president before Reagan to be elected in a year ending in zero and survive to the end of his elected tenure.
So I did not expect anyone to respond to my initial question by saying, "James Monroe!" I just wanted to see what they knew of presidential history. Damned little, as it turned out.
Far too few of us show much interest in history in grade school or college, and I'm sure it's at least partly because there are too few history teachers who make the subject interesting, as some of mine did. This is a terrible shame. American history is fascinating, and studying presidential history is a wonderful way to approach it. Thomas Carlyle said, "The history of the world is but the biography of great men." We've had forty-three presidents (Grover Cleveland, who served two non-continuous terms, is counted twice to get to #44 for Barack Obama). The history of our nation is most certainly the history of these men, especially if you read the sort of expansive biography - the man and his times - that is the kind really worth reading.
I tell residents who train at our institution that there is more to learn in the emergency department than medicine. A sense of history is something I hope to give each of them, at least a little bit, by the time they've completed the three years I have to influence their lives.
Tuesday, December 25, 2012
Soak the Rich
On Christmas Day, we are filled with thoughts of charity toward our fellow man. We also think more about all things spiritual, such as the immortality of the soul and the afterlife. Those who believe in Heaven have all heard that it is easier for a camel to fit through the eye of a needle than for the rich to gain entrance to Heaven.
Pastor Rick Warren was shown, in a video clip I saw several times on CNN yesterday, saying that it is not a sin to be rich, but it is a sin to die rich. That is a fine argument for philanthropy.
And so, when CNBC's Website yesterday ran an article (titled "Majority of Rich Want Themselves Taxed More") about how the well-to-do say they are willing to pay higher taxes, it seemed entirely in keeping with the Christmas spirit.
Sadly, but not surprisingly, reading beyond the headline revealed that this was just another example in an unending stream of shoddy journalism.
The article cited a poll done by American Express Publishing and The Harrison Group showing that "67% of the top one percent support higher income taxes." More specifically, "more than half" support higher taxes on those making more than $500,000 per year. (In case you're wondering, estimates of what household income places you in the top one percent range from about $400,000 to over $500,000.)
"More than half" of the top one percent? That's a very interesting breakpoint. If you do a bit of research, there is a very substantial difference in both income and wealth
between the bottom and top halves of that top one percent.
The bottom half includes professionals like doctors and lawyers who are comfortably upper middle class but not truly wealthy.
The top half of that one percent - and especially the top 0.1% of all taxpayers - is where the real concentration of wealth is found.
The CNBC article goes on to tell us that "Other recent surveys show that the wealthy support higher taxes as part of a balanced solution to the government debt problem that includes spending cuts."
A "balanced solution" must include spending cuts? That's what we've been told. How about some arithmetic? I know, it's Christmas Day, and you'd like to give your brain a rest, but don't worry, I've done the arithmetic, so all you have to do is read it.
The top one percent represents about 1.4 million taxpayers, with an average taxed income of about $1.3 million. If we were to add a 4% surcharge (which is what the president has proposed) to the top marginal tax rate on the amount of income taxed at the top rate (which means the income above about $180,000/year), that would generate about $62.72 billion in additional revenue. The federal budget deficit for the current fiscal year is about $1.372 trillion. So this surcharge on the top one percent of taxpayers would reduce the budget deficit by about 4.6%.
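For anyone who wants to check that back-of-envelope arithmetic, here is a minimal sketch of the same calculation as a few lines of Python. The figures are simply the rough estimates quoted above (1.4 million taxpayers, $1.3 million average taxed income, a top bracket starting near $180,000, a 4% surcharge, a $1.372 trillion deficit), not official data, so treat the output as the approximation it is.

    # Back-of-envelope sketch of the calculation above (rough estimates, not official data)
    top_one_percent_taxpayers = 1.4e6   # about 1.4 million taxpayers in the top one percent
    avg_taxed_income = 1.3e6            # average taxed income, about $1.3 million
    top_bracket_threshold = 180e3       # income above roughly $180,000 is taxed at the top rate
    surcharge = 0.04                    # proposed 4% surcharge on the top marginal rate
    deficit = 1.372e12                  # federal budget deficit, about $1.372 trillion

    extra_revenue = top_one_percent_taxpayers * (avg_taxed_income - top_bracket_threshold) * surcharge
    print(f"Additional revenue: ${extra_revenue / 1e9:.2f} billion")  # about $62.72 billion
    print(f"Share of the deficit: {extra_revenue / deficit:.1%}")     # about 4.6%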
The House Republicans don't want higher taxes on anyone, but they have suggested they could go for a cut-off of $1 million, while the president is seeking higher taxes on incomes above $250K.
You have to be careful about believing the numbers you read. The National Center for Policy Analysis, which espouses free-market ideals and doesn't like taxes or government regulation, says the proposed higher taxes on those with incomes over $250K would bring in an additional $42 billion. That cannot be correct: that proposal would raise taxes on far more people than just the top 1%, so it would have to generate more than the $62.72 billion I estimated above, and the NCPA does not claim to have taken reduced economic growth (which economists say would result from higher taxes) into account. The NCPA needs a reality check itself!
Do I claim my numbers are accurate? Of course not. First of all, I took only a year of economics in college, and I've been only a casual student of economics in the 35 years since I graduated. Second, calculations made by even the most erudite economic theorists rest on a set of assumptions, any of which could turn out to be wrong.
The point, however, remains: we cannot tax our way out of these enormous deficits. We must dramatically reduce government spending. The "soak the rich" approach may make those of us in the other 99% feel better, and at least some of the wealthy seem to be eager to pay higher taxes, but it will not come close to closing the gap.
(If you haven't heard Warren Buffett's preaching about this, you've been hiding under a really big rock.)
We all have different ideas about how to cut spending, from getting out of wars and ceasing to be the world's policeman to cutting off entitlement programs for those who are too lazy to be productive citizens and contribute to society instead of sponging.
The point is that if we don't soon have the kind of political leadership it will take to build national consensus about how to reduce government spending, the drag on our economy created by spectacular annual deficits and mind-boggling national debt will cause the most powerful economic engine in the history of the world to grind to a halt.
Saturday, December 22, 2012
A Culture of Violence
Eight days ago a disturbed 20-year-old man visited unimaginable horror upon an idyllic town in New England. Twenty children and six adults at an elementary school in Newtown, Connecticut fell victim to this violent attack.
Much attention has been focused on the implements used in such mass murders, especially a semiautomatic rifle chambered in a caliber called .223 Remington, also known as 5.56mm NATO. The latter designation has drawn particular note, and such a rifle is often described in news reports as a "military style assault weapon." Those who are careful in their use of terminology point out that the rifles used by the military are designed to be fired in fully automatic mode (squeeze and hold the trigger, and bullets are launched from the muzzle in rapid succession until the magazine is empty), and that only such a "machine gun" is accurately described as an "assault rifle."
When the problem brought into relief by such a terrible incident is captured by the phrase "gun violence," it is easy to understand the focus on the gun. But when we consider that the number of guns in the United States is nearly as large as the number of people, and the overwhelming majority of these guns are never used in acts of violence, it seems logical to suggest that our focus should be on the behavior rather than the implement.
So, while it has been said many times by people on both sides of the debate about the role of firearms in our society that America has a "gun culture" (see, for example, William R. Tonso's 1990 book The Gun Culture and Its Enemies), I believe it may be more helpful to focus on the fact that America has a culture of violence.
Many social scientists have studied and written about the way violence is portrayed in entertainment media. The word "glorified" is often used, and much attention is directed to violence in movies, television shows, and - especially - video games.
When I was growing up I wondered about Saturday morning cartoons. Characters in those programs routinely did terrible things to each other. In the Road Runner cartoons, Wile E. Coyote was endlessly getting blown up or having large, heavy objects fall upon him from great heights. In Tom & Jerry, literally a running cat-and-mouse game, awful things happened to the cat several times per episode. Vivid in my memory is a scene in which Tom (the cat) is struck a fearsome blow to the head with a frying pan by Jerry (the mouse) while he is sleeping. I was especially intrigued by the fact that these cartoon characters nearly always appeared in the very next scene as though nothing had happened to them, and I thought this might mislead young children about the consequences of violent acts. I have since decided that young children are probably a little better than we give them credit for at distinguishing fantasy from reality, and perhaps when the fantasy is a cartoon it is that much easier.
In television shows and cinema, however, it is all about realism. Directors and cinematographers are very much intent on the goal of making everything that happens look quite real. Psychologists have worried about how this may desensitize viewers to violence. I am quite certain it does exactly that. The only question is how much, and with what consequences.
Video games bring an entirely new dimension to this discussion. Their realism is ever greater as the technology advances. Unlike the violence we see on television shows and in the movies, when we play these video games we are not passive spectators. We are the actors.
A quick overview of some of the titles may lead one to suspect the violent nature of the content. With names like Assassin's Creed, Mortal Kombat, Call of Duty: Modern Warfare, and Sniper Elite, it is easy to imagine what participants in these games may be doing. Even the television commercials advertising these games can be more than a bit disturbing, and when the voice-over notes that a game is rated "mature," it is easy to understand why.
Have you ever used Google's specialized search functions? Mostly I use ordinary "search" or "maps." (Regular readers know that for some of my essays I use Google Images to find appropriate graphics. I haven't done that for this one, because the subject is so "graphic" when we just read or talk about it that I didn't think it needed any help.) I am quite fond of Google Scholar, which has become an excellent way to find out what research has been published on a particular subject.
I could tell you that entering the combined search terms "video games" and "violent behavior" yielded 43,000 "hits." But it drives me bonkers when speakers at a scientific meeting show a slide that depicts such results, because that doesn't tell you how many of the results might just be researchers endlessly citing their own and others' work. Most important, it doesn't tell you how much good or important or illuminating research has been published.
So it is necessary to do some careful reading of the stuff we get to by clicking on the links. And thus I'd like you to know about Craig A. Anderson, Ph.D., Distinguished Professor and Director of the Center for the Study of Violence at Iowa State University.
A 2004 paper by Anderson in the Journal of Adolescence introduces the subject:
For many in the general public, the problem of video game violence first emerged with school shootings by avid players of such games at West Paducah, Kentucky (December, 1997); Jonesboro, Arkansas (March, 1998); Springfield, Oregon (May, 1998), and Littleton, Colorado (April, 1999). More recent violent crimes that have been linked to violent video games include a school shooting spree in Santee, California (March, 2001); a violent crime spree in Oakland, California (January, 2003); five homicides in Long Prairie and Minneapolis, Minnesota (May, 2003); beating deaths in Medina, Ohio (November, 2002) and Wyoming, Michigan (November, 2002); school shootings in Wellsboro, Pennsylvania (June, 2003) and Red Lion, Pennsylvania (April, 2003); and the Washington, D.C. "Beltway" sniper shootings (Fall, 2002). Video game related violent crimes have also been reported in several other industrialized countries, including Germany (April, 2002), and Japan (Sakamoto, 2000).
Anderson has devoted many years to this work. He has published extensively. For those of you who are interested, I strongly recommend that you Google him and visit his home page. The more you read, the more you will understand one of his salient conclusions:
...as documented in several articles in this special issue as well as in other recent reports, a lot of youths are playing violent video games for many hours per week. When large numbers of youths (including young adults) are exposed to many hours of media violence (including violent video games), even a small effect can have extremely large societal consequences.
The American Academy of Pediatrics has a longstanding and well-developed interest in the effects of exposure to violence in entertainment on the development of children and their behavioral health. Noting that medical professionals have been concerned about portrayals of violence in the media since the 1950s and that the U.S. Surgeon General issued a report on the subject in 1972, the AAP issued a policy statement in 2009 that said, "The evidence is now clear and convincing: media violence is one of the causal factors of real-life violence and aggression."
Some of you are probably thinking that many medical professional organizations are also on record as supporting all manner of gun control legislation. The AAP, for example, has said a home with children is no place for firearms. But there are important differences. There is no evidence that access to guns, or learning how to shoot them for sport, makes people prone to use them to commit violent acts. There are abundant data to the contrary. For example, people who obtain permits to carry concealed handguns for personal protection almost never use their guns in the commission of crimes. In this instance of the intersection of public policy with health policy, by contrast, there are many studies all supporting the same conclusion that violence in entertainment has a real and causal link to violent acts in real life.
There is another important difference to keep in mind. We had a ban on the importation and sale of a vast array of semiautomatic rifles, as well as a ban on high-capacity magazines, on the books for ten years. The statistics kept by the Department of Justice tell us the result in all its simplicity: there was no effect on violent crime committed with firearms.
In case you have forgotten since you started reading, our nation is awash in guns. Limiting the sales of certain new ones will be like looking at a backyard swimming pool, recognizing the danger it poses to the toddler who cannot swim, and deciding we must not add more water to the pool. Unless we intend not only to ban the sale of all new guns but also to confiscate all the old ones, we cannot expect to prevent the commission of violent acts with guns by controlling their availability. For those who may think confiscation is a fine idea, I will note that self defense is a fundamental, natural, human right. In human societies, armed self defense must be available, else the slow, the weak, and the infirm will ever be at the mercy of the young, the fast, and the strong who just happen to be amoral.
Yesterday the NRA held a press conference in which the organization suggested that placing well-trained, armed security guards in schools might be a valuable and effective short-term solution. The NRA was promptly scorned by those who thought - contrary to what anyone who knows anything about the Association's history over the last 50 years would have expected - that the NRA was holding a press conference to announce it would embrace new gun control legislation in a spirit of compromise.
We can enact all sorts of new gun control legislation. We may do exactly that. We will certainly spend a great deal of time talking about the merits of doing so. But the time and effort we devote to this must not detract from the attention directed toward our culture of violence and how we can go about changing that.
Saturday, December 15, 2012
The Wrong Hands
In the context of the recent horrors in which deranged persons have shot and killed scores of innocents, we have renewed discussions of gun control legislation in hopes of averting such tragedies in the future.
After the assassinations of Martin Luther King, Jr. and Bobby Kennedy, Congress passed the Gun Control Act of 1968. Attempts to assassinate American presidents, including one that was very nearly successful in ending Ronald Reagan's life in 1981, continued unabated.
There is a common theme in these tragedies, whether it's John W. Hinckley, Jr. (who shot Reagan), or Seung-Hui Cho, who killed 32 and wounded 17 on the campus of Virginia Tech in 2007, or James Eagan Holmes, who killed 12 and injured 58 in a movie theater in Aurora, Colorado earlier this year ... the list seems endless. And that common theme is mental illness.
Anyone who could shoot a classroom full of kindergartners is insane beyond our ordinary capacity to fathom madness. Mental health professionals understand the disconnection from reality that occurs in the minds of the psychotic. The rest of us can only shake our heads in bewilderment.
On December 7, 1993 (yes, a day that will live in infamy) Colin Ferguson opened fire on passengers on the Long Island Railroad. Among his victims were Dennis and Kevin McCarthy. Dennis was killed, and his son Kevin was seriously injured. Dennis's wife, Carolyn, a nurse, was elected to Congress three years later, on a mission to promote gun control legislation.
In the 16 years since her election, Carolyn McCarthy has introduced many gun control bills. Most have languished in House committees. But after the Virginia Tech shooting, McCarthy realized the shooter could have been disqualified from purchasing firearms if there had been a more robust database of mental health history to be queried by dealers. The National Instant Criminal Background Check System (NICS) was created to assure that people with felony records who are prohibited by federal law from purchasing guns cannot obtain firearms from licensed dealers. McCarthy's bill became the NICS Improvement Amendments Act of 2007 and included funding to beef up the system of getting information about disqualifying mental illness into the database. It was strongly supported by the National Rifle Association and signed into law by President George W. Bush.
Doesn't the NRA oppose all gun control legislation? Obviously not. The NRA is just as keen as everyone else on keeping guns out of "the wrong hands." But McCarthy's bill was opposed by organizations of mental health professionals, including psychologists and psychiatrists, who complained that it would only add to the stigma of mental illness.
Under current federal law, persons who have been hospitalized involuntarily because they are a danger to themselves or others and persons who have been adjudicated mentally incompetent are disqualified from buying guns. The key element is that this has gone through the legal system, which has due process and safeguards against infringing upon the rights of persons who aren't really crazy. This is important, because in many states people can be involuntarily admitted to psychiatric facilities on the say-so of just about anyone. If your spouse thinks you are suicidal and fills out the papers, a constable will take you into custody, and you will be placed in a psychiatric bed somewhere until a judge holds a hearing on the matter within a statutorily specified time, typically 24 hours. Before the hearing, you will be interviewed by a mental health professional. At the hearing, you will be represented by legal counsel. If you are not, in fact, dangerously mad, that will be determined at the hearing, and you will be released. If the judge says you are a danger to yourself or others as a consequence of mental illness, you are then disqualified under federal law from owning firearms. Not only are you prohibited from future purchases, but guns in your possession are to be confiscated.
You can see, quite easily I'm sure, that this approach will fail to identify many people who are mentally ill and potentially very dangerous, because most such people are never processed through the legal system and thus never disqualified from gun ownership.
If you are a patient in my emergency department who suffers a loss of consciousness as a result of some medical condition, such as a seizure, or intoxication, or a precipitous drop in your blood sugar, I am required to fill out a form saying whether the Commonwealth of Pennsylvania should initiate a proceeding to determine whether your driving privileges should be restricted. Most states do not have such laws, and where such laws exist they certainly have unintended consequences, such as deterring patients with seizures from seeking medical care when they should.
Why not have a requirement that I fill out a form for every crazy person I see that indicates whether the state should initiate a legal proceeding to determine whether such a person is dangerous and should be disqualified from possessing firearms?
Essential features of such a system would be the involvement of the legal system, with guaranteed due process and rights of appeal, and the entry into the NICS database of determinations that persons have been disqualified. The role of the legal system is crucial, because a system that disqualified people based on the opinion of a psychiatrist would result in many disqualifications by mental health professionals who simply don't like guns and think people - all people - shouldn't have them.
News reporting and the blogosphere, not to mention innumerable postings on social networking sites, have already made the point that our mental health system is woefully underfunded and inadequate to meet the needs of the population. Ask any emergency physician or nurse about schizophrenics living under bridges, people who in generations gone by would have been long-term residents of state mental hospitals. Might some of those who were "deinstitutionalized" really have been much better off living in group homes instead of state institutions? Absolutely. But what about the ones who are living in large cardboard boxes, with the occasional stint in a homeless shelter or an acute care psychiatric facility, only to wind up back on the street, seeking shelter under a bridge or the warmth of air rising from a grate over a city subway? While our mental health system struggles and fails to meet their needs, how many people who are less overtly crazy get no care at all because the resources simply don't exist?
We can pass all the gun control laws we want with no effect whatsoever on the problem. Until we commit far more resources to our mental health system and devise a mechanism that identifies those too dangerously insane to own firearms, we will have many more tragedies like those in Aurora, Colorado and Newtown, Connecticut.
Thursday, December 6, 2012
60 Minutes Exposé: Hospital Care for Dollars?
This week the CBS news magazine, "60 Minutes," included a segment on Health Management Associates alleging that HMA pressured doctors to admit more patients to HMA-run hospitals so as to increase revenues.
HMA, according to 60 Minutes, set goals for doctors to admit a certain percentage of patients who visited the emergency department for care. Specifically, the program said HMA set a goal of 20%, according to emergency physicians at some HMA facilities.
The percentage of patients admitted from the ED varies greatly from one hospital to another. The ED at a tertiary-care hospital in a metropolitan area may see patients with a high likelihood of being seriously ill. The percentage of patients admitted from such an ED may be well upwards of 30 or even 40 percent. On the other hand, the ED at a small-town or rural hospital, where the large majority of ED patients have minor illnesses or injuries, may admit fewer than 10% of the patients to the hospital.
The claim in the 60 Minutes segment was that HMA runs mostly smaller, more rural hospitals, especially hospitals that were financially struggling before being taken over by HMA. Such a hospital would not be expected to admit 20% of its ED patients, and setting that as a goal would put undue pressure on the doctors in the ED to hospitalize patients who don't really need inpatient treatment.
The show's producers found emergency physicians from HMA hospitals who described in detail how they were pressured to admit more patients. They explained that hospital administrators wanted to fill beds, thereby improving the hospital's revenue stream - and its "margin," which is what you call it if you don't like the word "profit," or if you are running a not-for-profit entity (which doesn't mean you don't need to have revenues exceeding expenses).
This makes HMA look bad. But there was a lot 60 Minutes didn't explain. Perhaps the most important is that there is a good deal of subjective judgment involved in deciding whether a sick patient should be hospitalized. It is often not a straightforward matter to figure out whether the evaluation and management of a patient's illness, beyond what has been accomplished during the ED visit, can be carried out at home or requires admission to the hospital.
Medicare, which was a primary focus of the TV report because it pays for so much hospital care, has criteria we can use to judge this, called "severity of illness" (how sick is the patient?) and "intensity of service" (what care does the patient require, and is it best provided in-hospital?) criteria.
HMA was allegedly pressuring doctors to admit patients who met such criteria for hospitalization. What, you might ask, is wrong with that? Well, not everyone who meets criteria for admission actually requires admission. It's just not that simple. And it requires clinical judgment.
And here is where we run into an interesting paradox. The Centers for Medicare and Medicaid Services (CMS - don't ask me what they did with the extra "M") is perfectly happy to have us use clinical judgment to decide that a patient who meets criteria for admission can, instead, be treated as an outpatient. Thank you, doctor. No questions asked. But flip the coin and try to get the hospital paid for taking care of an admitted patient who didn't meet the criteria used by CMS. Good luck with that.
I find this very frustrating, and I would like to say just how frustrating in any number of ways that would require impolite language. I bristle when my clinical judgment is questioned, especially by people who don't practice medicine but like to tell doctors how to apply their science and their art. We call them bureaucrats, with every bit of the derision and negative connotation that word can carry.
So this is at least partly about clinical judgment. These are judgments we must make many times a day, every day. An emergency physician may initiate a conversation with an internist about a patient by saying, "This is a 'soft admission.'" That means it's a patient the emergency physician feels uncomfortable sending home but that the internist might think doesn't really need to be in the hospital.
During every shift I work, I see patients who might benefit from hospitalization but also might be safely managed as outpatients. I often discuss the options with the patient, the patient's family, and the patient's regular doctor before arriving at a decision agreeable to all concerned. It's not always simple and straightforward.
So you can imagine what HMA was saying to the doctors. You have a patient in the ED and you're considering hospitalizing that patient? Just do it. The hospital has empty beds. We want to fill them. After all, we need the revenue. We need the margin. This hospital exists to serve the health care needs of the community. That is our mission. You know the saying: no margin, no mission.
Should HMA tell doctors at a hospital that admits 7% of its ED patients that they have to get that up to 20%, or tell doctors who don't admit 20% of the ED patients they see that their services will no longer be required? Of course not. And if HMA did that, I would be the first to criticize that practice.
But I must say that there are many motivations for hospitalizing patients who might be "soft admits." The further evaluation and treatment the patient requires might be conducted more expeditiously in the hospital. The likelihood that everything that is envisioned actually gets done is greater. The chances that something is missed, and an adverse outcome results, are reduced. (And maybe, correspondingly, the likelihood of a malpractice lawsuit is reduced.) The satisfaction of the patient and the family with the care provided at the hospital goes up. And everyone in health care wants satisfied patients.
The 60 Minutes segment didn't tell us any of that. No, instead, they intimated that HMA might be guilty of "Medicare fraud." Was HMA providing services to patients that were unnecessary or inappropriate? No evidence of that was set forth. Was HMA billing Medicare for services not provided (which would clearly have been fraud)? That was not even implied.
Far be it from me to say that 60 Minutes, that paragon of investigative journalism, would take a potentially interesting or important story and sensationalize it. Oh, no. Why would they do that? Just for ratings? To get more people to watch the show? To make more money?
We certainly expect our providers of health care to be above such motives. But we have no such expectations of journalists. Should we?
Thursday, November 29, 2012
Does Flu Vaccination Help Your Heart?
We seem always to be looking for health benefits from things that might otherwise be seen as "guilty pleasures," such as red wine, coffee, and chocolate. And we also look for additional health benefits from things we do for a specific reason. A high-fiber diet, for example, may not only help keep you "regular" (don't you love our euphemisms for digestive functions?) but may also help lower cholesterol and reduce your risk of colon cancer.
So it comes as no surprise that medical scientists are investigating whether influenza vaccination may be good for something besides making you less likely to get influenza. They probably think we need extra motivation, because many of us don't take the flu that seriously, and we're not keen on getting shots. Maybe if we think it's good for more than just protection from the flu, we'll be more likely to go for it.
In this instance, Canadian researchers in cardiology have found that vaccination seems to have beneficial cardiovascular effects: specifically, a substantial reduction in the likelihood of heart attack, stroke, or death from cardiovascular causes.
When I started reading the article reporting this research, in a popular news outlet, I thought it was an association that could have any number of possible causes. We see so many studies of that sort. Scientists look at people with various health problems and see that those who got a particular intervention (like a vaccine) had fewer bad things happen to them. Then the question is always whether it was the intervention of interest that conferred the benefit, or just that people who received that intervention were receiving regular medical care of all sorts, and who knows what, among all the things done for them, was really responsible. Even when you try to adjust for differences in all of those other things, you can still be missing possibly responsible influences you didn't think of.
But this study was not reporting an association in search of a possible cause-and-effect relationship. These investigators took a population of patients (with a reasonable sample size) and randomized them to influenza vaccine or placebo. That's the kind of study it takes to see whether the one thing you're interested in is responsible for observed differences in outcomes. If the sample size is large enough, and the patients are randomized to one intervention or another, or intervention versus placebo, all of the other factors that might cause different results for the two groups of patients should be very similar, thus isolating the one difference you're studying.
The results, as reported, are pretty striking: a 50% reduction in heart attack or stroke and a 40% reduction in mortality. And those results seemed to apply to patients both with and without previously diagnosed cardiovascular disease.
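To put relative numbers like those in perspective, here is a tiny, purely illustrative Python sketch. The 50% and 40% relative reductions are the figures reported from the meeting; the 10% baseline event rate is a number I am inventing solely to show how relative and absolute risk differ, not anything from the study.

```python
# Illustrative only: turn a reported relative risk reduction into absolute terms
# under an ASSUMED baseline event rate (the 10% figure below is invented).

def absolute_benefit(baseline_rate, relative_reduction):
    """Return (absolute risk reduction, number needed to treat)."""
    arr = baseline_rate * relative_reduction
    return arr, 1 / arr

baseline = 0.10  # hypothetical: 10% of patients have an event during follow-up

for outcome, rrr in [("heart attack or stroke", 0.50), ("death", 0.40)]:
    arr, nnt = absolute_benefit(baseline, rrr)
    print(f"{outcome}: relative reduction {rrr:.0%}, absolute reduction {arr:.1%}, "
          f"about one event prevented per {nnt:.0f} people vaccinated")
```

The point is only that the same relative reduction can translate into a large or a modest absolute benefit depending on how high the baseline risk is, which is one more reason to wait for the full paper.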
If this is real, the question is why? There have been many studies over the years looking at the relationship between infection and inflammation and bad things happening in blood vessels. Some studies have suggested that patients who'd been treated, for one reason or another, with certain kinds of antibiotics over the years seemed to have fewer heart attacks. Researchers guessed that certain infections might predispose to inflammation, and subsequent development of vessel-narrowing plaque, in coronary arteries.
The U.S. Centers for Disease Control and Prevention is telling us everyone should get the flu vaccine every year. Many of us are not listening. I am in the camp of the skeptics, as I am in relation to just about everything. I want to see evidence that the vaccine substantially lowers my statistical likelihood not just of getting influenza but of becoming seriously ill with influenza, and that the magnitude of this benefit greatly exceeds the magnitude of the risk of a serious adverse reaction to the vaccine. Over the years I have found the evidence of such a favorable risk-benefit calculation to be reasonably convincing for older folks and those with chronic diseases (heart and lung diseases and diabetes), but not so convincing for younger and otherwise healthy people.
And I've been less than impressed with the scientific evidence that all health care workers should get the vaccine to keep from spreading the flu from their infected patients to others who are susceptible. It makes sense, but the evidence that it really works that way just isn't compelling. So I get my flu shot to keep my employer happy, but I remain skeptical.
But the idea of other benefits has definite appeal. I often take ibuprofen for various aches and pains. The fact that there is a little bit of suggestive evidence that it reduces the likelihood of developing Alzheimer's Disease suits me just fine. If I'm going to do it anyway, an unexpected benefit is welcome. Now, that is an example of an association that may or may not have causality. No one has done a randomized, placebo-controlled study and followed patients long term, which is what you'd have to do, because that's a disease that develops over a period of many years.
In this case, however, the causality may be real, because the study was done in such a way as to figure that out. Notice I say it may be real. Why am I still skeptical? Well, to begin with, I haven't read all the details of the study. I read a report in the popular press of the presentation of the study's results at a medical meeting in Toronto. I don't know if the study has been accepted for publication in a reputable, peer-reviewed medical journal. Once that happens, if it does, I'll be able to read the paper and draw firm conclusions about its results. Many papers are presented at meetings and never get published. And many papers that get published don't really prove what the authors think or say they do. And then, of course, any important scientific study should be reproducible - meaning if other scientists conduct another study in the same way, they should get similar results. Reproducibility is essential to credibility in scientific investigation.
So all I can say right now is that this is very intriguing, and if it turns out to be real, we will all have another reason to get the influenza vaccine each year.
Friday, November 23, 2012
Breast Cancer Screening: Can We Think Too Pink?
The experts at the American Cancer Society (ACS) recommend screening for breast cancer by mammography every year for women over 40. Wow. That's a lot of testing. Judging by conversations I've heard, and overheard, and innumerable cartoons I've seen, mammography is not high on any woman's list of fun things to do. So maybe the recommendations of the U.S. Preventive Services Task Force (USPSTF) seem more appealing: every other year, starting at age 50, going through age 74.
[Before I go on, allow me a momentary digression into one of my pet peeves in the use of terminology. This is not cancer prevention. Cancer screening does not prevent disease. It may detect it early and make it possible to cure it, thereby preventing a cancer death. We don't know very much about preventing cancer.]
So how are we doing with mammography? Are we saving lives?
Eighteen months ago (5-28-2011) I wrote an essay for this blog on the general subject of preventive medicine and touched briefly on screening mammography. I mentioned a book by Welch and colleagues (Overdiagnosed: Making People Sick in the Pursuit of Health). Now Dr. H. Gilbert Welch is the second author (first author Archie Bleyer) of a paper published in the New England Journal of Medicine, one of the world's leading English-language medical journals. Bleyer and Welch posed this very question. They looked at three decades of data and found a substantial increase in detection, via mammography, of early breast cancer. They did not, however, find a corresponding reduction in the diagnosis of late-stage breast cancer. Specifically, cases detected early more than doubled, while cases diagnosed late declined by about 8%.
Why is that important?
If we've been advocating screening mammography for women over 40, and we say there has been a 28% decline in breast cancer deaths in this group, we might be inclined to put two and two together and say it's working. But if early detection isn't substantially reducing cases not diagnosed until more advanced stages, the logical conclusion would be that improved treatment, not earlier diagnosis, accounts for most of the reduction in mortality.
According to the ACS, two to four out of 1,000 mammograms lead to a diagnosis of cancer. Let's take the middle number (three) and ask what happens to the other 997 patients. They all get a note saying, "Negative again, thanks for choosing Pink Mammography Services, see you next year." Right? Well, not exactly. Some of them have findings that are not so straightforward. Some of them wind up getting additional tests, like ultrasound examination of the breast or MRI. Some of them undergo surgical biopsies. They spend a lot of time worrying about whether they are harboring a life-threatening malignancy before being told, ultimately, that the conclusion is a benign one.
What about the ones who are diagnosed with early breast cancer? Well, they all get treatment (assuming they follow their doctors' advice and recommendations). And each such case represents a breast cancer death prevented. Right? Assuming, that is, that the long-term outcome is that the woman dies from something else. (After all, that is one of the facts of life in a human body: the long-term mortality rate is 100%, sometimes quoted as one per person.)
Well, to be completely honest, we don't know. It is entirely possible that some of these early cases were never going to progress to advanced disease and eventually cause death. And the uncertainty about that was what led Bleyer and Welch to examine more than thirty years' worth of data.
These studies in the realm of epidemiology, public health, and the effects of medical interventions on large populations are difficult to do and more difficult to interpret. But again, the central finding by the authors was that a large increase (137%) in the detection of cases of early breast cancer was accompanied by a decline of only 8% in the rate of detection of late-stage breast cancer. And this suggests that other factors, such as more effective treatment of cases detected at later stages, are playing a substantial role in the reduction of the mortality rate from breast cancer.
One of the things we know about the behavior of some cancers is that there are people who harbor these diseases for many years and ultimately die from something else. Thus it is reasonable to surmise that some women with breast cancer that can be detected by mammography when they have no symptoms would, if never diagnosed, live many more years and go on to die from an unrelated cause. How common is that?
The answer from Bleyer and Welch:
"After excluding the transient excess incidence associated with hormone-replacement therapy and adjusting for trends in the incidence of breast cancer among women younger than 40 years of age, we estimated that breast cancer was overdiagnosed (i.e., tumors were detected on screening that would never have led to clinical symptoms) in 1.3 million U.S. women in the past 30 years. We estimated that in 2008, breast cancer was overdiagnosed in more than 70,000 women; this accounted for 31% of all breast cancers diagnosed."
Now we have some numbers to ponder. Even if the 1.3 million over three decades, the 70,000 in the year 2008, and the 31% are not exactly right, they should give us pause. At the very least, they should tell us that we need to know much more about how to figure out which early breast cancers really need treatment and which cases may not.
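If you want to play with those figures, here is a minimal Python sketch. The 70,000 and 31% come from the passage just quoted, and the 137% and 8% from the comparison above; the per-100,000 "baseline" rates in the second half are round numbers I made up purely to illustrate the absolute mismatch, not figures from the paper.

```python
# Back-of-the-envelope arithmetic using the Bleyer-Welch figures quoted above.
# The two baseline rates in the second half are round, invented numbers,
# used only to illustrate the absolute mismatch; they are NOT from the paper.

overdiagnosed_2008 = 70_000   # reported: women overdiagnosed in 2008
overdiagnosed_share = 0.31    # reported: fraction of all 2008 diagnoses

implied_total_2008 = overdiagnosed_2008 / overdiagnosed_share
print(f"Implied total breast cancer diagnoses in 2008: about {implied_total_2008:,.0f}")

# The essay's central comparison: early-stage detection up 137%,
# late-stage diagnoses down only 8%.
early_baseline = 100.0  # hypothetical early-stage cases per 100,000 women
late_baseline = 100.0   # hypothetical late-stage cases per 100,000 women

extra_early = early_baseline * 1.37   # additional early-stage cases found
fewer_late = late_baseline * 0.08     # late-stage cases no longer seen

print(f"Extra early-stage cases found: {extra_early:.0f} per 100,000")
print(f"Reduction in late-stage cases: {fewer_late:.0f} per 100,000")
# If early detection mostly caught cancers destined to become advanced,
# those two numbers would be much closer; the gap is the overdiagnosis signal.
```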
So what do you do as an individual woman? First, and especially if you are between 40 and 50, you have to decide whether to follow the ACS or USPSTF recommendations. Talk to your doctor, and hope he or she really understands the science well enough to answer your questions. If your mammogram is abnormal, no one is going to tell you that you should just wait and see. And these data don't tell us that's a good idea, because we cannot tell which cases detected early will ultimately be a threat to life and which ones will not.
But when we look at these numbers on a population scale, we should ask ourselves what impact screening for early detection is really having on outcomes. After all, everything we do in health care costs money, and the supply is limited. We must always ask ourselves the kinds of tough questions that are answered by what policy wonks call a cost-effectiveness analysis, where the amount of money devoted to something is looked at in terms of dollars per "quality-adjusted life-year" (QALY).
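For readers unfamiliar with the metric, here is the arithmetic in its simplest form, with entirely made-up numbers; real cost-effectiveness analyses are far more involved, but the units are the same.

```python
# Entirely hypothetical numbers, just to show what "dollars per QALY" means.
program_cost = 10_000_000   # hypothetical annual cost of a screening program
qalys_gained = 120          # hypothetical quality-adjusted life-years gained

print(f"Cost-effectiveness: ${program_cost / qalys_gained:,.0f} per QALY")  # ~$83,333
```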
If the patient is you, or a loved one, no amount of money seems too much to save a life. But as a society, we must recognize that resources are finite and figure out where the money will do the most good for the most people. If we're wondering - and we always should be - about the value of screening tests - Bleyer and Welch have given us more to think about.
Saturday, November 17, 2012
Obamacare: The National Road to Where?
In the middle of the 18th century, George Washington was a young colonel of Virginia's colonial forces, serving British interests and assigned to do something about the competition between the French and the British colonists for trade with Native Americans. The exchange of manufactured goods desired by the Indians for the fur pelts they could provide was a very profitable business. The British could usually offer better trading deals, but the French had been cultivating the friendship of the Indians much longer. Each saw the other as infringing on its territory, and the Battle of Fort Necessity, in the Laurel Highlands of southwestern Pennsylvania, helped launch the French & Indian War, a microcosm of the global conflict between the French and British imperial powers, known in Europe as the Seven Years' War (1756-1763).
Washington's early experience in the region led him to believe that a good road from the East through the Allegheny Mountains was essential to development of Western lands and expansion of what was to become a new nation. His vision later became the National Road from Cumberland, Maryland to Wheeling, Virginia. (That part became West Virginia during the Civil War, and the National Road later carried travelers much farther west.) Very substantial funding was approved when Thomas Jefferson was president. Jefferson worried that the project would become a great sinkhole into which money would disappear, imagining that every member of Congress would be trying to secure contracts for friends. It is unclear whether Jefferson was the first to worry about profligate federal spending on a "pork barrel" project, but what he wrote at the time was to be mirrored in many criticisms of public spending over the next two centuries.
Alexander Hamilton had a vision of a powerful central government that would collect taxes and spend money, helping to expand the United States with federal subsidies for "internal improvements" such as roads and canals, and with a strong central bank to foster commerce. This vision was developed further, in the second quarter of the 19th century, into Henry Clay's "American System."
All along the way, there have been powerful dissenting voices. Just as Hamilton and Jefferson had opposing views on the merits of a strong central government, there have, ever since, been dramatic differences in political philosophy between those who believe in using the power of the federal government to tax and spend to "provide for the general welfare" (the phrase used in the Constitution's description of the powers of Congress) and those who believe most of these important functions should be carried out by the states or left in private hands.
Hamilton and Jefferson never imagined public financing of the nation's system of health care, likely at least in part because two centuries ago health care had relatively little to offer. Louis Pasteur, who deserves much credit for development of the germ theory of disease, wasn't born until 1822, and antibiotics would have to wait another century to begin to cure our ills. In the run-up to the enactment of Obamacare, some students of history pointed to a 1798 law providing for public funding of hospitals to care for sick and disabled sailors, paid for through a levy collected from shipmasters, who were permitted to withhold it from their seamen's wages.
That early experiment in government-funded care for a defined group of workers has a rough modern parallel in the Veterans Affairs health care system. Whether one can - or should - extrapolate from a system of health care for veterans to a national health service (like the one in Britain) that takes care of everyone is very much an open question. For each veteran of our armed services who praises the VA health care system and relies upon it exclusively, it is easy to find another who is glad to have access to the private system and sees it as vastly superior in quality.
Obamacare does not create a National Health Service.
But there are many among both its proponents and its enemies who see it as the first steps down that National Road.
Some, including your faithful essayist, see it as a move - welcome, but insufficient - toward universal coverage. As you know if you are a regular reader, I consider it a national disgrace that we remain the only nation in the industrialized West that fails to provide universal coverage for, and universal access to, health care.
The new law requires everyone to have health insurance. This is to be accomplished through a hodgepodge of mechanisms, from mandated purchasing (with subsidies for the needy) through health insurance exchanges to expansion of the publicly funded Medicaid system for the more severely needy. But the enforcement mechanism for the mandate is weak, the subsidies are likely to prove inadequate, and the states have been told by the Supreme Court that Congress cannot make them expand Medicaid. So patients, doctors, and hospitals are waiting, none too optimistically, to see how this all plays out.
I may be among the least optimistic. I believe the number of uninsured, now standing at about 50 million, will drop by no more than half in the next decade unless we do much more to change the way health care is financed in this country.
The question remains how we should go about it. Should we have a mix of public and private mechanisms for financing purchase of health care services, such as we have now? If so, how will we cover everyone, when so many will continue to find private insurance unaffordable? If we address that problem by subsidizing the purchase of private insurance very generously, how can we avoid enriching the health insurance industry (and its "fat cat" CEOs and stockholders)? To carry that a step further, is it even possible to expand health insurance to cover everyone without either enriching or eliminating the private health insurance industry? Either we subsidize the purchase of private health insurance so generously that everyone can afford it (and it becomes even more profitable than it is now), or we fail to do so, in which case we must expand public financing so greatly that everyone who can move into the less-costly public system will do so, and the private health insurance industry will serve only the most affluent.
I do not claim to have the answers to these questions. I have opinions about what would work, what would be efficient, and what it would take to ensure high quality. But there are powerful interests opposing change, and vast swaths of the general public stand opposed to change, because they are satisfied with what they have in the current system. If you are not suffering, it is more difficult to see how things could be so much better.
Remember Washington, looking at the precursor to the National Road. Soldiers, though they might curse it, could march along that road. Horses could negotiate it, if not without many a stumble and an occasional fall. Wagons could make it through in good weather, though they were likely to get mired down if it had rained recently. Washington thought it should be wide and smooth. Think about your road to readily accessible, high-quality health care. Have you gotten mired down in that muck? Should we not build a road that is wide and smooth?
We should. And we must.
Friday, November 9, 2012
Legalization of Marijuana
Earlier this week voters in Colorado and Washington state approved ballot measures legalizing recreational use of marijuana. While similar proposals have fallen short of approval in California, it is reasonable to surmise that other states will follow, and a trend will emerge. Supporters are applauding the end of "prohibition," likening this development to the adoption of the 21st Amendment to the United States Constitution in 1933, repealing the 18th Amendment (ratified in 1919, in force from 1920).
As you know if you read my essay "Temperance and the Addict" last June, I have an academic interest in the use of mood-altering substances, and the effects of changing the status of marijuana from illegal to legal will surely be fascinating to observe.
["Medical marijuana," by the way, is a subject to which I haven't paid much attention, because I think the scientific literature on that is more confusing than enlightening, and I have been inclined to agree with those who perceived the movement to legalize the drug for medicinal purposes to be nothing more than a smokescreen (pun intended) for legitimizing recreational use.]
In the practice of emergency medicine I frequently see the consequences of the choices people make about the use and abuse of tobacco and alcohol. Much has been written comparing marijuana with those two legal drugs. Even an overview of those comparisons would take a book chapter rather than an essay. For example, the long-term health effects of tobacco and alcohol are well understood. Marijuana, not so much.
A column ("The End of the War on Marijuana") published at CNN.com caught my eye early this morning. The writer, Roger Roffman, is a professor emeritus of social work at the University of Washington and a supporter of the legalization measure in that state. Roffman made two observations that I found especially noteworthy:
"Far too many teens think smoking pot is 'no big deal,' greatly underestimating the risk of being derailed from social, psychological and educational attainment. Far too many adults don't take seriously enough the risk of marijuana dependence that accompanies very frequent use."
Roffman has written books on medicinal marijuana and the treatment of marijuana dependence, and his next book (titled A Marijuana Memoir) is sure to be interesting. It's my sense that he knows his subject well, and so his observations worry me more than a little.
One of the aspects of substance use that I often think about is direct cost to the consumer. While the price of a pack of cigarettes varies with state taxes, not to mention choice of brand versus generic or purchase by the pack or carton, I have a pretty good idea what people are spending on tobacco when they tell me how much they smoke. I cannot help being amazed that some people whose resources are very limited choose to spend thousands of dollars a year on cigarettes.
It was my sense that prices for marijuana are quite variable, and a Web search for information proved that to be correct. I found a wide range of reported "retail prices," mostly between $200 and $1,000 per ounce. Knowing next to nothing about smoking the stuff, I did more research to discover that people typically roll joints containing anywhere from one half to one gram each, which means one might get anywhere from 30-60 joints from an ounce. At $300/ounce, a joint might cost $5 to $10. This was an eye opener for me. I had figured marijuana was much more expensive than tobacco per cigarette, but I didn't realize one joint cost as much as a pack of cigarettes.
Of course, users don't typically smoke 20 joints per day. As I browsed the Web, the self-reported patterns I found for regular users were mostly in the range of 1-5 grams per day, with one person claiming 15 grams per day when he had large amounts of money (and could afford to spend large amounts of time completely disconnected from reality).
So the bottom line on the expense for regular users seems to be about the same as for a pack-a-day cigarette smoker at the low end (one joint per day) to much more than that (perhaps $50/day) for heavy users. I'm guessing the heavier users likely are able to get better prices by going up the supply chain and paying wholesale prices. Smoke what you want and sell the rest.
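For those who like to see the arithmetic laid out, here is a small Python sketch of the estimates above. Everything in it comes from the ballpark figures I just cited - roughly $200 to $1,000 per ounce at retail, half a gram to a gram per joint, a middle-of-the-road $300 per ounce - plus the standard conversion of about 28.35 grams to the ounce; none of it is measured data.

```python
# Rough cost arithmetic for marijuana use, using the ranges discussed above.
# All inputs are the essay's ballpark figures, not measured data.

GRAMS_PER_OUNCE = 28.35

def cost_per_joint(price_per_ounce, grams_per_joint):
    """Retail cost of one joint at a given price per ounce."""
    return price_per_ounce * grams_per_joint / GRAMS_PER_OUNCE

def daily_cost(price_per_ounce, grams_per_day):
    """Daily spending for someone who smokes a given weight per day."""
    return price_per_ounce * grams_per_day / GRAMS_PER_OUNCE

price = 300  # dollars per ounce, the middle-of-the-road retail price used above

print(f"Half-gram joint: ${cost_per_joint(price, 0.5):.2f}")   # about $5
print(f"One-gram joint:  ${cost_per_joint(price, 1.0):.2f}")   # about $11
print(f"1 g/day habit:   ${daily_cost(price, 1):.2f}/day, "
      f"roughly ${daily_cost(price, 1) * 365:,.0f}/year")
print(f"5 g/day habit:   ${daily_cost(price, 5):.2f}/day, "
      f"roughly ${daily_cost(price, 5) * 365:,.0f}/year")
```

At the heavy end that works out to the neighborhood of $50 a day, which is where my estimate above came from.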
We know a lot about the damaging effects of smoking tobacco on health: heart attack, stroke, peripheral arterial disease, COPD, and lung cancer, just to name the most common problems. We know far less about what regular heavy smoking of marijuana does to the lungs, but I think we are likely to have much more data in the years to come.
We also know a good deal about the harmful effects of alcohol, including disease of the liver and other organs in the digestive system as well as the brain. In emergency medicine, while we see plenty of that, we see a great deal of trouble caused by acute intoxication, especially motor vehicle crashes.
So I wanted to find out how smoking a reefer might compare with alcohol consumption for getting behind the wheel. As you might imagine, the data are not abundant (yet), but here is what the National Highway Traffic Safety Administration (NHTSA) says:
"Marijuana has been shown to impair performance on driving simulator tasks and on open and closed driving courses for up to approximately 3 hours. Decreased car handling performance, increased reaction times, impaired time and distance estimation, inability to maintain headway, lateral travel, subjective sleepiness, motor incoordination, and impaired sustained vigilance have all been reported. Some drivers may actually be able to improve performance for brief periods by overcompensating for self-perceived impairment. The greater the demands placed on the driver, however, the more critical the likely impairment. Marijuana may particularly impair monotonous and prolonged driving. Decision times to evaluate situations and determine appropriate responses increase. Mixing alcohol and marijuana may dramatically produce effects greater than either drug on its own."
I must admit I find this more than a little scary if legalization makes smoking and driving anywhere near as common as drinking and driving is now. And drinking and driving would likely be much more common if it were not for fairly strict enforcement of laws against that. Will we have laws against driving under the influence of marijuana in all the states where recreational use is legalized? How will such laws be enforced? There is no recognized field sobriety test or breath test, and the correlation between blood levels and clinical effects is very uncertain.
One news story about the new law in Colorado said the state expected, through the regulation of sale and the imposition of taxes, to bring about $60 million a year into the state treasury, while saving about $75 million in costs to the penal system associated with the criminalization of sale, possession, and use of marijuana. That's a lot of money going to the state's bottom line.
The obvious question: will it be worth it? The answer: we don't know, because we have no way to calculate the cost of much more widespread use resulting from its effects on health and behavior. If the record of tobacco and alcohol is even modestly instructive, we may be in for some rude surprises. As my regular readers know, I think we do an awful lot of things in our society as if we have never heard of the Law of Unintended Consequences.
Friday, November 2, 2012
The Electoral College: Why or Why Not?
Every four years when we go to the polls to vote for a presidential candidate, most of us are dimly aware that we are really voting for presidential electors and that weeks later they will meet as the Electoral College and cast their ballots. We don't think too much about it, because the Electoral College usually reflects the will of the American people as expressed in the popular vote totals.
But there is always the possibility that the popular vote and the Electoral College vote will go in opposite directions. Many say that was the case in 2000, when by most accounts Al Gore received more nationwide popular votes than did George W. Bush, who won the Electoral College (after the dispute over Florida's vote was settled by the U.S. Supreme Court). There are enough doubts about the popular vote totals, including such questions as counting of absentee ballots, that the picture for 2000 is not entirely clear. Suffice it to say that it was a very close popular vote, and by that measure the winner may well have been Mr. Gore.
But that isn't what determines the winner. Instead, the popular vote of each state determines the votes of its electors (except Nebraska and Maine, which split their votes if the statewide winner does not also win each congressional district). And that's why it is possible to win the nationwide popular vote but not the Electoral College vote (or the reverse).
There are 538 votes in the Electoral College, and a candidate needs a majority (meaning 270) to win. So, imagine that the vote is 270-268. Imagine further that the candidate with 270 votes won his states with slim majorities, but the candidate with 268 votes won his states by landslide votes. Obviously, the "losing" candidate would then have a hefty popular vote majority. While that sort of thing has never happened, it is quite common for popular vote and electoral vote majorities to be widely disparate. For example, in 1984, Ronald Reagan won 58.8% of the popular vote and 97.6% of the electoral votes. In 1968, Richard Nixon defeated Hubert Humphrey by a mere 0.7% in the popular vote, while the electoral vote was 301-191 (with 46 for third-party candidate George Wallace).
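To make that thought experiment concrete, here is a tiny Python sketch. Every number in it is invented: two blocs of states, one carried narrowly by candidate A and one carried by candidate B in landslides, chosen only to show how a 270-268 winner can lose the popular vote badly.

```python
# Hypothetical illustration of the 270-268 scenario described above.
# Every vote total here is an invented round number.

# (electoral votes, popular votes for A, popular votes for B)
narrow_states = (270, 26_000_000, 25_000_000)     # A wins these states narrowly
landslide_states = (268, 15_000_000, 35_000_000)  # B wins these in landslides

a_electoral, b_electoral = narrow_states[0], landslide_states[0]
a_popular = narrow_states[1] + landslide_states[1]
b_popular = narrow_states[2] + landslide_states[2]

print(f"Candidate A: {a_electoral} electoral votes, {a_popular:,} popular votes")
print(f"Candidate B: {b_electoral} electoral votes, {b_popular:,} popular votes")
# A takes the presidency 270-268 while losing the popular vote by 19 million.
```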
If you are a serious numbers cruncher (like the folks at - where else? - MIT), you could do the math and find that, with an approximate tie in the popular vote, the electoral vote could land anywhere from an exact 269-269 tie to a 538-0 sweep.
This year both the electoral vote and the popular vote could be very close, and they could easily go in opposite directions. And that points to the first objection to the Electoral College: why should we have a system in which the nationwide popular vote does not determine the winner? That's how it works for the United States House and Senate - at least since we adopted the 17th Amendment to the Constitution and decided to elect senators directly instead of through our state legislatures.
The origin of the Electoral College is quite simple. Like the composition of the Congress, it was based on a compromise between the more populous and less populous states. Delegates to the Constitutional Convention from the less populous states were afraid that the new Congress would be controlled by representatives from the larger states. The compromise was that representation in the House would be proportional to population, while in the Senate, all states would be equal, with two senators each. The Electoral College is a blend, the number of electors being equal to the number of representatives plus the number of senators. Thus the least populous states (Wyoming, for example), with only one representative in the House, have three electors (because they have two senators, like all states).
Looking at it this way, small (in population) states are "overrepresented" in the Electoral College. Wyoming has 3 electors; if California, the most populous state, had a number of electors proportional to its population (using Wyoming as the yardstick), it would have about 199 instead of 55. Of course that is the extreme spread, and most of the overrepresentation for states with fewer people is not so dramatic. But this overrepresentation is perceived as a violation of the "one person, one vote" principle and therefore antidemocratic.
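If you want to check that comparison yourself, the arithmetic is simple enough to do in a few lines of Python. The population figures below are approximate 2010 census numbers that I am supplying for illustration; depending on exactly which figures you use, the proportional result comes out at 198 or 199:
```python
# Approximate 2010 census populations (supplied here for illustration).
wyoming_pop = 563_626
california_pop = 37_253_956

wyoming_electors = 3
california_electors = 55

# If California's electors were scaled to its population relative to Wyoming:
proportional = california_pop / wyoming_pop * wyoming_electors
print(f"California electors at Wyoming's ratio: {proportional:.0f}")  # ~198

# Residents per elector, a rough measure of how much each vote "weighs":
print(f"Wyoming residents per elector:    {wyoming_pop / wyoming_electors:,.0f}")        # ~187,875
print(f"California residents per elector: {california_pop / california_electors:,.0f}")  # ~677,345
```
Measured in residents per elector, a vote in Wyoming carries more than three times the weight of a vote in California.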
The Electoral College has other antidemocratic effects. Some would say that elections bring the opportunity to connect the candidates with the voters, and that connection is important to the expression of the will of the people. Ask anyone who has been in Iowa or New Hampshire at the beginning of the primary season about a sense of direct connection. (For most of us, there is little sense of such connection. The only time I met a presidential candidate in person, he wasn't even a candidate yet. It was 1964, at the Democratic National Convention in Atlantic City, and the candidate (for 1968) was Bobby Kennedy.) But what is the effect of the Electoral College?
Do you think Obama and Romney are spending much time in California or New York, where the polls show Obama with huge leads? No, they are in "battleground states," where the polling is close. You want to see these candidates? Live in a 50-50 state with plenty of electoral votes. (My personal view is thanks but no thanks, because it really fouls up traffic when they are in town.) So big states and small states can be equally disadvantaged. The candidates ignore New York and California, not to mention Texas and Illinois, every bit as much as they ignore Wyoming or the Dakotas.
A recent article on CNN.com noted that people in Hawaii don't vote in presidential elections - at least not as much as people in the other 49 states. In 2008, voter turnout there was 48.8%, compared to Minnesota, at the top of the heap, with 77.8%. Say what you will about the "beach bum" mentality some residents of the Aloha State may have, I think the reason is very simple. It's difficult to find the motivation to vote when it doesn't matter. Hawaii has few electoral votes, they almost always go to the more liberal of our major parties, and because of the time zone, the race has very often been projected before the polls close there.
How likely are you to vote if you think it really doesn't matter? I live in Pennsylvania, where the polls are close this year. But I can tell you it's really tough to find the motivation to vote in the primary, because the races for the presidential nominations have almost always been settled by late spring, when ours is scheduled. If you need to feel like it matters, are you more likely to vote if you live in Ohio, and you keep hearing about how close the race is and how no Republican has ever won the White House without winning Ohio? Or New York, where polling shows Obama with a 25-point lead?
If the Electoral College discourages people who don't live in "battleground states" from going to the polls, that is a bad thing. If abolishing it would have the opposite effect, maybe that's worth some serious consideration.
Remember, that would require a constitutional amendment. After passing both houses of Congress by a two-thirds vote, it would have to be ratified by 38 state legislatures - three-fourths of the states. Won't 13 or more of the least populous states vote against it to preserve their overrepresentation? Yep. It's a long shot.
Tuesday, October 30, 2012
Alas, Poor Newsweek
Next year Newsweek magazine will celebrate the 80th anniversary of its founding. Earlier this month it was announced that the print edition will not reach that anniversary. The magazine is going digital-only and will be called Newsweek Global.
Over the years I have subscribed, at various times, to the three traditional print news weeklies: Time, Newsweek, and US News & World Report. As I reflect upon how long it has been since I received a print copy of any of the three of them, I realize it shouldn't surprise me in the least that Newsweek is succumbing to a changing market. In the magazine business it is referred to as a "challenging" situation for print advertising. Challenging, indeed.
It is difficult to make money on print advertising when print circulation is declining. Furthermore, it is easy for advertisers to tell whether online readers actually pay attention to the ads, because they click on them. No click means the ad was ignored. You just can't tell, with a print ad, whether the reader even noticed.
[It is interesting, then, that in some lines of publishing - medical journals being an example - advertisers still want print ads in print journals, even though they know more and more readers are reading the journals electronically. I'm still trying to figure that out, and I suspect they are, too.]
I cannot help wondering what the long-term future of news publications holds. There are so many sources of news and opinion online that are free (once you have Internet access and a device that allows you to connect and read). People will surely expect it to be free, and if you want to charge for it, you will have to offer something of value readers cannot get elsewhere. Otherwise they'll be thinking, hey, if your competitors can make it on ad revenues alone, why can't you?
When the New York Times started charging for online content, I started looking elsewhere for high-quality news reporting. I thought, gee, if The Washington Post is still free, why should I pay for the Times? It's not like their journalism is clearly superior. They may be called the "newspaper of record," but they are also the newspaper of Jayson Blair. Woodward and Bernstein wrote for the Post.
By nature I am conservative, a creature of habit who prefers the traditional and opposes change that seems to have no impetus beyond a desire for change. However, as much as I am a creature of habit, even my habits change. And sometimes they have help from unexpected sources.
I used to read a local newspaper. Then we got a new carrier. He refused to put the paper in the delivery box. It was too much trouble. Emails to the home delivery department of the paper resulted in his putting the paper in the box for a week or so, and then he'd go back to tossing it in the driveway. The problem arose in inclement weather, when a plastic wrapper was inadequate to the task of keeping the paper from becoming an unreadable, sodden mass by the time I got home. A long series of emails, each followed by an all too transient change in delivery and then a return to bad behavior, led to the end of my print subscription.
I now read local news - when I want to subject myself to bad reporting and worse writing - online, without paying for a subscription.
Two years ago Newsweek merged with The Daily Beast, a news and opinion Web site. Selected content from the new Newsweek Global will be made available via The Daily Beast.
Full access to Newsweek Global will require a paid subscription. It will be interesting to see whether people will be willing to pay extra for that. Maybe I'll start reading The Daily Beast to see what is being done under the editorial leadership of Tina Brown.
So far I haven't done that, largely because it never occurred to me that something with that name could be taken seriously. I know one shouldn't judge a book by its cover, but how can one not judge a publication when its creators give it a name like that?
The target readership for Newsweek Global will be "a highly mobile, opinion-leading audience who want to learn about world events in a sophisticated context." I'm not sure how highly mobile I am, because I've been living in the same township for 27 years. I like being an opinion leader, however. (Except when the drug companies want my help with marketing. Then I just tell them they won't like my opinions.) I desperately want to learn all I can about world events, and I long for sophistication. So I may just have to check this out. Except for that one nagging drawback: they'll expect me to pay for it. Hmmm. Maybe they'll have an introductory offer that will be too good to pass up.
I will still have to find something else to read in the barbershop. Popular Mechanics, anyone?