A teenager named Michael Brown was shot dead by a police officer in Missouri last summer. He was black. The officer was white. The youth was unarmed. Conclusions were drawn that were likely inevitable in a country in which race relations have their roots in the enslavement of Africans, brought to the New World to perform hard, forced labor.
Was this really about skin color?
Rewind to the early 1970s.
I was a teenager in Philadelphia, a student at an academic high school then considered one of the best in the country. One day after school I was waiting at the bus stop on the edge of the school's grounds. Sometime in the previous day or so a car had struck the pole holding the bus stop sign, which was now bent to an angle of about sixty degrees with the pavement. I straddled the pole and leaned back against it, reclining there while I waited for the bus.
A police cruiser stopped, and the officer, disturbing my reverie, barked, "Get off that pole!" I looked in his direction, perplexed and a bit annoyed. "What?"
"Get off that pole! That's destruction of public property!"
I said the '70s equivalent of "Dude? Seriously?" - explaining that the pole had obviously been hit by a car, and it was ridiculous to think that I had the strength to bend this solid metal cylinder.
The officer was disinclined to engage in thoughtful discussion. I had questioned his authority. He ordered me into the cruiser and drove to a police station, where I was placed in a holding cell. A few hours later my mother appeared to pick me up and take me home.
I was not arrested or charged, and I acquired no record of violating the law. I did, however, learn a valuable lesson about what is, and what is not, a sensible way of responding to an order from a police officer. Questioning authority is risky.
A young man in Missouri last summer was told by a police officer to get off the street and walk on the sidewalk instead. His response, like mine, was to question authority. I could have told him that was a bad idea.
The sequence of events that followed was tragic - and resulted in the young man's death. A grand jury spent many hours hearing testimony and reviewing physical evidence and drew the conclusion that there was insufficient evidence to indict the officer. I have not heard the testimony or reviewed the evidence myself, and so I cannot draw an independent conclusion. News reports suggest to me, however, that the investigation was thorough, and our legal system must be trusted to reach conclusions that are fair, even if they do not satisfy everyone - indeed, even if they satisfy no one.
The reaction in Missouri to the shooting last summer included looting and rioting. The reaction to the grand jury's conclusions has been worse: more rioting, more looting, numerous cars and buildings set ablaze. The property damage will likely total in the millions of dollars. The damage to the collective psyche of the community cannot be tallied.
The population of Ferguson, Missouri is about two thirds black. The police department, nearly all white, has only a small handful of black officers. The fact that the officer was white and the young man questioning his authority was black was, as a matter of statistical probability, to be expected. One can argue that a police force should mirror the community it serves. But often it does not, and yet the police and the citizens must still engage each other in ways that are positive and constructive.
Michael Brown, according to the accounts available to the public, not only questioned authority but responded in a hostile, aggressive manner. There was a scuffle. Brown's conduct was described as threatening. The officer said he feared for his physical safety. Brown was a big fellow, with an intimidating physique. Given his size and his behavior, the fact that he was unarmed did not figure into the officer's reactions.
Michael Brown has been variously described as a congenial and docile boy and as a thuggish delinquent. We have all read the news accounts and have drawn at least some tentative conclusions of our own as to what he was really like. But we did not know him, and we did not know Officer Darren Wilson.
It is easy to believe that what happened between Brown and Wilson was largely determined by the color of their skin, or by the tension between law enforcement and the community when the police department and the citizenry are demographically on opposite sides of a racial divide.
I do not doubt that race can be a factor in such incidents. But this particular incident started with a response by a teenager to a command from an officer of the law. That response was not one of respectful obedience, and a teenager responding to a police officer with anything less than respectful obedience is a bad idea. I learned that the easy way, if spending a few hours in a holding cell and then having to explain oneself to a mother who was every bit as authoritarian as the police officer can be described as easy.
But compared to what happened in Ferguson, Missouri last summer, my lesson came cheap. Michael Brown is dead. And last night Ferguson was ablaze. It may be partly about race - anyone who says race played no role is naive and will be branded a fool. But it is also about teaching our children that the command "Question authority" comes with a responsibility to do so prudently. It is not something to do in interactions with police officers, to whom we have assigned the responsibility to exercise authority in maintaining order in our communities - and who must spend all day, every day, dealing with persons who question authority, often in belligerent ways, and sometimes in ways that put an officer's personal safety at risk.
We have much to do in this country to bridge the racial divide whose roots go back to the slave trade. Thomas Jefferson, prescient in his foretelling of the Civil War, said the question of slavery awakened him with terror, "like a fire bell in the night."
A century and a half after the Civil War, that bell is still ringing, and the fires are still blazing, this time in Ferguson, Missouri. We had better get to work.
Tuesday, November 25, 2014
Monday, September 15, 2014
The Star Spangled Banner
Lately I have seen some graphics posted in various social media about "The Star Spangled Banner," specifically about the notion that its place as our national anthem should be reconsidered. On Facebook I have been repeatedly urged to "like" a graphic if I think we should keep it.
I can't say I ever gave much thought to whether it should or shouldn't be the national anthem. I am a bit of a history buff, so I always found interesting the fact that the lyric was composed by a lawyer (Francis Scott Key) held prisoner aboard a British ship in Baltimore Harbor during the War of 1812, with "bombs bursting in air" and "rockets' red glare" over Fort McHenry.
As many of my readers know, but some likely have forgotten (or never learned), the lyric was much longer (four stanzas) than what we usually hear or sing (just the first).
This past weekend was the bicentennial of Key's composition of the lyric. While there were surely some festivities in Baltimore, if the rest of the country paid much attention to the fact that "The Star Spangled Banner" is now 200 years old, I missed it.
Some years ago a junior reporter (from the New York Times, if memory serves) conducted a person-on-the-street survey, asking people questions about our national anthem. The format was multiple choice, which means the answers had to be only at what educators call "recognition level" in the minds of respondents. In other words, it's easier to pick Francis Scott Key's name from several possible choices than it is to remember it if the question is fill-in-the-blank.
Not surprisingly, most people could do that. But the results were much worse when they were asked during what war it was written, what the man who wrote the lyric did for a living, or what harbor the ship was in while he was being held prisoner on it and was inspired to compose. As I recall, the War of 1812 came in last, behind several other choices, and so did Baltimore Harbor - despite the fact that one of the other choices was Omaha, which sits on the Missouri River (with another river, the Platte, to its west) but doesn't actually have a harbor. Baltimore also trailed San Francisco, but if you picked the wrong war, you probably also wouldn't know that San Francisco wasn't a city in 1812: the settlement there was called Yerba Buena until the Mexican War of the 1840s, as a result of which it became part of the United States.
The more I thought about it, the more I realized that our attachment to our national anthem might be based on nothing more than sentimentality. Just as most people didn't seem to know it was composed during a battle in the War of 1812, I doubt many are aware it was officially adopted as our national anthem in 1931. I shudder to think how many people never really give any thought to the meaning of the words. The lyric is about how our flag flew over Fort McHenry during the battle and served as a symbol of this fledgling country's determination to endure and to survive what was, in essence, its second war for independence from Great Britain.
We do have a certain obsession with our flag. As a child, not only did I have to sing the national anthem from time to time - which for a boy who (according to my sister) needed a basket to carry a tune often meant mouthing the words while others sang - but I recited the Pledge of Allegiance (to that star spangled banner) every morning in school. I have no trouble understanding this. Frankly, I think our national banner, spangled as it is with white stars on a blue field accompanied by 13 red and white stripes, is the best looking of all national flags. I admit to a certain nationalistic bias, but that's what I think, just the same.
But the idea that we should replace "The Star Spangled Banner" with another song as our anthem has been around for quite some time. People have complained that it's too hard to sing, and even though we typically sing only the first of the four stanzas, many still have trouble remembering the words. Furthermore, in the last generation or two it has been fashionable to show a distinct lack of respect for our flag. We all know about the controversy over the words "under God" in the Pledge, but for as long as I can remember there have been people who didn't want to pledge allegiance to a flag at all. Maybe "to the republic for which it stands," but not to the flag itself. People have insisted on making articles of clothing out of the flag, or at least wearing articles of clothing made to look like it, which is deeply frowned upon by traditionalists and is technically a violation of US law (the Flag Code), although the Supreme Court has told us violations of the Flag Code are protected by the First Amendment. And of course the extreme version of disrespect, burning the flag, is also a form of expression protected by the First Amendment.
Yet, despite all this belittling of our flag, we continue to pledge allegiance to it and (at least try to) sing Francis Scott Key's ode to it in his lyric. I'm convinced it's a matter of tradition. Frankly, I even think at least a little bit of the opposition to statehood for Puerto Rico or the District of Columbia stems from the fact that people like the flag with fifty stars and don't want to see it changed. (But, hey, if the Big Ten athletic conference can keep that name through the addition of teams - now up to fourteen! - we wouldn't really have to add more stars.)
Having thus decided that tradition and inertia account for our attachment to it, I looked about for reasons to change it - perhaps to "America the Beautiful," a popular homage to this great land that one could argue is about our nation and not about its flag, or war. (The trouble with that song may be its repeated references to God, a drawback also to "God Bless America," sure to raise the ire of those who insist upon "freedom from religion." Even "Hail, Columbia," which was often used as an anthem before the official adoption of "The Star Spangled Banner" in 1931, is a song about war with religious overtones and mention of God.)
Edward ("Ted") Widmer, an historian (Harvard Ph.D.), wrote an illuminating article on the subject for the online magazine Politico, in which, after the obligatory mention of the music being that of an old English drinking song rather than composed originally for our anthem, he gets to something of real substance: namely that Key was a slaveholder (and vigorously defended the "peculiar institution") and that the lyric (in the third stanza) makes reference to fugitive slaves who fought with the British in that war.
Widmer does a fine job of assembling the arguments in favor of change and seems to favor "America the Beautiful" as a replacement. Although very much a traditionalist myself, I would not object to serious consideration of doing that. I would, however, insist that the lyric be adopted in present form, with no mucking around to get rid of the phrase "God shed his grace on thee." Given that we live in a time of tension between Christians and atheists, the former would surely agree with me on that, and the latter would strenuously object. So I think "The Star Spangled Banner" is going to remain our national anthem.
Saturday, September 6, 2014
A Marxist Minimum Wage?
This morning I read a comment on social media about the minimum wage - specifically about the notion that some fast food workers are asserting they should be paid $15/hour. This is a fascinating subject in economics. Not just because economics is fascinating - which it is to me, but not to most people - but because it's one of those issues in economics about which virtually everyone has an opinion.
Those who support a higher minimum wage note that it is not a "living wage." Certainly someone earning minimum wage with no benefits would be challenged to support himself at today's price levels, never mind any dependents. But of course life is not that simple, because a person with that level of income may be eligible for public assistance in various forms, including Medicaid, housing subsidies, SNAP (food stamps), etc.
Those opposed suggest that the minimum wage is not really supposed to be a living wage, not when it is paid to people in entry-level jobs often held by teenagers or part-time workers adding to a family's income. Trying to figure out just how many people are really struggling to support themselves as independent adults on minimum wage jobs is a bit of a challenge. But there seem to be enough of them that we can agree we are not just talking about burger-flipping high school kids making money to pay for their social lives and give their parents a break.
Karl Marx is famous for (among other things) having popularized the expression, "From each according to his ability; to each according to his need." In the conception of many, this means we should all contribute to society in the manner of which we are capable, and our economic reward should be based on our need for sustenance. Thus if I have the innate ability to master the knowledge and skills required to practice medicine and my neighbor's natural attributes are suitable for driving a bus, those are the things we should do. But my needs in life are inherently no greater than his, and so my income should not be (as it is in the United States) substantially higher.
How does such an economic model work with human nature? I've never driven a bus, but I have driven a truck (delivering newspapers), and I can tell you it's much less mentally demanding work than practicing medicine, and there are some days when I'd gladly switch to doing that if it paid the same as being a doctor. It wouldn't be intellectually challenging or rewarding, and in some ways it is physically more strenuous, but overall I'd have to say it was a much easier job.
In our economy, with market forces, one of the things motivating people to enter professions that require high levels of intellectual ability and many years of education and training is money. Those professions are financially rewarding. Are there plumbers who earn as much as pediatricians? Sure. But, by and large, the jobs that require a lot of smarts bring bigger bucks.
We want the best and brightest of our nation's youth to enter professions like medicine. (We can argue about some other high-paying professions. For example, there seems to be general agreement that we need more engineers and not so many lawyers, so maybe we should look at incomes for those professions.) If physicians are paid no better than manual laborers, how will we create a society in which young people with the innate ability to become doctors choose that path?
Thus, the first - and arguably most important - question is whether the minimum wage should be adjusted so that it is a living wage. That seems a no-brainer, unless there is a downside. Surely there is a downside. Let's look at the worker at the fast food restaurant who earns minimum wage. If that wage is raised - without any increase in revenues to support it - where will the money come from? Will the restaurant simply lower its profit margin? Unlikely, unless there is sufficient competition to make raising prices unwise, and profit margins are currently ample, making a reduction acceptable. But if all fast food restaurants are affected the same way by an increase in the statutory minimum wage, then most likely all will raise prices. (For those of you who are aware of how things are different in other countries, I'm staying away from that because it makes the analysis even more complicated than it already is. For those of you interested in the comparison, see this essay from Denmark.)
This concern can be generalized. If the minimum wage were raised for all, would that not put upward pressure on prices, meaning there would be inflation, with the corresponding reduction in purchasing power, so those earning minimum wage wouldn't really be better off after all? The short answer is no. You see, most of us earn more than minimum wage, so if we raise that wage, the upward pressure on prices is limited because most people's wages are not going up. So there might be a little inflation, with a very modest effect on purchasing power, so those earning minimum wage still benefit substantially, while the negative effect on everyone else is quite small.
What about the burger? If you tell Burger King it has to pay its minimum wage workers 40% more (the effect of raising the minimum wage from $7.25 to $10.10), won't the price of a burger go up 40%? Again, the short answer is no, because wages - and especially the wages paid to workers earning the minimum - are a relatively small contributor to the price of a burger. Would the price of a burger go up? Probably - but probably only a little.
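For readers who like to see the arithmetic, here is a quick back-of-the-envelope sketch. The wage figures are the ones above; the labor share of the burger's price and the fraction of payroll going to minimum wage workers are numbers I am making up purely for illustration, not data from anywhere.

```python
# Back-of-the-envelope pass-through arithmetic for a minimum wage increase.
# The labor share of the price and the minimum-wage share of payroll are
# illustrative assumptions, not figures from this essay.

old_wage = 7.25
new_wage = 10.10
wage_increase = new_wage / old_wage - 1          # roughly 0.39, i.e. about 40%

labor_share_of_price = 0.25                      # assume labor is ~25% of the burger's price
minimum_wage_share_of_payroll = 0.50             # assume half of payroll goes to minimum wage workers

# Only the minimum wage slice of labor costs rises, so the implied price increase is:
price_increase = wage_increase * labor_share_of_price * minimum_wage_share_of_payroll

print(f"Wage increase: {wage_increase:.0%}")                    # ~39%
print(f"Implied burger price increase: {price_increase:.1%}")   # ~4.9%
```

Under those (invented but plausible) assumptions, a roughly 40% wage hike shows up as a price increase of around five percent - "probably only a little," as I said.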
What about the possibility that workers would be dismissed? There may be some sectors in which that is a real possibility. Burger King is not in one of them. BK employs the number of people it takes to do the work. No more, no less. They cannot really schedule fewer workers just because they're paying them more; that would mean worse service and loss of business, unless their workers suddenly became more productive, which is unlikely.
I think this analysis makes it pretty clear that raising the minimum wage will do far more good for workers at that level than any downside it might create elsewhere.
And that brings me back to the consideration of economic incentives. As is evident from this essay - and more so to my regular readers - I have a special interest in health care. Suppose a new high school grad can get a job flipping burgers for $15/hour. Alternatively, she could enroll in the local community college and get an associate degree to become a paramedic. The second option costs money and takes 15-18 months of serious study. In many places in the US, paramedics don't get paid more than $15/hour. You can say that being a paramedic is much more emotionally rewarding than flipping burgers, but it takes education and training to get the job, and it's hard work. (Also, there are many websites devoted to the frustrations of work in EMS and the abuse of these folks by their patients, so I'm concerned about how many of them would chuck it all for a job at Micky Dee's if the money were no different.)
Notwithstanding our philosophical attraction to egalitarian principles, there is a lot of class warfare in this country. Some of it is between the rich and the poor, and there is much angst about the growing gap between them. The US has unequal distribution of wealth to a degree substantially greater than other First World nations. Corporate CEOs are paid hundreds of times what their line workers earn. But I am most intrigued by the class warfare between those who are much closer to each other - between the poor and the lower middle class, for example. So, the person who earns $15/hour and works hard for that money may find it truly irksome that the fellow flipping burgers should assert a right to the same pay.
But the most serious macroeconomic problem we currently face, in my view, is not that the minimum wage is too low, but that employment is too low. Forget the unemployment rate, which is a meaningless number if ever there was one. Look instead at the labor force participation rate, which has been on a slow but steady decline over the past decade, and which shows that for every five people currently working, there are three others who should be but aren't.
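The "five working for every three who aren't" ratio follows directly from a participation rate in the low 60s. Here is the arithmetic, using a rate of about 63% as an approximation of where it stood in 2014; note that the official rate counts people working or looking for work, and treating everyone outside the labor force as someone who "should be" working is my framing, not a statistical definition.

```python
# Arithmetic behind the "five working for every three who aren't" framing.
# The participation rate used here (~63%) is an approximation for 2014, and
# counting everyone outside the labor force as someone who "should be"
# working is the essay's framing rather than an official definition.

participation_rate = 0.63

in_labor_force = participation_rate
outside_labor_force = 1 - participation_rate

ratio = in_labor_force / outside_labor_force     # about 1.7, close to 5/3

print(f"In labor force vs. outside: {in_labor_force:.2f} : {outside_labor_force:.2f}")
print(f"Ratio: {ratio:.2f} (roughly 5 to 3)")
```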
What's the connection? This dismal statistic powerfully influences my view of the minimum wage debate. We can talk about whether the high school kid flipping burgers deserves $15/hour, or even $10/hour, and I won't argue with much enthusiasm for either wage. But should the person actually trying to support himself on minimum wage be paid something that makes that feasible? I look at this person and think, hey, this is somebody who's working, who has an oar in the water. This is a person who is actually trying to hold up his end of the Marxist aphorism: from each according to his ability. If we, as a society, cannot find a way to hold up the other end, and see to it that he gets paid enough to support himself, we had better step back and re-think our economy.
Friday, July 4, 2014
Hobby Lobby: Time for a Revolution
Well? Should we make Hobby Lobby provide health insurance coverage that pays for birth control?
To begin with, I will remind readers that I wrote about the issue of coverage for birth control by private health insurance over two years ago, making the argument that the economic case for coverage for women who are not indigent is weak. Women who are sufficiently needy that such coverage makes the difference between having and not having access to birth control are typically covered by Medicaid or eligible for low-cost services from Planned Parenthood.
So we are, as a practical matter, not talking about denying women access to products or services based on someone's religious beliefs. What the Supreme Court said in Burwell v. Hobby Lobby is that, under certain circumstances, a company that is "closely held" (private ownership by a small number of people) cannot be required to pay for coverage for birth control the use of which violates their religious beliefs.
The 5-4 majority opinion in this case relies heavily on the Religious Freedom Restoration Act (RFRA), a 20-year-old federal law intended to preclude substantial burdens upon the free exercise of religion. That law was enacted with a mere three dissenting votes in the Senate and not a single "nay" in the House, which is quite remarkable. Surely the Congress had no idea that two decades later the law would be the basis for such a controversial Supreme Court opinion. We have a national legislative body of 535 people who have ceaselessly behaved as though they've never heard of the Law of Unintended Consequences.
The Supreme Court's decision has been lambasted by progressives in every forum imaginable. This I understand, because many people who vigorously advocate in the realm of reproductive health care have very strong feelings about these issues. And I am inclined to agree with them that the effects of this decision are undesirable. But that doesn't mean the decision is incorrect. Examined from the standpoint of legal reasoning, the majority opinion seems to me to be entirely logical. Difficult though it is to separate one's view of a judicial opinion's effect from the logic of its reasoning, when I force myself to engage in the exercise, that is my conclusion.
The logical short-term solution is obvious from the text of the RFRA itself. A federal statute adopted subsequently can explicitly exclude application of the RFRA to its provisions. Problem solved. Can federal law mandating coverage of contraception now be enacted with such explicit language? That is a political question. And the people who have the power to answer that question are the ones going to the polls this November.
Quite willing to be an opportunist, I am now going to take advantage of the opening afforded by this decision to make the case I have advanced in many postings on social media but have not expounded in this blog.
I don't want Hobby Lobby to pay for birth control. I don't want Hobby Lobby to pay for anything. I don't want Hobby Lobby to provide health insurance coverage to its employees. I don't want any employer in the US to provide health insurance coverage to its employees.
In the United States of America it is time for a Declaration of Independence from the absurdly fragmented non-system of financing health care that has plagued us for decades. We have some people covered by employer-based health insurance. We have some people covered by government programs. We have 15% of the population covered by nothing. When we look at all of the people, no matter which group they fall into, it is arguable that the current arrangement serves no one well. Ask doctors, nurses, and the people who run hospitals if they think the status quo is good for any of them or any of their patients. Ask patients - even the ones on Medicare, which has the highest levels of satisfaction - whether everything is just fine and dandy. The Medicare patients who've been socked with big bills, because the government would pay for observation (meaning outpatient) services instead of a hospital admission, will tell you they are none too pleased.
What's the problem here? All of the payers are trying to get away with paying as little as possible for all services provided. They'd like to pay nothing whenever they can get away with that, and leave doctors, patients, and hospitals holding the bag.
The solution? A system that includes everyone: a National Health Service. Everyone is a stakeholder. All patients have the same status, because everyone is a "covered life" on equal footing. There is no health insurance industry, because there is no health insurance. People who work pay taxes. These public funds are used to finance the health care system. There is only one payer. Decisions about what to pay for and how much to pay are made by agencies of the government composed of stakeholders, heavily weighted toward experts: doctors, health policy wonks, health care consumer representatives.
Do you think when decisions are made that way we will have anyone deciding it makes any sense not to pay for birth control? Of course not! Because that is bad public policy and bad health policy, defying all logic and common sense. Whether it violates anyone's religious precepts won't even be part of the discussion. I can hear some readers asking, "What about the Hyde amendment, prohibiting the use of federal funds to pay for abortions through the Medicaid program?" That is yet another example of why we need to scrap the current crazy-quilt non-system and start over, taking these decisions out of the hands of politicians and giving them to governing boards with representation from all groups of stakeholders. When everyone in the nation is a stakeholder, we won't have anyone making decisions that don't have broad support. If it is decided that something will or won't be paid for, that decision will have to stand up to scrutiny from the entire body politic.
Now I will confess that I do not believe we will have a system like the National Health Service of the United Kingdom in my lifetime, because I have only a few decades left, and the political power of the health insurance industry is immense and will be brought fully to bear to prevent such a change. But I am here to tell you that we will never solve all the myriad problems arising from the insanity in our health care financing until we make just this kind of radical change.
Friday, May 2, 2014
The Last Transition
The phone rang in the emergency department at two o'clock in the morning. The unit clerk answered, spoke briefly, hung up, and turned to tell me a patient upstairs had died, and the nurse needed me to come and pronounce death.
It is an oddity of state law where I practice that, when a patient dies in the hospital, a physician must pronounce death. A nurse can identify the absence of breathing and a heartbeat every bit as well as I can. But nobody asks me about things like this when they periodically revise the Nurse Practice Act.
I am at a community hospital that is part of our health network. My primary position the last three years has been at a big teaching hospital, where there are residents (doctors training in specialties) in the hospital around the clock, and so there is no shortage of physicians to attend to such matters. Here, however, late at night there is only one doctor in the building - the emergency physician - and so all responsibilities requiring a doctor belong to one person.
Sometimes that means attending to emergencies involving inpatients: critically low blood pressure, respiratory distress, prolonged seizures. Sometimes it means seeing patients at the very end of life.
As I walked into the patient's room, I was reminded of one of the very first times I did this. A very recent medical school graduate in my first year of residency training, I was accompanied to that patient's room by an equally newly minted nurse. I entered the room and gazed at the ceiling. The new graduate nurse asked me what I was looking at, and I explained that when a person has just died, if you watch closely, you can see the spirit rise. I was lucky she didn't smack me for that twisted sense of humor, instead briefly thinking I meant it and then saying, "Oh, you're teasing me."
Tonight I placed my stethoscope on the patient's chest and listened for a heartbeat as I watched for the rise and fall of the chest that would be present if she were breathing. No heartbeat. No breathing. The nurse handed me the patient's chart, and I signed the "pronouncement of death" portion of the state form.
My note on the chart said, "Called to see patient who had ceased to breathe. No heartbeat or respirations. Death pronounced." I signed my name.
I looked at the patient's face and thought about whether her expression was peaceful. And I realized I knew nothing about her other than that she was now dead. I could review the chart to see if I could figure out the cause. When I read newspaper obituaries, they rarely say anything about the cause of death, unless it's a famous person. And I often wonder.
But that wasn't what I really wanted to know. I had a much longer list of questions.
What was she like as a child? What were her hopes and dreams? What about as a young woman? Did she fall in love and get married? Did she bear and raise children? Was she a homemaker, a wife and mother? Did she work outside the home, pursue a career? Was she a homebody, or did she travel? What were her aspirations for herself, her husband, her children? Were they fulfilled?
Did she have grandchildren? How many? What was she like as a grandmother? Did she try to make up for all the mistakes she had made as a mother through lack of experience? Did she try to give the benefit of that parenting experience to her children and their partners raising that next generation?
If she had married and had children and grandchildren, what were their memories of her, and which ones were their most cherished? Did she outlive her husband? How badly, and for how long, had she missed him after his passing? If he had survived her, how would her death affect him?
Had she had any regrets? Had she come to terms with them?
Had she ever made a bucket list? How many of the things on that list had she been able to cross off? How many were left? How many were only dreams, things she knew would never be crossed off but she thought still belonged there?
I will never know the answers to those questions, but I will ask them just the same, about every patient I care for who dies, and every one I see for the first time at the very end of life. I will write on the chart, but nothing I write there is what matters.
Every one of us is a collection of answers to such a list of questions. Each time I reflect on them, I think about what the answers will be at the end of my own life. Sometimes I think I should not dwell on my own mortality. But I believe keeping in mind that the journey is finite can sharpen our focus on making the most of it. And I happen to be in a profession in which reminders that the journey is finite are all around me, all the time.
Thursday, March 13, 2014
Do "Stand Your Ground" Laws Make Things Better or Worse?
A recent report on National Public Radio examined this question. Has the enactment of "Stand Your Ground" laws made things worse instead of better?
To review, for those who weren't paying attention when the public spotlight shone on such laws in the aftermath of the killing by George Zimmerman of Trayvon Martin in Florida, such a law says, in essence, that one may use lethal force in self defense when s/he is anywhere s/he has a legal right to be.
Florida's "Stand Your Ground" law wasn't actually relevant to the case, as Zimmerman claimed self-defense as justification for the shooting without reference to the provisions of the law, but that was a legal distinction that was lost on the general public because it was largely ignored in the news reporting.
The NPR report relied heavily on a new study in the Journal of Human Resources by Cheng and Hoekstra from Texas A&M University. These investigators are economists who used the tools of social scientists to examine empirical data, comparing states that adopted new laws of this nature with others that did not, over time.
The first problem with the study, which is not a major flaw, is that it combines two different kinds of laws: stand your ground laws and the castle doctrine. The castle doctrine, simply, is the idea that one has no obligation to retreat from criminal assault in one's own home. The authors assert that the castle doctrine is a feature of English common law (which is historically true) and commonly applied in the US, sometimes in statute, sometimes in case law (which is a bit misleading, because there are some states - New York a prime example - in which the duty to retreat, even from one's own dwelling, remains).
So, the incorporation of the castle doctrine into statute and the enactment of stand your ground laws are combined for purposes of this study. The researchers then looked at the effect of such changes on homicide rates. The paper is long (43 pages) and, as one might expect of a study that incorporates statistical analysis and modeling, rather dense. But it is methodologically rigorous. Like any such study, it necessarily relies upon some assumptions that are subject to question or criticism, but the authors (to their credit) acknowledge this and (in some particulars) show how alternative assumptions affect the results of the analysis.
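For readers who want to see what "comparing states that adopted such laws with states that did not, over time" looks like in miniature, here is a hypothetical sketch of the underlying difference-in-differences idea. The numbers are invented for illustration, and this is not the authors' actual model, which controls for many more factors.

```python
# A hypothetical sketch of the difference-in-differences idea: compare how the
# homicide rate changed in adopting states with how it changed in states that
# did not adopt such laws over the same period. All numbers are invented.

rates = {
    # (group, period): average homicide rate per 100,000 residents
    ("adopting", "before"): 5.0,
    ("adopting", "after"): 5.5,
    ("non_adopting", "before"): 5.0,
    ("non_adopting", "after"): 5.1,
}

# Change over time within each group of states
change_adopting = rates[("adopting", "after")] - rates[("adopting", "before")]
change_non_adopting = rates[("non_adopting", "after")] - rates[("non_adopting", "before")]

# The estimate of the law's effect: how much more the adopting states' rate
# changed than the non-adopting states' rate did.
effect_estimate = change_adopting - change_non_adopting
print(f"Estimated effect of adoption: {effect_estimate:+.2f} homicides per 100,000")
```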
The central conclusion is that these laws have been associated with an 8% increase in homicides relative to states that did not adopt such laws. To be clear, that does not include killings that were reported to the national database as justifiable homicides, although the investigators note that only a small fraction (probably no more than 20%) of justifiable homicides are reported that way.
Using the 20% assumption, they estimate that about half the difference in killings between adopting and non-adopting states can be accounted for by justifiable homicide. They also acknowledge that if they had used a 10% assumption, which they grant is entirely within the realm of reasonable possibility, all of the additional homicides would then (statistically) be accounted for by the "justifiable" category. As a matter of definition, "justifiable" in this context means that lethal force was used to stop a felony in progress in a situation in which law enforcement or a prosecutor considered it proper for a civilian to do so.
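To make that arithmetic concrete, here is a hypothetical sketch of how the reporting-rate assumption changes the share of the additional homicides that could actually be justifiable. The counts are invented; only the logic mirrors the discussion above.

```python
# Hypothetical counts illustrating how the reporting-rate assumption changes the
# share of the additional homicides that could actually be justifiable killings.
# These figures are invented for illustration; they are not taken from the paper.

additional_homicides = 600   # extra homicides (not coded "justifiable") after adoption
reported_justifiable = 75    # killings coded as justifiable in the national database

for reporting_rate in (0.20, 0.10):
    # If only `reporting_rate` of justifiable homicides are coded that way, the
    # rest are hiding inside the ordinary homicide count.
    total_justifiable = reported_justifiable / reporting_rate
    unreported_justifiable = total_justifiable - reported_justifiable
    share_of_increase = unreported_justifiable / additional_homicides
    # A share at or above 100% means the entire increase could be justifiable.
    print(f"Assumed reporting rate {reporting_rate:.0%}: unreported justifiable "
          f"killings could explain ~{share_of_increase:.0%} of the increase")
```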
[Oh, about this commonly accepted 20% assumption: we must realize it is based on the work of a criminologist whose methods are also very rigorous but who is perceived by pro-gun-control advocates as supportive of the right of armed self defense. Any time you're looking at work in this field, it is important to know whether the researchers have any biases, either real or perceived.]
So where does that leave us? What does it mean if homicides increased by 8%, which is a significant change, and roughly half of that increase was justifiable? Is it bad to have more homicides, whether they were justifiable or not? Did the justifiable homicides take some career criminals off the streets, thereby preventing future crimes? What about the other half (and remember the underlying assumption) that may not have been justifiable? Who committed those killings, and why?
The challenging questions to answer are the ones about the myriad subtle effects of these laws. Do they increase the number of people who choose to own guns for personal protection? The number who carry guns as they go about their daily lives? The willingness of those who keep and bear arms to use lethal force in self defense? Do they lower the psychological threshold for using lethal force? How do they affect the inclination of those who are armed to do everything possible to deescalate conflict, which is the legally and ethically correct thing to do? Do they increase (or decrease) the likelihood that a verbal argument will escalate into a fistfight and then a shooting?
We can all speculate about the answers to these questions, and our answers will be strongly influenced by our own biases. But the answers to these questions are unknown.
It would be interesting to examine the difference in homicide rates following adoption of these laws on a more granular level. How many more homicides were committed by people having a previously clean record, in lawful possession of a gun (with a permit, if not in their own homes), with the killing ultimately found not to be justifiable? Although there is an abundance of data showing that holders of carry permits very rarely use their guns in the commission of crimes, we don't really know the answers to these questions. Zimmerman was acquitted in the killing of Martin, but even a cursory analysis of the incident as reported in the news suggests that there was plenty of opportunity for deescalation and avoidance of the shooting.
My own personal perspective on these laws is simple. If I am ever (and I strongly prefer never to be) forced to use lethal force in self defense, I want the burden of proof to be on the prosecutor to show that I acted unreasonably. If that cannot be shown according to the standard required under criminal law (proof beyond a reasonable doubt), I want to be shielded from civil liability, where the standard is much lower (preponderance of the evidence). The first part seems to me to be of self-evident necessity to someone who uses lethal force to preserve his own life. The second part may be every bit as important, because in a civil negligence case one has no access to a public defender, and even a successful defense is likely to be financially ruinous.
The broader question remains, I think, unanswered. Do these laws merely protect the intended victim of a crime from being victimized by the legal system, or do they lower people's threshold for using lethal force and make ours a more violent society? Cheng and Hoekstra have given us an interesting look at the data. They conclude that these laws do not deter crime, and they worry that the increase in the homicide rate might be the result of many killings that should not have occurred - and would not have occurred absent a change in the law. They are right to worry about that. We all should. But we cannot yet draw firm conclusions.
Thursday, February 13, 2014
Undo the Flu
You cannot be serious, I thought.
[Flashback to the early 1980s. Professional tennis player John McEnroe was my favorite, not because of his whiny, bad boy personality but because he was a magician at the net. I see him standing on the court, hands on hips, one of them holding a tennis racquet, staring in disbelief at an official who has just made a call with which McEnroe plainly disagrees. "You cannot be serious!" McEnroe yells. This phrase later became the title of his autobiography.]
"Undo the Flu." You cannot be serious. This is wrong, on so many levels.
First, one cannot "undo the flu." One can treat the symptoms: fever, sore throat, cough, headache, muscle aches. This constellation of symptoms comprises what we call an "influenza-like illness." At the peak of flu season, the statistical likelihood that such an illness is caused by the influenza virus, as opposed to one of several other viruses that can cause the same syndrome, is about 50%. Why does this matter? We have anti-viral drugs that have activity against the influenza virus but not the others. So, if you get "the flu," there's only a 50-50 chance it's influenza and a drug that works against the virus might help.
There is a test (by swabbing the inside of your nose) that is pretty fair at distinguishing whether it really is the influenza virus. If it's positive, the doctor might prescribe an anti-viral drug. If you watch TV, you've probably seen the commercial for the most popular one, called oseltamivir, marketed under the trade name Tamiflu. When it first came out, I joked that I would prescribe it only for patients named Tami. (I'd be flexible on the spelling, so Tammy could have a prescription, too.)
Why was I unimpressed with it? For the typical patient with influenza, if the drug is started within 48 hours of onset of illness, it shortens the duration by about a day. Beyond 48 hours, it probably makes little or no difference. There are some patients for whom the drug is clearly recommended: patients sick enough with influenza to require hospitalization, for example. But for most people it will make a very modest difference, and only if it's actually the influenza virus, and only if it's started very early on.
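Putting those two rough figures together - a roughly 50% chance that an influenza-like illness at peak season is actually influenza, and about a day of illness saved if the drug is started early - gives a back-of-the-envelope expected benefit. This is only arithmetic on the approximations above, not clinical guidance.

```python
# Back-of-the-envelope expected benefit of the antiviral for an influenza-like
# illness, using the rough figures above. Approximate numbers, not clinical advice.

p_influenza = 0.50              # chance the illness is actually influenza at peak season
days_saved_if_influenza = 1.0   # approximate reduction in duration with early treatment
started_within_48_hours = True  # the benefit applies only to early treatment

if started_within_48_hours:
    expected_days_saved = p_influenza * days_saved_if_influenza
else:
    expected_days_saved = 0.0

print(f"Expected reduction in illness duration: about {expected_days_saved:.1f} day(s)")
```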
It does not "undo the flu." If you really want to undo the flu as a public health problem, get on the bandwagon of advocacy for widespread vaccination. The vaccine is not perfect, because it is not 100% effective, and (like any medical intervention) it can have side effects, and it doesn't protect against those other viruses. But if undoing the flu is your goal, prepare your immune system to fight it off before you get it.
Is there anything else you could accomplish by seeing a doctor for an influenza-like illness? The symptoms can all be treated with medicine you can buy in a drugstore without a prescription. If you're not sure what to buy, because you don't watch TV commercials, just ask the pharmacist. What can a doctor prescribe that's better? Well, there is one thing. You can get a prescription for a narcotic.
For many centuries, human beings have known the effects of the opium poppy. The principal active substance derived from that plant is morphine, and we've made numerous modifications to the morphine molecule. Some of them are used as pain relievers and cough suppressants. They also elevate mood for most people, which is what makes them potentially addictive. So if you want to feel better by suppressing cough, relieving pain, and improving your mood, you could hope the doctor will prescribe a narcotic. But don't count on that, because the heavy emphasis on how prescription narcotics are turning us into a nation of addicts, and killing us in droves through overdoses, has made many doctors skittish about prescribing them for anyone who doesn't have cancer pain or a broken bone.
By now it should be obvious that I'm saying influenza is - for most people - an acute, self-limited illness. Translation from medical jargon: it comes and it goes, and nothing much makes any difference in the natural history of the illness. Medicines may ameliorate the symptoms, and most of them can be had without a prescription. Nothing makes the illness go away faster, with the very modest exception of oseltamivir, prescribed under just the right circumstances, as I've explained.
So the "Undo the Flu" billboard is encouraging you to consult a health care professional for an illness which, most likely, will not benefit from professional health care.
I am now picturing the look on the face of Joseph Califano if he were reading this billboard. Califano was Secretary of Health, Education & Welfare (HEW) in the Carter Administration. Califano believed that in medicine supply generates demand. He was the architect of US health policy that led to dramatic slowing of the growth in the supply of doctors. As a result, that supply has not kept pace with demand, and today everyone agrees there is a shortage of doctors, especially in primary care, and for the next few decades there won't be enough of them to meet the medical needs of an aging population.
But the proliferation of urgent care centers, and the marketing of their services, is a spectacular proof of Califano's thesis about supply generating demand, and nothing captures the phenomenon better than a billboard encouraging people to seek professional health care of very modest (if any) benefit.
Urgent care facilities certainly have a niche to fill. Plenty of patients do need to see a doctor (or a nurse practitioner) for episodic care, and not everyone has a primary care doctor who can offer a timely or convenient appointment. But we have developed a habit, in the US and many other affluent nations, of seeking professional health care for every minor illness and every trivial injury. That is a costly habit. I don't know the size of its contribution to our ever-increasing national health care budget, but I think it is significant. And I think we should not be fostering it.
A few years ago I had the privilege of caring for a professional bull rider who'd been thrown and stomped. He had serious injuries to internal organs in the chest and abdomen. Over his protests, his buddies put him in the back of a pickup truck and brought him to the hospital. When I explained my findings and told him he needed to be under the care of a trauma surgeon, he told me he really thought he'd be fine. He thought he could just walk it off.
Of course he was wrong, but an awful lot of us see doctors when we really could just walk it off, because seeing a doctor won't make any real difference. Maybe we all need a little bit more of the cowboy in us.
Wednesday, January 15, 2014
Responsible Gun Ownership
Two recent stories featured by media outlets have gotten me thinking about responsible gun ownership. Both involved people who appeared to be law-abiding, rational citizens of the sort who can be trusted with concealed carry of a handgun.
The first incident was one of an "accidental discharge" of a semiautomatic pistol by a woman who is a member of the Kentucky state legislature (Rep. Leslie Combs). News reports indicated she was unloading the pistol in her office, although the reason for this was not entirely clear. She said something about having decided she did not wish to use the gun any more, according to a few news accounts, although most reports gave no reason at all for her actions. This doesn't really make any sense. If a member of the legislature with a concealed carry permit decided that she no longer wished to go about armed with a loaded handgun, why would she suddenly decide to unload it in her office?
Setting aside that question, the incident is illustrative of a few important points. The first is that one must be thoroughly familiar with the operation of any handgun one carries or otherwise possesses. She clearly had a lapse: apparently she did not realize the gun had a round in the chamber when she pulled the trigger. Presumably she made the error of thinking it was unloaded after removing the magazine. This error is sufficiently common that some pistols are designed with a "magazine safety." This makes it impossible to fire a round that is in the chamber if there is no magazine in place in the gun's receiver.
[In case you're wondering, there is a reason some pistols do not have a magazine safety. Although such a safety makes a pistol somewhat more idiot-resistant, it also makes it impossible to fire the round in the chamber if one is in the midst of changing magazines or has somehow accidentally pressed the button that ejects the magazine. In a life-threatening situation, being unable to fire the round in the chamber because there is no magazine in the receiver could have tragic consequences. On the other hand, making a pistol more idiot-resistant has great appeal. When I teach people about pistols, I tell them they must be aware of how a magazine safety operates, decide whether they want the pistol they are going to keep or carry to have such a safety or not, and know whether any pistol they possess is so designed. I also teach them how to find out, if they aren't sure, by having a round in the chamber, removing the magazine, pointing the gun downrange, and squeezing the trigger.]
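The decision logic being described can be summarized in a few lines. This is only a toy illustration of the logic in the preceding paragraphs, not a model of any particular firearm's design.

```python
# A toy model of the decision logic described above; it illustrates only the
# logic of a magazine safety, not the design of any particular firearm.

def can_fire(round_in_chamber: bool, magazine_inserted: bool, has_magazine_safety: bool) -> bool:
    """Return True if pulling the trigger would discharge a round."""
    if not round_in_chamber:
        return False
    if has_magazine_safety and not magazine_inserted:
        # A magazine safety blocks the chambered round when no magazine is seated.
        return False
    return True

# The scenario in the post: round chambered, magazine removed.
print(can_fire(True, False, has_magazine_safety=True))   # False: the safety blocks firing
print(can_fire(True, False, has_magazine_safety=False))  # True: the chambered round fires
```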
So my guess is that Rep. Combs had removed the magazine and had not checked for a round in the chamber. Then, when she pulled the trigger, she learned two things: there was a round in the chamber, and her pistol did not have a magazine safety. Such foolishness on the part of a gun owner is most unfortunate and proved quite embarrassing, given that the incident occurred in her office. Did she do anything right? Absolutely. She obeyed the first and most important rule of gun safety: she had the gun pointed in a safe direction. (That rule is commonly stated in either of two ways. Always have the gun pointed in a safe direction. Never point a gun at anything you do not intend to destroy. Notice the inherent assumption that any gun is loaded, no matter how sure you are that it isn't.) In this case, the bullet struck the base of a bookcase.
Why did this happen? Inadequate training? Mental lapse? We cannot tell from the news reports. Despite the statement that she had decided she no longer wished to have a loaded gun in her possession, for unstated reasons, she remains a staunch supporter of the right to keep and bear arms.
The other news story was much more disturbing.
A retired police officer got into an argument with another patron of a movie theater, who was texting on a phone. The movie hadn't started yet. The officer went to report the other patron to cinema personnel, who reportedly took no action. The texting patron was nevertheless annoyed about having been reported. The argument escalated. The texting patron reportedly threw popcorn at the officer. There may be missing details in the reports of the sequence of events, but the accounts I've read make no mention of any blows being struck before the officer drew a pistol and shot the other patron and that man's wife. The man who was texting died, and his wife was wounded and taken to the hospital.
At the cinema near my home, where I go to see a movie on infrequent occasions, there is a very emphatic announcement, which goes on at some length, about how the use of a phone for talking, texting, web browsing, etc. during the movie will not be tolerated and will result in the removal of the patron. That seems reasonable to me. Confronting someone who is texting before the movie has begun does not. If I didn't know there was going to be an announcement, I might say to my fellow moviegoer, in the most congenial manner I could summon, that I hoped he wouldn't do that during the movie. But probably not. And if his reaction was unfriendly, I'd probably just move.
I've read a great deal about the decision to carry a concealed handgun, and one of the recurring themes is that this places upon the person carrying the gun a great burden of responsibility to avoid conflict, and to do everything possible to deescalate any conflict that occurs. A verbal argument that has the potential to escalate into a fistfight becomes a much more serious undertaking if it has the potential to escalate into a shooting.
The striking thing about this story is that the shooter was a retired police officer. If anyone has been trained to use, in any sort of interpersonal conflict, all manner of behaviors short of physical force, and all manner of physical force short of lethal force, it is a police officer. Therefore, the shooter's background is a compelling reminder that training is no guarantee that a person will adhere to important principles of nonviolent conflict resolution. Such education and training is a good thing, and not only for people who have guns. In my work as an emergency physician, I see people every day who could have benefited from it and perhaps avoided injuries from punches, kicks, impact weapons, and sharp implements.
As I'm sure you would guess, these stories have generated many comments on news websites and in social media. Those who dislike guns cite these incidents as evidence that human beings simply cannot be trusted to behave carefully and rationally, even when they appear to be the sort who could be relied upon to do just that. The conclusion they draw is that people just should not be allowed to go about armed.
As you might guess if you are a regular reader, that is not the conclusion I reach.
This is because I believe in the right of self defense as a fundamental, natural human right, and I believe we live in a society in which, for some of us, the exercise of that right may require the availability of lethal force in the form of a gun. Without that, the elderly, the frail, the weak, and the slow will always be at the mercy of those who are malevolent and who are young, strong, and quick, even if they are not themselves armed.
So the conclusions I draw are straightforward. All who possess firearms have an obligation to be well trained, to practice frequently, and to learn thoroughly and internalize profoundly the legal and ethical principles governing the use of lethal force in self defense.