Saturday, May 28, 2011

An Ounce of Prevention is Worth ... Well, That Depends

When Benjamin Franklin said "An ounce of prevention is worth a pound of cure," he wasn't talking about taking Lipitor to keep from having a heart attack.  In fact, he wasn't talking about health at all.  No, he was interested in keeping people from burning their homes down by being careless with hot coals in dwellings with wood floors.  So who is following in Franklin's footsteps?  It isn't the American Cancer Society when it tells you to sign up for the joy of colonoscopy at age 50. No, it's Smokey the Bear.

There are many things we hear all the time in discussions of ways to promote good health that drive me bonkers because I think we should be precise in our use of language.  So, for example, it irks me every time I hear about mammograms and "prevention" of breast cancer.  That tells me people don't understand the difference between prevention and early detection.

But I digress, so let's focus.  What do we know about prevention?  Or maybe the better question is what do we think we know about prevention?  We have some pretty fair ideas about healthful diet and what we should eat to lower our risk of developing cardiovascular disease and some forms of cancer.  There is reasonably good evidence, for example, that trans fats are associated with (and probably contribute directly to) atherosclerotic cardiovascular disease and colon cancer.  Diets lower in saturated fats and refined sugars are likewise beneficial.  Regular exercise has an abundance of scientific evidence to support the belief that it reduces the risk of heart attack and stroke, and probably dementia, too.  The scientific literature on the relationships among diet, exercise, prevention of disease, and promotion of health is vast and fascinating.  I often quibble over whether the evidence is merely circumstantial and demonstrates an association without proving cause and effect, but a big pile of circumstantial evidence may be enough reason to change behaviors.  And some of the evidence is based on more rigorous studies that really do show causality.

I am more skeptical when it comes to medical interventions to prevent disease and promote health.  Really? you may be asking.  A doctor expressing skepticism about the value of medicine?  As one controversial political figure likes to say, You Betcha!

You see, for every medical intervention there are risks and benefits, and one must always know them, weigh them, and find the right balance.  So, while there are studies that show some people can lower their risk of premature death from heart attack by taking a statin (such as Lipitor), these drugs are not without side effects, occasionally serious ones.  And the risk-benefit balance is not the same for every patient.  People at high risk for an adverse health outcome are typically more likely to benefit from an intervention, while those at lower risk may be more likely to be harmed by the intervention.
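To make this concrete, here is a toy calculation in Python (a sketch with invented numbers, not data from any actual statin trial) showing how the same relative benefit translates into very different absolute benefits depending on a patient's baseline risk:

    # Toy numbers, purely illustrative - not from any particular statin trial.

    def nnt(baseline_risk, relative_risk_reduction):
        """Number needed to treat: how many people must take the drug
        for one of them to avoid the outcome."""
        absolute_risk_reduction = baseline_risk * relative_risk_reduction
        return 1 / absolute_risk_reduction

    RRR = 0.30  # assume the drug cuts heart attack risk by 30% in relative terms

    for label, baseline in [("high-risk patient", 0.20), ("low-risk patient", 0.02)]:
        print(f"{label}: 10-year baseline risk {baseline:.0%}, "
              f"NNT = {nnt(baseline, RRR):.0f}")

    # high-risk patient: NNT = 17; low-risk patient: NNT = 167.
    # Same relative benefit, very different absolute benefit - while the
    # rate of side effects is roughly the same for both patients.

If a serious side effect strikes, say, one patient in a hundred, the arithmetic favors treating the first patient and not the second.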

There is a long litany of issues in prevention of disease and promotion of health about which it is easy to find differences of opinion and swirling controversy. These include treatment of high blood pressure (specifically treating milder elevations), the use of drugs to lower cholesterol, tight control of blood sugars in diabetics, and the use of screening tests for early detection of prostate and breast cancers, just to name a few.

Give me enough free time to research the details on which I may be a bit fuzzy, and I could write a book about this stuff.  But that won't happen, because I'll never find an agent good enough to persuade a publisher to give me an advance that would cover my living expenses, so I must go on practicing medicine to support myself.  But that's OK, because someone else has already written the book. Actually, there is more than one, but earlier this year one was published with the title Overdiagnosed: Making People Sick in the Pursuit of Health (first author H. Gilbert Welch).  Dr. Welch makes a compelling case that we can do a lot of harm by looking for "abnormalities" in our anatomy and physiology that may not be "disease."

The best book in this area by a non-physician was published in 2007 by award-winning journalist Shannon Brownlee:  Overtreated: Why Too Much Medicine Is Making Us Sicker and Poorer.  Brownlee's work is especially illuminating if you are interested, as I always am, in how so many of the things that go on in our health care system seem to be driven more by the profit motive than by what is best for patients.

What does all this mean for you?  When a doctor recommends an intervention intended to prevent disease, detect it at an earlier, more curable stage, or in any other way promote better health, ask about the downside.  What is known about risks relative to potential benefits?  And not just for populations of patients, but for you as an individual, because the risk-benefit analysis is different for every person.  If your doctor doesn't seem to know how to answer that question - or, worse yet, is dismissive - do your own research or find yourself a better doctor.

Monday, May 23, 2011

Expensive Health Care, Part II of infinity

This week I am one of about 500 emergency physicians in Washington, DC for a meeting focused on political advocacy.  The major topic of discussion is health care reform and its implementation.  Everyone is interested in controlling costs. Policy makers claim a duality of interests: controlling costs and improving quality. They insist these two goals are not mutually exclusive and that there is reason to think certain measures to improve quality can simultaneously reduce costs.

One of the recurrent themes in discussions of health care is that the cost of health care has, for quite some time, been rising faster than the overall rate of inflation.  This is certainly true.  But it is an apples-to-oranges comparison. When the Bureau of Labor Statistics reports the Consumer Price Index, it is based on a "basket" of goods and services.  Most of the things in that basket don't change much over time.  The loaf of bread in that basket is very much like the loaf of bread that was in that basket 30 or 40 years ago.  (Of course it is much fresher than the old one.)
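A stylized bit of compounding arithmetic shows why that framing sounds so alarming (both rates below are invented for illustration, not actual government statistics):

    # Stylized illustration - invented rates, not actual BLS or CMS figures.
    general_inflation = 0.03   # assume 3% per year for the overall CPI
    medical_inflation = 0.06   # assume 6% per year for medical care
    years = 30

    cpi_multiple = (1 + general_inflation) ** years
    med_multiple = (1 + medical_inflation) ** years

    print(f"After {years} years, overall prices rise {cpi_multiple:.1f}x")   # ~2.4x
    print(f"After {years} years, medical prices rise {med_multiple:.1f}x")   # ~5.7x

But the loaf of bread is the same loaf, while the unit of health care is a very different product, which is the point of what follows.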

Is today's health care the same as what was available 30 or 40 years ago?  Yes, that is a rhetorical question. Advances in medicine have been such that the health care dollar is buying something very different now from what it bought a generation ago.  That makes comparisons difficult - especially the kinds of comparisons necessary to examine what something costs relative to what it cost in the past.

This is a crucial element in the consideration of what health care should cost, or of what we are willing to pay for it.  We are spending a higher percentage of our gross domestic product on health care now than ever before.  But we must ask not whether the rate of increase relative to the overall rate of inflation is too high, or whether the percentage of the GDP is too high, but whether what we're getting for the health care dollar is worth it.  That is a very different question.  Perhaps your answer to that question is no, leading you to the same conclusion - that we are spending too much on health care - but before we reach any conclusions, it is important to make sure we are asking the right questions.

Economists note that we spend a higher percentage of our GDP on health care than any other country in the developed world, which may be a better way to make comparisons, and yet it is unclear whether we are achieving better health outcomes than those other countries.  If you like that line of reasoning - and I do - it is still important to understand that the logical conclusion is probably not that we are spending too much but that we are not spending wisely.

During the debate over health care reform legislation there was much talk about comparative effectiveness research.  This research - intended to yield conclusions about what is useful and what isn't - is not something new, although we certainly need much more of it.  What must be new, however, is an enlarged public understanding of what we know and how we should make use of it.

The Public Health Service produced a set of guidelines years ago about the management of back pain.  Among the PHS recommendations, based on a solid foundation of evidence, was one suggesting that no imaging studies (x-rays or others) be obtained unless the symptoms have been going on for at least a month without improvement despite "conservative therapy."  (What we're talking about here is not back pain after a high-speed motor vehicle crash in which there is reason to be concerned about a spinal fracture or a spinal cord injury, but rather run-of-the-mill back pain such as most adults experience at one time or another as one of those annoying facts of life in a human body.)

But what is the reality of medical care in our system?  Many patients with back pain consult a physician and wind up with an order, very early on (long before a month's trial of conservative therapy), for an MRI of the spine.  There are many reasons for this, but the most important, I think, is that patients are not interested in taking a conservative approach that is recommended based on good scientific evidence.  They want to know exactly what is wrong, and right away. Physicians, eager to meet patients' expectations in a health care system in which the doctor-patient relationship is frequently characterized by consumerism and a view of the patient as a "customer," do what they think the patient wants.

We must reconsider our view of what it means to consult a physician.  When we approach the doctor-patient encounter with expectations of specific things the doctor should do, most often that's what we'll get.  We'll often get more rational test ordering, a more scientific approach to prescribing treatment, and a better outcome if, instead, we tell the physician the problem and work with him or her to find the solution without preconceived expectations of how to get there.  The doctor has gone through eleven to sixteen years of education and training after high school to acquire the expertise to figure out what is wrong with you and what to do about it.  So go to see the doctor with an open mind, prepared to take advantage of all that learning.

Thursday, May 19, 2011

If you think health care is expensive now....

Just wait until it's free!

This statement has now earned the distinction of being a time-worn adage.  And we are hearing it more and more frequently as social critics and public policy experts worry about what will happen to demand for health care services when 30-some million people are added to the ranks of those with some kind of health insurance under the Patient Protection and Affordable Care Act (PPACA), also known as Obamacare.

People with health insurance coverage tend to seek and consume more health care than those without.  The comparison becomes more striking when we consider that many of the 30-plus million newly insured will achieve that status through expansion of Medicaid, the public health insurance program for the poor.  Utilization of health care services by Medicaid recipients is typically greater than that of people with private insurance.

One perspective on this difference is based on a simple economic principle: when something is "free," people tend to use more of it, even to waste it.  Health care services for Medicaid recipients are not, of course, free, because they are paid for (albeit at a rate that does not cover costs) with public dollars.  (Note here that the acceptable terms include "public funds" and "taxpayer dollars."  "Federal revenues" may also pass muster.  But do not use the term "government funds" or anything like it.  There are no "government funds."  The government has no funds but those it takes directly from taxpayers.  Any time you hear about "government money," remember that's your money and mine.)

But while no health care services are free in the big picture, Medicaid recipients typically have no (or minimal) out-of-pocket expenditures.  So, as this analysis goes, when people's decisions about consuming health care services are unattended by financial responsibility, there is no part of the decision-making process that weighs whether the desired services are really needed - an aspect of economic decision making that occurs every time we have to pay for something.

Those of us who are in the business of providing health care, some of whose patients are Medicaid recipients, know that there is much more to it than a lack of financial responsibility.  It is easy to say that if we just imposed a co-pay, they would stop running to see doctors for every ailment, no matter how trivial.  But it is really not that simple.

No, of course it isn't, I hear the cynics saying.  And then they go on to cite another simple economic principle: the value of time.  You see, there are only 24 hours in a day, and we have to decide what to do with them.  How many times have you heard folks say they have not consulted a physician about something because they don't have time?  They are simply too busy with life's many obligations, most notably family and work.  So those who are stuck on simple economics point out - and this point is not without merit - that many (certainly not all) Medicaid recipients are unemployed, which means they have a lot more time to be consumers of health care than if they had to work for a living.

These two economic principles are no doubt important.  But there is another factor operating in this complex equation that I think may be just as significant: education.  In our society there is a strong correlation between level of education and income.  So it comes as no surprise that the poor, in general, and Medicaid recipients, in particular, are less educated than the general population.  Tied to that lack of education is a relative lack of understanding about matters of health, illness, and injury.  The more you know about health and illness, the better prepared you are to make sensible decisions about when you can take care of your own minor ailments and when you need professional help.

There are some who say they have the answer to this looming problem: Just Say No.  In other words, scrap the plan to add tens of millions to the Medicaid rolls. But while there is an economic argument to be made on either side of this question, there is also one of social justice.  I will save that for a future entry in this blog.

The policy wonks are surely correct in predicting a large increase in health care utilization when we add 30-some million to the ranks of the insured in the manner that is planned.  Just as surely, there are numerous approaches that might be effective in keeping the increase in consumption from becoming a runaway train. I believe the single approach with the greatest potential is education.  The newly insured must be given informational resources, targeted to their level of knowledge and sophistication, that will help them make rational and prudent decisions about seeking health care.  And the system must be designed to assure they take full advantage of those informational resources.  Only then can we assure that the currently uninsured are given access not to more health care, but to the right health care.  

Saturday, May 14, 2011

Do You Moth? Then This Is Your Day!

Leo Rosten wrote a delightful book titled The Joys of Yiddish (1968), and it has inspired me to find joys in other languages, especially English.  English is full of word play, largely because it is rich in irregular word forms.  So, while a worker is someone who works, a farmer is someone who farms, and (at least in a way) a plumber can be said to plumb, a mother really does not moth.  A good mother, however, does help her caterpillars turn into the most beautiful butterflies.

Last week I was looking for a card for Mother's Day.  They are typically divided into categories.  I was looking for the one labeled "Wife."  You can buy a card for your mother, your grandmother, your mother-in-law, your adult daughter who is now a mother, or your wife.  (There are others, but these are the main categories.)  My grandmothers have long since passed on, as have my mother and mother-in-law, and my adult daughter is doing things in traditional order, which means she is going to get married first (soon!) and become a mother later on.

The "Wife" section is pretty big.  I was looking for just the right card, which means I looked at all of them.  The right card is one that's pretty and has just the right sentiments, elegantly expressed.  I can express my own sentiments, but greeting cards are big business because so many of us think someone else can do it better, and they are conventional (and did I mention big business?), so I go along. Besides, I like a challenge.  If I wrote the sentiments myself, the challenge would be to write and revise until the words were perfect.  In the store, the challenge is to find a card written by someone else that says just what I want to say.  I was going to start that last sentence "In the card store," but nowadays there are so many stores trying to be all things to all people, I think most greeting cards are not bought at card stores any more.

Sometimes the selection of cards makes finding one that says just what I want to say more challenging than it should be, and then it's like taking a multiple choice test.  You know how that goes: you look at all the answer choices, and there is something wrong with each of them, but there isn't any choice that says, "None of the above."  That would be like leaving the store without a card, which is an option only if you have the time - and the inclination to enrich OPEC - to go from store to store.  So you pick the "best" answer.

The "Wife" cards all seemed to have been written by people who completely missed the point.  They were all focused on the marriage.  I am perfectly happy to buy cards for my wife that say what a wonderful marriage partner she has been, and I do that for her birthday, our wedding anniversary, and Valentine's Day.  I felt like the editor who had to tell the reporter she had completely missed what the story was really about.  (I am an editor, so this put me in my comfort zone, but I could not send the greeting card writers back to start over, because I am not their editor.)

Should a Mother's Day card for one's wife tell her what a wonderful wife she is? Sure.  But the main thing should be what a wonderful mother she is.  These cards were all written by folks who forgot the name of the Day.  Or maybe they just figured that if a man is buying a card for a woman, he is doing it to tell her how much he appreciates her in relation to himself, because men have a very egocentric way of looking at the world and everyone else in it.  Now I was starting to get a little offended, so I took a deep breath and refocused.

I did not expect to find a card that expressed my thoughts about a woman who was willing to take an extended leave from a very successful professional career to raise two daughters and then re-enter the work force to set an example for them of what a professional woman can do in balancing career and family.  I did not expect it to say that those two daughters are extraordinary young women, each of whom will make the world a better place in her own way, and that they turned out that way mostly because of their mother.  No, I realize specificity is always lacking in greeting cards.

But the right card should at least say what my thoughts are on Mother's Day: that she is a wonderful mother, and that I feel privileged to have been chosen to be the father of her children and to help her raise them to be the fine young ladies they are.  Was that too much to ask?

Apparently it was.  So I found one that at least said something about being a mother, and I wrote my own card to go inside that card.

Hallmark and American Greetings, and the rest of you: listen up!  Next year I expect you to do better.  I'll be paying close attention.  And I am encouraging all other husbands and fathers to do the same, and to remember that this is a day to tell your wife that the splendid job she has done being a mother is the reason for a big part of your love for her.

Tuesday, May 10, 2011

Tylenol Will Give You Cancer!

Or so said a story run by Reuters Health, a respected news service, on Monday (May 9, 2011).  Reuters was reporting on a study published in the Journal of Clinical Oncology.  To be fair, the Reuters article did explain that the association found by epidemiologists didn't exactly prove cause and effect, but that inconvenient fact didn't stand in the way of the headline writer: "Acetaminophen Tied to Blood Cancers."

If you think you can endure an explanation of the difference between an association and a cause-and-effect relationship, bear with me.  If not, there is surely a basketball or hockey playoff game you could watch.

Here's what epidemiologists do.  They take a big group of people and ask them about their habits - in this case, the relevant habit is their use of pain relievers. And they divide them into subsets that are distinct from each other with respect to what they report about this particular habit.  So, one subset uses acetaminophen more than a certain amount, the other less.  They then compare the two subsets to see whether they are different in the proportion who are afflicted with certain diseases.  They try to match the subsets to assure that they are similar to each other in other characteristics that might affect their likelihood of getting these diseases.  To the extent that they are not perfectly matched, they perform statistical manipulations to adjust for those differences.
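Here is a minimal sketch of why that matching step matters, with numbers invented to make the point (real analyses use regression models and many more variables):

    # Invented data: age confounds the association.  Within each age group,
    # users and non-users of the drug have identical cancer rates, but older
    # people are both more likely to use the drug and more likely to develop
    # cancer - so the crude, unmatched comparison shows a spurious link.

    strata = {
        # age group: (users, user_cases, nonusers, nonuser_cases)
        "older":   (1000, 10,  500, 5),   # 1.0% risk in both groups
        "younger": ( 500,  1, 1000, 2),   # 0.2% risk in both groups
    }

    users         = sum(s[0] for s in strata.values())
    user_cases    = sum(s[1] for s in strata.values())
    nonusers      = sum(s[2] for s in strata.values())
    nonuser_cases = sum(s[3] for s in strata.values())

    crude_rr = (user_cases / users) / (nonuser_cases / nonusers)
    print(f"Crude risk ratio: {crude_rr:.2f}")  # about 1.57 - looks scary

    for name, (u, uc, n, nc) in strata.items():
        print(f"{name}: risk ratio {(uc / u) / (nc / n):.2f}")  # 1.00 in each stratum

The adjustment rescues us here only because age is a variable we thought to measure - a pitfall taken up below.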

It is important to understand that epidemiologists are truly expert in collecting and analyzing this sort of information and in making the statistical adjustments that enable them to discern potentially meaningful associations.  So are there pitfalls that even their expertise cannot eliminate?  By now you can predict the answer.

First, when they try to match the groups being compared, they have to think of all potential "confounding variables."  Now, there's a problem.  Since they don't really know what all the factors are that might influence a person's risk of developing a "blood cancer," they cannot properly match the two groups, can they?  Nor can they adjust for whatever differences there may be between the two subsets in variables they haven't even identified.  And, of course, all this assumes that what people report about the habit researchers are scrutinizing is accurate.

If we really wanted to know whether acetaminophen has an important effect on your risk of getting a disease, we would recruit a very large group of people - say ten thousand or so - who would agree to be randomized to one of two groups. One group would be instructed to take acetaminophen whenever they wanted, for whatever they wanted, and keep a record of their use of that medicine.  The other group would be instructed never to use acetaminophen at all.  We would hope that everyone in the two groups would follow their instructions and that they would stay in touch with us so we could keep tabs on their state of health and see if they developed diseases at different rates over the next ten or twenty years. Even if you've never given a thought to such medical research, it is surely obvious to you that doing such a study would take a lot of work - and a very long time.
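For a sense of scale, here is a back-of-the-envelope sample-size calculation using the standard normal-approximation formula for comparing two proportions.  Every rate in it is an assumption I invented for illustration; the point is that rare outcomes demand huge trials:

    # Back-of-the-envelope trial size.  All rates are assumptions chosen for
    # illustration; blood cancers are rare, which is what drives the numbers up.
    from math import ceil

    z_alpha = 1.96   # two-sided 5% significance level
    z_beta  = 0.84   # 80% power

    p_control = 0.005     # assumed long-term risk without acetaminophen
    p_treated = 0.0075    # assumed risk if the drug raised it by half

    variance  = p_treated * (1 - p_treated) + p_control * (1 - p_control)
    n_per_arm = ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_treated - p_control) ** 2)

    print(f"About {n_per_arm:,} people per arm ({2 * n_per_arm:,} in all), "
          "followed for a decade or more.")
    # Roughly 15,600 per arm - and that's before anyone drops out.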

But that's what it takes to demonstrate an association that, in all probability, truly represents a cause-and-effect relationship.

We could look at a population of subjects and ask them about whether they ride in pick-up trucks and whether they watch rodeo events on ESPN2 or Versus.  I'm guessing there would be an association.  And I think you know that doesn't prove riding in pick-up trucks causes people to watch rodeo events on TV.

I hope the people who write for Reuters Health know this, too.  I really think they do.  But why let that stand in the way of an eye-catching headline?

Saturday, May 7, 2011

Journal Club: Don't Believe Everything You Read

We all know - or at least I hope we do - that we should not believe something just because it appears in print.  The Internet has been described as "The Wild West" because it has essentially no rules.  It is not a source of information.  Rather, it is many sources of information, some excellent, some quite useless, most in between.  The Web site snopes.com exists to help us separate Internet fact from fiction.

The more discerning among us know we should apply an ample measure of skepticism to more traditional sources of information, too.  The fact that something appears in The New York Times or The Washington Post does not mean it can be taken as truth.  Those publications do have rules about what gets into print, and editorial oversight, but they are far from infallible.

This is very notably so when news outlets and other publications for the general reader - the "popular press" - run articles on specialized subjects.  Ask any physician about the accuracy of reporting on topics in medicine, and you will get an earful.

But what of the publications doctors read for the latest scientific evidence: medical journals?  There are four major English-language journals that publish studies of interest to a broad range of physicians, across specialties: the Journal of the American Medical Association (JAMA), the New England Journal of Medicine, The Lancet, and the British Medical Journal (now officially called simply BMJ).  Are these prestigious publications rock-solid sources of reliable information?  In a word: no.

Many readers are now aware of the debacle involving The Lancet, in which a 1998 study linking autism to vaccines for children was retracted last year.  The lead author of that study, Dr. Andrew Wakefield, was guilty of grossly unethical conduct as a medical researcher.  Wakefield's appalling actions are far beyond the scope of this entry, but I encourage you to look it up and read about it.

I like to think studies published in some journals are well worth reading.  The foremost journal in my specialty, Annals of Emergency Medicine, is an excellent example.  But of course I would see it that way.  Annals is the official journal of the American College of Emergency Physicians (ACEP), and I have two relevant (but unrelated to each other) sources of bias.  I review manuscripts for Annals, and I serve on the Board of Directors of ACEP.  The fact that Annals was ranked one of the 100 most influential scientific journals of the last 100 years by the Special Libraries Association means it is noteworthy, not that it is perfect.

Doctors read medical journals that are "peer reviewed."  (For Annals, and for another major emergency medicine journal, I am one of many peer reviewers.) That means manuscripts submitted for publication are assigned to an editor, who then seeks review and comment from members of a large panel of experts.  Each manuscript is typically reviewed by 3-5 such experts.  Most manuscripts are reviewed and rejected.  Some are returned to the authors with extensive recommendations for revision, following which they may be resubmitted for further consideration.  Only uncommonly are manuscripts accepted "as is" or with minor revisions.

Doesn't this process assure that any article that appears in a peer-reviewed journal offers reliable information that doctors can use to keep their medical practice up to date?  The peer-review process is valuable, but its "stamp of approval" does not mean the journal's subscribers can read an article, note the authors' conclusions about what their study proved, and take those conclusions to the bank.

So how do doctors learn to read medical journals and separate the good from the not-so-good?  Journal Club.

During residency, those years of specialty training after medical school, doctors have sessions, commonly monthly, during which articles from medical journals are reviewed and discussed.  At Journal Club, as these sessions are known, they learn from their faculty how to "dissect" a study and tell whether its findings are believable or important.  What question(s) did the investigators set out to answer?  Was their study well-designed to answer those questions?  Were their methods sound?  How about their statistical analysis of the results?  Is the article's "discussion" section a thoughtful and logical consideration of how the results should be interpreted?  Do the authors identify and describe any limitations that might affect the reliability of their study or the broad applicability of its findings?  Do they say they have discovered scientific truth, or do they admit further study is needed?  Do their data ultimately support their conclusions?

I have a confession to make: I live for Journal Club.  There is nothing in life - or at least in medicine - that gets my intellectual juices flowing like teaching residents how to take an article published in a peer-reviewed journal and discover that the peer reviewers were asleep at the switch, that the journal's editors should be ashamed of themselves for having let this one make it into print, that their own conclusion upon finishing their reading is that they have found a good bird cage liner - although, admittedly, the lead author's mother surely thought it was a fine study.  Occasionally, of course, we review a paper that is good science with robust conclusions that should influence the way we practice medicine.  It is important to be able to identify those studies, too.

But the good studies are in the minority, and the essential value of Journal Club is the opportunity to learn what not to believe - and why.

Residents, I hope you are paying attention.  Patients, if you didn't already know your doctor was busy during the hours not directly involved in seeing patients, add critical reading of the medical literature to the seemingly endless list of tasks.

Wednesday, May 4, 2011

On Queue

Do you remember what you were doing on May 25, 1977?

I do.  I was standing in line (or on queue, if you like the British expression, and I confess to being an Anglophile) waiting to buy tickets for the movie "Star Wars."

If you know me, you know this is a big deal.  I will stand on queue for very few things in life.  If a restaurant that doesn't take reservations says the wait for a table is more than 20-30 minutes, I'm gone.  I was only 19 when that movie was released, but this pattern of behavior was already well established.

Star Wars fans mark today as a day of celebration for what has become a series of films, thanks to the word play on the date: May the Fourth Be With You.  It brings back memories of that day and the excitement in the crowd of (mostly) college students waiting to buy tickets.

By the age of 19, I was already a critic of the cinema.  Each year when The Philadelphia Inquirer (known colloquially as "The Inky") released its list of the previous year's ten best films, I had seen them all and had considered opinions on whether the top ten list included any undeserving movies or had omitted any worthies.

The original "Star Wars" was a smashing success with critics, at the box office, and with the American Academy of Motion Picture Arts and Sciences.  With adjustment for inflation, it continues to rank highly (#4 according to Wikipedia) in all-time box office receipts.  It was nominated for eleven Academy Awards and won seven.  The critics raved about it, although I was, of course, the only critic who mattered.

Any film can be pigeonholed in a genre and judged in that context.  "Star Wars" belongs somewhere between science fiction and fantasy-adventure, two genres that are separated by no bright line.  There was not much in the fantasy-adventure genre of cinema back then, but there had been plenty of sci-fi films, and "Star Wars" wowed sci-fi fans with special effects the likes of which hadn't been seen before.  Today we take for granted visually stunning - and stunningly expensive - special effects that bring creative imagination to the big screen.  But in 1977 moviegoers were all in agreement that we had never seen anything that remotely approached this new production from George Lucas.

Reflecting upon that day nearly 34 years ago causes me to realize that, while there are numerous film categories I really enjoy, among them romantic comedies ("Sleepless in Seattle"), action movies (Clint, Arnold), classic dramas ("The African Queen"), and films that one can find only in art houses ("Harold and Maude," "Cousin, Cousine"), my love for fantasy-adventure clearly began with "Star Wars."  I think that is true for so many moviegoers.  The producers, directors, and actors of films in the Star Trek series, the Lord of the Rings trilogy, and Harry Potter should be grateful for the pioneering effort of 1977.  Not that the others could not have blazed a trail independently - goodness knows J.K. Rowling is an incomparable creative genius - but we owe much to the vision of George Lucas.

Monday, May 2, 2011

Wanted: Dead or Alive

That was the way George W. Bush described the official U.S. view of Osama bin Laden, mastermind of the terrorist attacks that sent airliners into the twin towers of the World Trade Center, the Pentagon, and a field in southwestern Pennsylvania on September 11, 2001, taking nearly three thousand lives on American soil.

More than seven years later, the Bush Administration drew to a close with that warrant unfulfilled.  But last evening there was good news: a special forces operation, carefully planned on the basis of accurate intelligence, ended in a firefight in which bin Laden was killed in Pakistan.

"Ten years after his attack on the world's most powerful nation, Osama bin Laden remains at large."

We will not read that sentence in The New York Times or The Washington Post on September 11, 2011.

"The world is a better place ... safer ... because of the death of Osama bin Laden."  So intoned President Obama, with an air of confidence and certainty.

But just what effect will this apparently momentous event have on the future of Islamic fundamentalist terrorist actions targeting the United States in particular and the Judaeo-Christian West in general?  As with so many other world events, including some very notable ones of the last decade, there are those prepared to make immediate pronouncements in answer to that question.  Others, among whom I include myself, prefer the longer view which, while far less certain, recognizes that accurate predictions are quite difficult when so many contributing factors are unknown.  Recent struggles toward democracy in the Arab Middle East are only the most obvious of these.

Intense hostility on the part of some in Islam against the West goes back at least as far as The Crusades.  It intensified in the twentieth century with the Balfour Declaration in 1917 and the creation of Israel as an independent Jewish state in 1948.  High-profile acts of terrorism hardly began with Osama bin Laden's vision. The Palestinian Black September organization's attack on the Olympic Village in Munich in 1972 was for me the most memorable.

It may be that Islam is truly a religion of peace and that it is only a tiny minority of Muslims who believe they are fulfilling the will of Allah by killing as many infidels as possible.  And it may be that the death of Osama bin Laden will have a dramatic and long-lasting effect on the scope and the reach of Islamic fundamentalist terrorism around the globe.  But I am a student of history, and I believe it is a fool's errand to try to predict how this aspect of the twenty-first century will unfold.