Thursday, November 19, 2009

Breast Cancer Screening

I haven't seen the actual data, so I am not going to opine on whether women aged 40 to 50 should get mammograms or not. But just in the one day that this discussion has been going on, I have seen so many classic decision-making biases that I can't resist discussing them.

1. Comparing statistical evidence and anecdotal evidence. No matter what the debate is about, there is always a great story that illustrates the benefits of one side or the other. This is true whether your side has any merit or not. Anecdotes are a great political tactic, but they should never be used to set policy. And yet, I have heard so many stories of the "42-year-old woman named XXXX whose life was saved because she had a mammogram. Not only is she thankful for the screening, but so are her husband, teenage daughter, . . . ." Of course that will happen when millions of women are screened. But it is irrelevant to the policy decision.

2. Ignoring the costs of false positives. For every cancer that is found (a true positive), there are many false positives. These false positives are real women too. They go through the fear that they have cancer, the pain of a biopsy, the hassle of the procedure, and so on. It's not just about rationing care to lower costs. There are emotional costs as well. And there are far more false positives than there are true positives (see the next point).
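
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The prevalence, sensitivity, and false-positive rate are assumed round numbers chosen for illustration (not actual screening statistics), but they show how quickly false positives pile up relative to cancers found.

```python
# Illustrative only: the prevalence, sensitivity, and false-positive
# rate below are assumed round numbers, not real screening statistics.

cohort = 100_000        # women screened
prevalence = 0.008      # assumed fraction who actually have cancer
sensitivity = 0.85      # assumed chance the screen catches a real cancer
false_pos_rate = 0.08   # assumed chance a healthy woman gets a positive result

with_cancer = cohort * prevalence            # 800 women
without_cancer = cohort - with_cancer        # 99,200 women

true_positives = with_cancer * sensitivity           # ~680 cancers found
false_positives = without_cancer * false_pos_rate    # ~7,900 false alarms

print(f"True positives:  {true_positives:,.0f}")
print(f"False positives: {false_positives:,.0f}")
print(f"False alarms per cancer found: {false_positives / true_positives:.1f}")
```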

3. Ignoring base rates. I have also heard the testimonials of women who say, "Yes, I was afraid for a few days and had to undergo the pain and hassle of a biopsy. But it was worth it to find my cancer. So the cost-benefit analysis is clearly in favor of screening." But what this argument fails to consider is that a great many women go through the fear, pain, and hassle for every cancer that is found. And in many cases the cancer would have been found and treated even without the mammogram, so even some of the true positives deliver less benefit than they appear to.
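
The same kind of assumed numbers, run through Bayes' theorem, show how much the base rate drives the answer: even with a fairly sensitive test, the probability that a positive mammogram actually indicates cancer stays low, simply because so few of the women screened have cancer to begin with. Again, these figures are illustrative assumptions, not real screening data.

```python
# Illustrative Bayes' theorem calculation with assumed round numbers,
# not real screening statistics.

prevalence = 0.008      # P(cancer)
sensitivity = 0.85      # P(positive | cancer)
false_pos_rate = 0.08   # P(positive | no cancer)

# Total probability of a positive result.
p_positive = sensitivity * prevalence + false_pos_rate * (1 - prevalence)

# Positive predictive value: P(cancer | positive result).
ppv = sensitivity * prevalence / p_positive

print(f"P(cancer | positive mammogram) = {ppv:.1%}")  # roughly 8% with these numbers
```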

4. Validity bias. The people being quoted or interviewed vary tremendously in whether they are experts on the topic. For some reason, a cancer survivor is seen as an expert on breast cancer diagnosis or statistical analysis. Sorry, but having cancer does not make you an expert. Not even on the pain and suffering part, because each woman's experience is different, and your pain, while real, may not be typical.

5. Sample size bias. Also in these interviews, people toss in the results of studies. But if one study looked at thousands of women and another looked at dozens, they are not comparable. And yet, the discussion doesn't account for this.
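
As a rough illustration of why this matters, the sketch below compares the statistical uncertainty around the same observed rate (an assumed 10%) in a study of 30 women versus a study of 3,000, using the usual normal approximation for a proportion.

```python
# Illustrative only: the 10% observed rate and the sample sizes are
# assumed numbers, and the normal approximation is rough for small n.
import math

def margin_of_error(p, n):
    """Approximate 95% confidence half-width for an observed proportion."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

p_hat = 0.10  # same observed rate in both hypothetical studies
for n in (30, 3_000):
    print(f"n = {n:>5}: {p_hat:.0%} +/- {margin_of_error(p_hat, n):.1%}")
# n =    30: 10% +/- 10.7%   (the estimate is nearly useless)
# n =  3000: 10% +/- 1.1%
```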

6. Availability bias. It's much easier to think of the extreme cases (where a woman's life was saved by screening) than the more typical cases (false positives, minor cancers that would have been caught anyway, etc.). So the debate focuses on these salient stories instead of the real evidence.

7. Confirmation bias. Once a person decides which side they are on, they completely ignore all of the evidence to the contrary, even when it is being presented to them directly in a one-on-one discussion. It's amazing how good we are at this.

There are more too. But this is enough for today. Can we perhaps focus more on intelligent policy development instead of emotion-based policy development so we can actually create useful and effective policies? Anyone? Anyone?