David Katz, M.D.


Illumination vs. Irradiation: Informing Medical Decisions With Help From Your Inner Statistician

Posted: 08/22/11 09:07 AM ET

My colleagues at the Huffington Post recently did a fine job characterizing the trade-offs that bedevil decisions about the use of modern medical imaging. On the one hand, sophisticated imaging with CT scanners can inform treatment choices, and save lives. On the other, rates of imaging have skyrocketed in recent years, raising concerns about radiation exposure for patients, and suspicions among doctors that invention may be acting as the mother of necessity!

The challenge, of course, is finding the sweet spot between damned if you do, damned if you don't. Being overly cautious about medical testing -- imaging or otherwise -- could mean missing out on the very test you need to answer a critical question, or guide a crucial therapy. Being overly accepting of every application of modern medical technology might well mean it does you more harm than good -- making you glow in the dark, rather than illuminating the source of your troubles.

Up to a point, getting such decisions right requires trust in your doctor -- because they are decisions your doctor has been trained to make. Often that trust will be warranted -- and ideally it is in your case. Ideally, your doctor is not only highly educated, but equally intelligent and genuinely caring. With my many colleagues in mind, I can certainly say this is true much of the time. I can also say it isn't invariably true, and even the best of us have bad days. So I invoke my favorite Reaganism: Trust, but verify.

Verify you are getting the tests you need, and only the tests you need. Challenging such decisions is not an insult to your doctor. Simply, it is an affirmation of the fact that you, literally, have skin in the game: It's your body and your health on the line. Take nothing for granted.

In her Huffington Post column about diagnostic imaging, Emma Gray makes the following, reasonable suggestions. Before getting any type of scan, ask: How will this improve my care? Are there any alternative imaging exams that don't use radiation?

I like these tips, but I would like to go further and generalize them. Radiation is not the only hazard of medical testing -- any test can do harm of some kind. That's acceptable if, and only if, potential harm is much outweighed by potential benefit. To ensure that, always ask these questions before any test: Will the results of this test directly affect your decisions, or my options? Will this test provide a definitive answer, or is it preliminary to more tests? Is this test the safest way to get the information we need? Would you have this test if you were me?

If you can bring yourself to ask these questions about medical testing as a matter of routine, they should serve as a fairly good filter, letting only genuinely useful testing through. But with a little help from your inner statistician (yes, s/he's really in there!) you can do even better.

The goal of medical testing is to figure out what is going on (and then, what to do about it). That, in turn, is really dependent on establishing two things: What does this patient have and what doesn't this patient have? Testing is about confirming a diagnosis (ruling it in) and excluding all the rest (ruling them out). Ideally, it leads to ruling out everything so you can get that proverbial clean bill of health.

There are two, simple statistical concepts you should (and can!) master so that you can help guide testing toward ruling in what it is and ruling out what it isn't. The concepts are sensitivity and specificity.

In life, sensitivity is noticing and reacting to every little thing. It's not much different in medicine: It's the capacity of a test to detect a condition when it is really there. In the two-by-two table below, it is [a/(a+c)].

                     Disease present    Disease absent
    Test positive          a                  b
    Test negative          c                  d

The table summarizes the universe of diagnostic possibilities into four quadrants: disease is present and the test finds it (cell a); disease is absent, but the test says it's present (false positive, cell b); disease is present and the test fails to find it (false negative, cell c); disease is absent and the test says it is absent (cell d). Sensitivity is, in essence, the percentage of the time that disease is present (cell a plus cell c) that the test finds it (cell a); thus, [a/(a + c)].
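The arithmetic behind the table can be sketched in a few lines of code. This is an illustrative sketch with made-up counts, not anything from the column itself:

```python
# Sensitivity from the two-by-two table: of all patients who truly have
# the disease (a + c), what fraction does the test find (a)?

def sensitivity(a, c):
    """True positives divided by all who truly have disease: a / (a + c)."""
    return a / (a + c)

# Hypothetical counts: 90 true positives (cell a), 10 false negatives (cell c).
print(sensitivity(90, 10))  # 0.9: the test detects disease 90% of the time
```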

Here's the surprise: Although sensitivity is the measure of how reliably a test finds a condition that's actually there, it's the property a test needs to rule disease out! Here's why:

If a test is highly sensitive, it will almost always be positive when disease is truly there. Therefore, if a test is highly sensitive, it will almost never be negative when disease is truly there. A highly sensitive test will almost never be negative unless disease truly isn't there. And thus, a negative result on a highly sensitive test reliably rules out disease. The corollary, of course, is that a negative result from a test that is not highly sensitive -- whether or not it is highly specific -- does not reliably rule out disease!
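The chain of reasoning above can be checked with numbers. Here is a sketch using hypothetical counts (and note that the fraction of negatives that are truly disease-free also depends on how common the disease is in the group being tested):

```python
# Why a negative result on a highly sensitive test rules disease out:
# with 99% sensitivity, only 1% of truly diseased patients test negative,
# so the pool of negative results is almost entirely disease-free.
# Hypothetical cells: 100 patients with disease, 900 without.
a, b, c, d = 99, 90, 1, 810

sensitivity = a / (a + c)      # 0.99: very few false negatives
neg_pred_value = d / (c + d)   # 810/811: share of negatives truly disease-free
print(sensitivity, round(neg_pred_value, 3))
```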

Imagine the stunned expression on your doctor's face when s/he says: "Let's get this test just to make sure you don't have X ..." And you reply: "I trust, then, that this is a highly sensitive test for X?" I would love to be there when it happens!

On the flip side, specificity is the capacity of a test to exclude what truly isn't there. In the two-by-two table, that's [d/(b+d)]. The explanation is much as before, so I won't belabor it.

Again, it is somewhat counterintuitive, but it's a test that is good at ruling out what isn't there that is needed to make a diagnosis! The logic is much as before:

A highly specific test is almost always negative when disease is truly absent. A highly specific test will almost never be positive when disease is truly absent. A highly specific test will be positive almost exclusively when disease is actually present. And thus, a positive result on a highly specific test reliably rules a diagnosis in. Again, we have the logical corollary: A positive result from a test that is not highly specific does not rule a diagnosis in -- it merely suggests it. So before being treated for a particular condition, particularly if the treatment is apt to be unpleasant or dangerous, you would be well within your rights to ask: Was the testing this diagnosis is based on highly specific?
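The mirror-image arithmetic works the same way. Again an illustrative sketch with hypothetical counts (how convincing a positive result is also depends on how common the disease is in the tested group):

```python
# Why a positive result on a highly specific test rules a diagnosis in:
# with 99% specificity, only 1% of truly disease-free patients test positive,
# so most positive results come from patients who actually have the disease.
# Hypothetical cells: 100 patients with disease, 900 without.
a, b, c, d = 80, 9, 20, 891

specificity = d / (b + d)      # 891/900 = 0.99: very few false positives
pos_pred_value = a / (a + b)   # 80/89: share of positives who truly have disease
print(round(specificity, 2), round(pos_pred_value, 3))
```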

You are, I am confident, more than capable of mastering and using these simple, statistical principles. But I also recognize that some of you break into a cold sweat at the mere sight of anything that recalls high school algebra! For those in that group, here's your shortcut: SPin/SNout. Specificity to rule in, sensitivity to rule out.

There are more useful, simple statistical principles where these came from -- but they can wait for another day, and another column.

For now, we should recognize that medical testing used well should be illuminating. Used badly, it might simply be ... irradiating. Trust your doctor -- but on behalf of your skin, verify! Bring a few good questions -- and your inner statistician -- to your next doctor's visit to help ensure you stay in the sweet spot.

-fin

Dr. David L. Katz; www.davidkatzmd.com
www.turnthetidefoundation.org

 
