The D.I.Y. way of getting a probability estimate from your doctor

One frustrating thing about dealing with doctors is that they tend to be unwilling or unable to talk about probabilities. I run into this problem in particular when they’ve told me there is “a chance” of something, like a chance of a complication of a procedure, or a chance of transmitting an infection, or a chance of an illness lasting past some time threshold, and so on. Whenever I’ve pressed them to try to tell me approximately how much of a chance there is, they’ve told me something to the effect of, “It varies” or “I can’t say.” I sometimes tell them, look, I know you’re not going to have exact numbers for me, but I just want to know if we’re talking more like 50% or, you know, 1%? Still, they balk.

My interpretation is that this happens due to a combination of (1) people not having a good intuitive sense of how to estimate probabilities and (2) doctors not wanting to be held liable for making me a “promise” – perhaps they’re concerned that if they give me a low estimate and it happens anyway, then I’ll get angry or sue them or something.

So I wanted to share a useful tip from my friend, the mathematician who blogs at www.askamathematician.com, who was about to have his wisdom teeth removed and was trying unsuccessfully to get his surgeon to tell him the approximate risks of various possible complications from surgery. He discovered that you can actually get a percentage out of your doctor if you’re willing to just construct it yourself:

Friend: “I’ve heard that it’s possible to end up with permanent numbness in your mouth or lip after this surgery… what’s the chance of that happening?”

Surgeon: “It’s pretty low.”

Friend: “About how low? Are we talking, like five percent? Or only a fraction of one percent?”

Surgeon: “I really can’t say.”

Friend: “Okay, well… how many of these surgeries have you done?”

Surgeon: “About four thousand.”

Friend: “How many of your patients have had permanent numbness?”

Surgeon: “Two.”

Friend: “Ah, okay. So, about one twentieth of one percent.”

Surgeon: “I really can’t give you a percentage.”
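
For what it’s worth, the mathematician’s arithmetic is just 2 / 4000 = 0.05%. If you want to put rough error bars on a count that small, here’s a minimal sketch in Python; the flat Beta(1, 1) prior is purely an illustrative assumption, not something the surgeon or the mathematician used.

```python
from scipy.stats import beta

failures, surgeries = 2, 4000

# The point estimate the mathematician constructed: 2 / 4000.
print(f"point estimate: {failures / surgeries:.4%}")  # 0.0500%

# With a flat Beta(1, 1) prior, the posterior over the true rate is
# Beta(failures + 1, surgeries - failures + 1); its 95% credible interval
# gives a sense of how rough an estimate built from two events is.
posterior = beta(failures + 1, surgeries - failures + 1)
low, high = posterior.interval(0.95)
print(f"95% credible interval: {low:.3%} to {high:.3%}")
```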

20 Responses to The D.I.Y. way of getting a probability estimate from your doctor

  1. Brian Engler says:

    I agree. I’ve always found the quick-math-in-the-head thing to be a useful talent in life.

  2. Henrik says:

    I had the exact same surgery some weeks ago, with a very similar conversation:

    Surgeon: There’s some small risk of temporary or permanent nerve damage due to this procedure.

    Me: So what’s the probability of this permanent nerve damage?

    Surgeon: Oh, very small.

    Me: Well, if you had to make an estimate?

    Surgeon: (after some prolonged stalling, looking up numbers etc) One in 100 million.

    Me: That doesn’t sound right…

    Eventually, I had the procedure done, but with another surgeon, and I asked again. The second time I got an estimate of 1 in 10,000, though the second surgeon wasn’t sure whether that was the odds for temporary or permanent nerve damage.

    • alex says:

      Henrik, your experience is very reminiscent of the troubles Feynman had getting straight answers from NASA on failure probabilities when he was investigating the Challenger disaster. The following passage is cut-and-pasted from Further Adventures of a Curious Character:

      “All right,” I said. “Here’s a piece of paper each. Please write on your paper the answer to this question: what do you think is the probability that a flight would be uncompleted due to a failure in this engine?”

      They write down their answers and hand in their papers. One guy wrote “99–44/100% pure” (which was the slogan of Ivory Soap at the time), meaning about 1 in 200. Another guy wrote something very technical and highly quantitative in the standard statistical way, carefully defining everything, that I had to translate — which also meant about 1 in 200. The third guy wrote, simply, “1 in 300.”

      Mr. Lovingood’s paper, however, said,
      Cannot quantify. Reliability is judged from:
      • past experience
      • quality control in manufacturing
      • engineering judgment

      “Well,” I said, “I’ve got four answers, and one of them weaseled.” I turned to Mr. Lovingood: “I think you weaseled.”

      “I don’t think I weaseled.”

      “You didn’t tell me what your confidence was, sir; you told me how you determined it. What I want to know is: after you determined it, what was it?”

      He says, “100 percent” — the engineers’ jaws drop, my jaw drops; I look at him, everybody looks at him — “uh, uh, minus epsilon!”

      So I say, “Well, yes; that’s fine. Now, the only problem is, WHAT IS EPSILON?”
      He says, “10 to the -5.” It was the same number that Mr. Ullian had told us about: 1 in 100,000.
      I showed Mr. Lovingood the other answers and said, “You’ll be interested to know that there is a difference between engineers and management here — a factor of more than 300.”
      He says, “Sir, I’ll be glad to send you the document that contains this estimate, so you can understand it.”*

      Later, Mr. Lovingood sent me that report… Just about every nut and bolt was in there: “The chance that a HPHTP pipe will burst is 10 to the -7.” You can’t estimate things like that; a probability of 1 in 10,000,000 is almost impossible to estimate. It was clear that the numbers for each part of the engine were chosen so that when you add everything together you get 1 in 100,000.

    • Max says:

      This illustrates that doctors are NOT concerned about giving an estimate that’s too low. If they were, they’d give an upper bound of about 1%, not the lower bound.

  3. Max says:

    Yup, I had that conversation with an oral surgeon. His line was that a surgeon who says he never caused permanent nerve damage either hasn’t removed many wisdom teeth or is lying. He didn’t give a number though. But he was quick to estimate a greater than 50% chance of developing serious problems if the wisdom teeth are NOT removed. To determine that, he’d have to know how many people never developed problems and never needed to see him, but his experience is with patients who did develop problems, so there’s a selection bias.

    And watch out for the ecological fallacy. An individual case may be very different from the average. The closer the tooth is to the nerve, the greater the chance of nerve damage.

  4. jonathan Roberts says:

    I’m a surgeon. I try to be honest. The reality is we DON’T KNOW the real risk. Complications are painful for surgeons’ egos, so we are definitely biased when we look at our own results. The risks seem higher to surgeons who have just had a complication, have recently been sued, or are depressed. On the other hand, most surgeons see their results through rose-colored glasses & tend to forget or excuse complications.

    The medical literature is not much help. Studies published by individual surgeons or single institutions invariably have fewer complications – of course one wouldn’t publish unless one had stellar results. Large databases are usually incomplete, as surgeons typically don’t self-report complications, so studies using that information typically underestimate risk too. If you use a surrogate (such as counting a prescription for an antibiotic in the postoperative period as an infection), complication rates appear higher, but this is hardly exact. Further, how do you define a given complication? Wounds are frequently slightly red without an infection, but we will often prescribe an antibiotic to soothe an alarmed patient and “to be on the safe side.”

    Temporary nerve weakness after carotid endarterectomy is a good example. These are not uncommon. After that operation, I would always examine my patients for a lip droop, tongue deviation, and hoarseness (marginal mandibular, laryngeal & glossopharyngeal nerve weakness respectively), dutifully record the results in the chart, and sort of apologize to the patient, which made them very anxious.

    My partner’s patients often had these findings too, but no mention was made in the chart & patients were happily oblivious. When I confronted him, my partner said I was insane to point out minor complications, creating a lack of confidence in patients & referring physicians. He explained, correctly, that even primary physicians don’t know what to expect & make referrals based on superficial attributes such as the surgeon’s apparent confidence.

    Complete transparency can’t be obtained unless you have completely independent, medically trained observers examining each patient, reporting complications & compiling data. Good luck financing that!

  5. Ray Greek says:

    I agree with Jonathan Roberts and would like to add a few things.

    I am an anesthesiologist and have practiced in academia and in private practice. One question I was frequently asked was: “What is my risk of dying from the anesthesia?” The short answer is: “Unknown.” Retrospective studies have been performed but all suffer flaws and none really answers the question. But upon closer examination, the question itself is not a very good one despite sounding exceedingly reasonable.

    The patients I cared for in academia were, for the most part, sicker than the ones in private practice and were undergoing operations associated with more risk. Hence, the risk of dying from the surgery and/or anesthesia was significantly higher. But even comparing apples to apples (the same operation in both settings), the sicker patients I saw in academia were at an increased risk of complications from the usual surgeries secondary to their overall health. Risk cannot be evaluated based solely on the operation. It must include other factors such as the patient’s overall health, the skill and experience of the surgeon, the presence of abnormal anatomy or physiology unknown at the time of the operation, the competence of the hospital personnel or of the personnel in whatever other setting the procedure is being performed, and so on. This complicates even an honest effort to answer the risk question.

    Moreover, assessing the independent risk from the anesthesia as opposed to the surgery is very difficult.

    Further, most studies evaluating risk (or anything else) are performed in academic settings. So even if studies were available, they would most likely be skewed because of the population being studied. Not always, but the studies in general would be questioned simply because of that fact. So just because a study concludes that the risk of complication P is Q, that does not necessarily translate to your doctor in your hospital.

    But the studies are mostly not available. There are numerous reasons for this, but the bottom line is that such studies simply have not been performed, or were not performed on a large enough or diverse enough population, or suffer from myriad other problems that limit them. So, when a patient asks a very reasonable question about risk, the physician probably does not know the answer because the answer is not known. There are exceptions, of course: some procedures have been well studied, and some physicians, mainly surgeons, do keep very good records and know their complication rates, at least for the big, major complications.

    Another issue is the fact that even when statistics are known and the physician explains them, most patients do not understand them. I would just gratuitously add here that in my experience the worst patients for understanding such things came to me from academia—people at the top of the educational food chain. Just because a person has a doctorate in whatever (science or humanities) does not mean he can understand and/or accept the facts when he is the patient. Human nature plays a big role in all this but is not usually discussed. I feel for patients who are asking what I would also be asking were our roles reversed. But while part of the answer to this problem lies in better science and critical thinking education, another part lies in human nature, and that aspect is unlikely to change.

    Then we have the issues Roberts raised: self-interest, rose-colored glasses, and, perhaps more important, the fact that many of these complications must be self-reported. Unless society demands more accountability in the form of risk assessment, and actually allocates money for it, all of us will continue to be frustrated with the answers.

    Finally, even if a procedure has been well studied and has a reasonably known risk for complication X, if YOU get complication X, the complication for you is 100%. THAT is what most people care about. Many physicians have been sued for poor outcomes that were well within the statistical expectations, were not due to poor performance of the surgeon, surgical team, anesthesiologist, etc., and that had been well explained to the patient and family. So even if your surgeon keeps excellent records for operation X and tells you that for complication Y the percentage is Z, you must still put that in perspective alongside a lot of other things. After doing all that, everyone needs to accept the fact that outcomes (and life in general) are not perfect, and a bad outcome does not mean someone screwed up and therefore should be sued.

    All of the above goes into answering the risk question.

    • Max says:

      Hey, aren’t you the SGU guest who argued that animal models have a low positive predictive value (PPV)? I was listening to that interview and yelling that what matters is the Bayes factor, not the PPV. In other words, if testing on animals increases the odds from 1% to 10% (a Bayes factor of 10), then the testing is useful.
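
      To make the PPV-versus-Bayes-factor distinction concrete, here is a minimal sketch in Python with made-up sensitivity and specificity numbers chosen purely so the likelihood ratio comes out near 10; nothing here comes from the interview itself.

```python
def update(prior_prob, sensitivity, specificity):
    """Bayes-update a prior probability after a positive test result."""
    prior_odds = prior_prob / (1 - prior_prob)
    # The Bayes factor for a positive result is the likelihood ratio.
    bayes_factor = sensitivity / (1 - specificity)
    posterior_odds = prior_odds * bayes_factor
    return bayes_factor, posterior_odds / (1 + posterior_odds)

# Made-up numbers: a 1% prior and a test whose likelihood ratio is ~10.
bf, ppv = update(prior_prob=0.01, sensitivity=0.90, specificity=0.91)
print(f"Bayes factor: {bf:.1f}")      # ~10
print(f"posterior (PPV): {ppv:.1%}")  # ~9%: the PPV stays low even though
                                      # the test shifted the odds tenfold
```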

      • Ray Greek says:

        Hi Max
        Thanks for remembering me.
        The difference between PPV/NPV and Bayes, as they apply to this discussion, is largely about whether you are doing basic research or applied research. If you are trying to (or claiming to) predict human response, then how close you get is important (is this practice or endeavour or modality “predictive”?), whereas if you are merely trying to get closer to the right answer, such high probabilities are not required. Medicine in general requires high PPVs and NPVs, as does the scientific definition of prediction; otherwise astrology etc. could, in theory, qualify as predictive (if it could get around the falsifiability problem). Animals are clearly used in drug testing, for example, to predict human response in the PPV and NPV sense of the word.
        If you want to discuss further, I suggest you read Are animal models predictive for humans? at http://www.peh-med.com/content/pdf/1747-5341-4-2.pdf and contact me through the website http://www.AFMA-curedisease.org.
        Thanks!

    • Julia Galef says:

      Thanks for weighing in, Mr. Greek — this is all very well put. The point about patients misinterpreting statistics is especially well taken. I’ve noticed that phenomenon in many other contexts — like when people are told that the mean of one population is a little higher than the mean of another population, they tend to behave as if ALL the members of one population are higher than ALL the members of the other population.

      But despite the fact that probability estimates would be flawed in all the ways you describe, it still feels like having those flawed estimates couldn’t possibly make you worse off than having NO estimate at all. You have to make a decision about what procedures you want to undergo, as a patient, so it seems like flawed information is inevitably better than no information at all, right?

      • Ray Greek says:

        The question of whether flawed information is better than no information is a topic for the Rationally Speaking podcast. I would argue that flawed information is actually worse but the entire topic lends itself to digressions to the level of metaphysics very quickly IMO.

        I weigh in on these discussions because although I was raised in a very uneducated environment, the people in that environment were good and honest and wanted answers to questions like: “What is my risk?” (The question is part of the broader issue of: “How do I choose a doctor?”) So, I empathize and try to “help out.” That having been said, this particular honest question intersects with areas where honesty is not highly valued, for example: how will the answer to the risk question be interpreted in a court of law? It also raises the question of what a 1% risk means. Many patients interpret that as meaning that if the complication occurs it will only be 1% as bad as normal. Trust me, there is no end of possible misinterpretations of this, and lawyers will put forth all of them even when the patient fully understood the conversation about risk and consent. (Your example of “ALL” is germane.) So while I want to be fair to patients and give them information they can use, I also want to be fair to the doctors, and doing both is problematic for reasons that may only peripherally involve the patient and/or doctor.

        I no longer practice anesthesiology and was never sued when I was in practice so I am in a somewhat reasonable position to weigh in on these things—no current financial interest but no ax to grind either. Still, the issue is complicated. I suggest three things.

        1. Each doctor should be required to keep track of complication rates for routine procedures: colonoscopies, intubations, appendectomies, wisdom teeth removal and so forth. But! This will require time and other resources, so the doctor’s compensation should be increased to reflect this. (This will not happen in today’s environment, where quality of care takes a subordinate role to cost containment.) I would also allow the doctor to break these complications down by categories such as patients with diabetes and patients without diabetes. (Diabetics have a higher rate of some complications than nondiabetics, so we should allow the doctor and patient to compare apples to apples.)

        2. The complication records should be subject to review by outside auditors, preferably other doctors in the same specialty as the doctor in question. Again, this will not happen because of the money involved.

        3. Tort reform. (Good luck with that one!)

        I realize this will do absolutely nothing to help patients who go to the doctor tomorrow but this is not a simple problem and hence does not lend itself to a quick fix.

        Finally, allow me to explain how I did this “back in the day.” I told patients their risk fell into one of the following three categories.

        1. I would not want to do this on my vacation, but I would not lose sleep over it either.

        2. Real people do get this complication, and while it is rare, if you are the kind of person who does not want to buy a car that is not perfect in every way and guaranteed for ten years, you might want to reconsider all this.

        3. This operation / anesthetic is risky. I can almost guarantee that you will suffer some complication, although I cannot tell you which one. That is simply the nature of the operation and your illness. The situation you find yourself in is a serious one.

        That seemed to work out well for most patients. Most were happy with it and I thought that most understood it. Sometimes prose works better than numbers.

        Thanks for bringing up the topic!! Society needs to discuss this issue more often and more fully.

      • Max says:

        Ideally, if you take all the cases where an expert says there’s a 1% chance of something occurring, it should occur in 1% of those cases.
        It’s a little ridiculous that people can find more info about the car they’re buying than about their doctor’s record. Heck, most patients don’t even know their anesthesiologist.
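
        That property is usually called calibration. Here is a minimal sketch of how one could check it from a record of quoted probabilities and outcomes; the record below is made up purely for illustration.

```python
from collections import defaultdict

# Made-up record of (quoted probability, did it happen?) pairs.
forecasts = [(0.01, False), (0.01, False), (0.01, True),
             (0.10, False), (0.10, True), (0.50, True)]

# Group outcomes by the probability that was quoted.
by_quote = defaultdict(list)
for prob, happened in forecasts:
    by_quote[prob].append(happened)

# A calibrated forecaster's quoted probability should roughly match the
# observed frequency within each group, given enough cases.
for prob, outcomes in sorted(by_quote.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"quoted {prob:.0%}: happened {rate:.0%} of the time "
          f"({len(outcomes)} cases)")
```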

      • Ray Greek says:

        I agree, Max! The following, which I just read on MSNBC, is IMO related to this discussion.
        http://today.msnbc.msn.com/id/42829175/ns/today-today_health/
        Bringing issues like this to the mainstream is important.

  6. Paul says:

    Back in college I was in a class on orchestration (music), and asked the professor about a parallel fifth progression in one of the students’ arrangements – whether it would have been acceptable back in the Baroque era. The professor essentially responded that she couldn’t say unless I brought her a Baroque piece, in which case we could analyze it together. The discussion unfortunately spiraled the wrong way and got somewhat heated. She couldn’t understand why I wasn’t happy with the answer.

    This may not sound related to getting probability estimates from doctors, but I definitely think it is. My problem with the professor’s answer at the time was that, although technically correct, her answer did not help me.

    So yes, it is very difficult to construct a very good probability estimate, but when a doctor says that he can’t give one, it doesn’t help his patient at all, even though this is probably the most technically correct answer he could give.

    It’s unfortunate that the very professionalism and knowledge of professors and doctors can sometimes turn around and inhibit them from being helpful. It might be good for them to consider giving up some of the precision and generality, depending on the situation, so that they can give the not-so-correct but much-more-helpful response.

    Or the students/patients could change their questions so that they might get more informative answers, like the Mathematician did. Though I think he could’ve stopped after getting the two numbers, without reconstructing the percentage, to accommodate his surgeon. After all, the ~1/20 of a percent number is just the track record of one surgeon, not a general probability of the complication happening.

    • Ray Greek says:

      As a physician, I would be happy to comply with your suggestion Paul! Now get the lawyers to go along with it.

      • Paul says:

        Yes, there must be a way to do what I suggest, while getting lawyers to help with the “how” so that one can do it without getting into trouble.

    • Julia Galef says:

      Good points, Paul — but isn’t the track record of this particular surgeon a better guide to my friend’s chances of complications than the overall rate of complications? Given that he’s going to have the procedure done by this surgeon, after all.

  7. JT Eberhard says:

    Love it. Will use.

    JT

  8. James says:

    Rough estimate, there may be 10,000 other surgeons who’ve averaged 4,000 surgeries each. Without any other information about what “a chance of permanent numbness” means, it’s reasonable to start with equal probabilities – so 50% in this case – and update them as data become available. Add in this surgeon’s data and we have 20,000,002 failures out of 40,004,000 surgeries, or a 49.995% chance of permanent numbness. (I believe Bayes’ rule would lead to the same result, but I always screw it up…) At this point, the surgeon will hopefully try to provide a better estimate if possible.

    That was sort of tongue-in-cheek, but if nobody is keeping tabs and publishing summaries of this stuff, it’s almost fair to say each surgery is a flip of the coin – or just go by the surgeon’s record if there is enough data. 2 failures out of 4,000 would be essentially impossible if the 50% prior had been realistic, but what about less-common procedures where they’ve had 1 failure in 8 attempts?
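
    For what it’s worth, here is a minimal sketch in Python of that update done two ways: first pooling the data with the 50% prior as described above, then using a much weaker flat prior, Beta(1, 1), which amounts to adding just one imagined failure and one imagined success. The counts are the ones from the comment; the flat-prior framing is the standard textbook one, not something James specified.

```python
# James's pooling: treat the 50% prior as 40,000,000 imagined surgeries
# (10,000 surgeons x 4,000 each, half of them "failures"), then add in
# this surgeon's 2 failures out of 4,000.
pooled = (20_000_000 + 2) / (40_000_000 + 4_000)
print(f"pooled estimate: {pooled:.3%}")  # 49.995%, the prior swamps the data

# A flat Beta(1, 1) prior adds only one imagined failure and one imagined
# success, so the posterior mean is (failures + 1) / (attempts + 2) and the
# observed record dominates as soon as there is much data.
for failures, attempts in [(2, 4_000), (1, 8)]:
    posterior_mean = (failures + 1) / (attempts + 2)
    print(f"{failures} in {attempts}: posterior mean {posterior_mean:.2%}")
# 2 in 4,000 gives ~0.07%; 1 in 8 gives 20%, and with so few attempts that
# estimate is still extremely uncertain.
```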

    • Max says:

      That doesn’t look right. The Bayesian approach would start with alternate prior hypotheses for the surgeons. Like, a uniform distribution says that surgeons are all over the map, so a surgeon with a 0.1% rate of failure is as likely as a surgeon with a 99.9% rate of failure, whereas a normal distribution says that surgeons cluster around some rate of failure with rare outliers. Then, if the data for a dozen surgeons looks normally distributed, you start to favor that hypothesis.
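
      Here is a minimal sketch of that kind of comparison in Python, using a Beta distribution in place of a normal since failure rates live between 0 and 1; the dozen surgeons’ records and both priors are made-up assumptions, purely for illustration.

```python
from math import lgamma, exp

def log_beta_fn(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(k, n, a, b):
    """Log marginal likelihood of k failures in n surgeries when the
    surgeon's failure rate is drawn from a Beta(a, b) prior (the binomial
    coefficient is omitted because it cancels in the comparison)."""
    return log_beta_fn(k + a, n - k + b) - log_beta_fn(a, b)

# Made-up records for a dozen surgeons: (failures, surgeries).
records = [(2, 4000), (0, 1500), (1, 2200), (3, 5000), (0, 900), (2, 3100),
           (1, 1800), (0, 2500), (4, 6000), (1, 1200), (0, 700), (2, 2800)]

# Hypothesis 1: surgeons are all over the map -> uniform prior, Beta(1, 1).
# Hypothesis 2: surgeons cluster around a low rate -> Beta(1, 999),
#               whose mean is 0.1% (an illustrative choice).
log_spread = sum(log_marginal(k, n, 1, 1) for k, n in records)
log_cluster = sum(log_marginal(k, n, 1, 999) for k, n in records)

# The Bayes factor says how strongly the pooled data favor "clustered".
print(f"Bayes factor (clustered vs. spread out): "
      f"{exp(log_cluster - log_spread):.3g}")
```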
