Calculated Risks: How to Know When Numbers Deceive You
by: Gerd Gigerenzer
First Sentence: During a routine medical visit at a Virginia hospital in the mid-1990s, Susan, a 26-year-old single mother, was screened for HIV.
Amazon.com
In the tradition of Innumeracy by John Allen Paulos, German scientist Gerd Gigerenzer offers his own take on numerical illiteracy. "In Western countries, most children learn to read and write, but even in adulthood, many people do not know how to think with numbers," he writes. "I focus on the most important form of innumeracy in everyday life, statistical innumeracy--that is, the inability to reason about uncertainties and risk." The author wisely uses concrete examples from the real world to make his points, and he shows the devastating impact of this problem. In one example, he describes a surgeon who advised many of his patients to accept prophylactic mastectomies in order to dodge breast cancer. In a two-year period, this doctor convinced 90 "high-risk" women without cancer to sacrifice their breasts "in a heroic exchange for the certainty of saving their lives and protecting their loved ones from suffering and loss." But Gigerenzer shows that the vast majority of these women (84 of them, to be exact) would not have developed breast cancer at all. If the doctor or his patients had a better understanding of probabilities, they might have chosen a different course. Fans of Innumeracy will enjoy Calculated Risks, as will anyone who appreciates a good puzzle over numbers. --John Miller
From Publishers Weekly
If a woman aged 40 to 50 has breast cancer, nine times out of 10 it will show up on a mammogram. On the other hand, nine out of 10 suspicious mammograms turn out not to be cancer. Confused? So are many people who seek certainty through numbers, says Gigerenzer, a statistician and behavioral scientist. His book is a successful attempt to help innumerates (those who don't understand statistics), offering case studies of people who desperately need to understand statistics, including those working in AIDS counseling, DNA fingerprinting and domestic violence cases. Gigerenzer deftly intersperses math lessons explaining concepts like frequency and risk in layperson's terms with real-life stories involving doctors and detectives. One of his main themes is that even well-meaning, statistically astute professionals may be unable to communicate concepts such as statistical risk to innumerates. (He tells the true story of a psychiatrist who prescribes Prozac to a patient and warns him about potential side effects, saying, "You have a 30 to 50 percent chance of developing a sexual problem." The patient worries that in anywhere from 30% to 50% of all his sexual encounters, he is going to have performance problems. But what the doctor really meant is that for every 10 people who take Prozac, three to five may experience sexual side effects, and many have no sexual side effects at all.) All innumerates (buyers, sellers, students, professors, doctors, patients, lawyers and their clients, politicians, voters, writers and readers) have something to learn from Gigerenzer's quirky yet understandable book.
Copyright 2002 Cahners Business Information, Inc.
From the New England Journal of Medicine, January 2, 2003
The father of modern science fiction, H.G. Wells, is reported to have predicted at the beginning of the 20th century that "statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write." Calculated Risks was motivated by a cognitive scientist's interest in why most people appear to be unable to reason about uncertainties and risk, a limitation Gigerenzer refers to as "statistical innumeracy." Physicians are often aware of their innumeracy. What they may be less aware of is how simple adjustments in the way in which numerical information is presented and the development of intuitively understandable illustrations can help to shift "innumeracy into insight."
One barrier to understanding numbers is our seeming inability to live with uncertainty. Using the familiar examples of screening for breast cancer, testing for the human immunodeficiency virus, and DNA fingerprinting, Gigerenzer points out our nearly universal tendency to create an "illusion of certainty." He describes three distinct forms of innumeracy, which he refers to as ignorance of risk (in which a person does not know, even approximately, how large a personally or professionally relevant risk is), miscommunication of risk (in which a person knows the risks but does not know how to communicate them effectively), and clouded thinking (in which a person knows the risks but draws incorrect inferences from the relevant statistical facts). For example, physicians often know the performance characteristics of a diagnostic test (e.g., mammography) and the prevalence of a disease (e.g., breast cancer), but they may not know how to infer from this information the likelihood that the disease is present in a patient with a positive test result (e.g., the risk of breast cancer in a woman with an abnormal mammogram).
For each of the three distinct forms of innumeracy, there is a tool to facilitate improved thinking. Most of the book focuses on the presentation of "mind tools" that are easy to learn, remember, and apply in the effort to overcome innumeracy. These tools focus on ways to overcome the illusion of certainty, devices for communicating risk intelligibly, and the use of natural frequencies for drawing inferences from statistical information.
An important consequence of innumeracy is that miscommunication of risk is often the rule rather than the exception. Three major types of risk that invite miscommunication are single-event probabilities, relative risks, and conditional probabilities. Unfortunately, all of these are standard ways to communicate information. Single-event probabilities can lead to miscommunication because people tend to fill in different reference classes. This type of miscommunication happens frequently with mundane statements such as those made in weather reports: hearing that "there is a 30 percent chance that it will rain tomorrow," some people think that it will rain 30 percent of the time, others that it will rain in 30 percent of the area, and still others that it will rain on 30 percent of the days that are like tomorrow. Although the third option is the intended message, approximately two thirds of the people will interpret this statement incorrectly.
One of the most common means of describing clinical benefits in the world of medicine and public health is the relative risk reduction. Since relative risks are larger numbers than absolute risks, results presented in this manner appear to be greater than the same results presented as absolute risk reductions. Presenting benefits as absolute benefits or in terms of the number needed to treat to save one life are two simple examples of ways to make results more understandable.
Finally, information in the form of conditional probabilities is often misinterpreted. Even highly educated professionals have difficulty making key inferences on the basis of probabilities. The statement "If a woman has breast cancer, the probability that she will test positive on a screening mammogram is 90 percent" is often confused with the statement "If a woman tests positive on a screening mammogram, the probability that she has breast cancer is 90 percent." Creative representation is an indispensable part of solving problems and of using different formats to represent probabilistic information. For example, changing risk representations from probabilities to natural frequencies can be enormously useful. Probabilities -- especially conditional probabilities -- tend to impede human inference, whereas natural frequencies demand less computation, are far more similar to the ways in which we experience numerical information in our daily lives, and appear to help both experts and laypeople. The representation does part of the reasoning, taking care of the multiplication the mind would have to perform if provided only with probabilities.
Algebra, geometry, and calculus teach thinking in a world of certainty. Medical schools and law schools routinely teach some form of statistics but generally have not integrated formal education on reasoning on the basis of uncertain evidence into their curriculum. Gigerenzer, the director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin, Germany, calls for an educational campaign aimed at teaching schoolchildren, undergraduate and graduate students, ordinary citizens, and professionals how to deal with risk.
The topics he writes about are not new and have been the subject of a wealth of literature in recent years. The unique value of his book lies in the practical and simple tools it provides to help readers understand risks and communicate them effectively to others. These tools are easy to learn and should be mastered by every medical student, health care provider, and professional who is in the position of having to understand and explain to others choices involving risks and uncertainties.
Sue Goldie, M.D., M.P.H.
Copyright 2003 Massachusetts Medical Society. All rights reserved. The New England Journal of Medicine is a registered trademark of the MMS.
From Booklist
Gigerenzer effectively proves Mark Twain's adage about lies, damned lies, and statistics in this fascinating, frequently startling study of the ways numbers are manipulated and misrepresented. If Gigerenzer's reasoning is complex, his examples are all too familiar as he offers unsettling alternative perspectives on the reliability of AIDS tests, the usefulness of breast cancer screening, and the accuracy of DNA matches. He demonstrates that margins of error are often much greater than the general public is led to believe, and that many 100-percent claims are far from it. After shocking readers into such enlightenment, he shows how the same statistical rules apply on a more abstract level by using story problems to reveal the fundamental deceptiveness of odds. Throughout, his wit and humor transform what could have been a turgid academic exercise into an intriguing lesson from a master teacher. As a bonus, he even tells us how best to avoid winning the goat on Let's Make a Deal. If only all math courses were so practical.
Will Hickman
Copyright American Library Association. All rights reserved
About the Author
Gerd Gigerenzer is director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin, Germany. He has taught at several universities, including the University of Chicago and the University of Virginia, and has been a Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University.
Reviews:
Help for the Statistically Innumerate
This is a serious and worthwhile explanation of how probabilities are manipulated -- in the law, in medicine, and by advertisers. It covers a lot of the same ground as "Innumeracy" and "How to Lie with Statistics" but it is less jokey and more serious in import. The author's principal argument is that probabilities are more understandable if given in terms of natural frequencies. The material is often technical, but it is written for the educated layperson and the explanations are fully understandable with some effort. Some of the author's conclusions about the benefits of certain medical tests -- mammograms and prostate tests, for example -- may be controversial. If you are ever flustered by probabilities, this is a very good place to get some grounding in the subject.
Miscalculations or Misinterpretations?
This is perhaps the best book I have run across at simply explaining the statistics of risk and uncertainty. I have even used what the author calls the illusion of certainty in analyzing the highest and best use of real estate. This book shows how medical experts and criminologists can be misled, not so much by innumeracy, as by what might better be called an illusion of expertise. Experts in any field may find this book useful in view of the U.S. Supreme Court's Daubert Rule that expert courtroom testimony must follow the scientific method.
A couple of caveats are in order, however, and they are, shall we say, doozies. Gigerenzer states that there is ample evidence that smoking causes lung cancer. But he fails to consider why people from Asian and Pacific-Island cultures have some of the highest smoking rates in the world, but some of the lowest cancer rates. And why do longitudinal studies show that people from these same cultures have much higher rates of cancer once they migrate to modern countries? Is it diet, smoking, a combination of the two, or something else that "causes" cancer? Likewise, Gigerenzer states that there is strong evidence that secondhand smoke is harmful to health. But he fails to mention the cardinal rule of toxicology: the dose, or concentration of a substance, makes the poison, not the substance itself. It is only in modern, energy-efficient, airtight buildings that smoke can become sufficiently concentrated to be an irritant, let alone a perceived health hazard. Thus, it may not be secondhand smoke, but the environment of tight buildings, that is the source of the problem.
Thus, Gigerenzer fails to point out that all statistics and numbers must be actively interpreted and are relative in meaning to the interpreter. This involves a social filtering process not discussed in the book. Also, government may lend legitimacy to some health and crime statistics even when they may be bogus. As an aficionado of Gigerenzer's books, I hope he will write a sequel on the interpretation, misinterpretation, and social and political construction of statistics.
The truth about fingerprints, DNA, AIDS, legal drugs, and so much more
The book does not discuss statistical innumeracy from the IT perspective; it discusses innumeracy mainly in contemporary medicine, the justice system, and life in general.
Gerd describes four aspects of innumeracy as follows:
01) Illusion of certainty:
For example: Fingerprint and DNA testing.
02) Ignorance of relevant risks:
For example: "It is more likely that a young American male
knows baseball statistics than that his chances of dying on
a motorcycle trip is about 15 times higher than his chances
of dying on a car trip of the same distance."
03) Miscommunication of risks:
For example: One can communicate the chances that a test will actually detect a disease in various ways ... The most frequent way is in the form of a conditional probability: If a person has cancer, the probability that he/she will test positive on a screening is 90 percent. Many physicians confuse that statement with this one: If a person tests positive on a screening, the probability that he/she has cancer is 90 percent.
04) Drawing incorrect inferences from statistics:
For example: "Consider a newspaper article in which it is
reported that men with high cholesterol have a 50 percent
higher risk of heart attack. The figure of 50 percent
sounds frighting, put what does it mean? It means that out
of 100 fifty-year-old men without high cholesterol,
about 4 are expected to have a heart attack within ten years,
whereas among men with high cholesterol this number is 6. The
increase from 4 to 6 is the relative risk increase, that is,
50 percent. However. if one instead compares the number of
men in the two groups who are not expected to have heart
attacks in the next 10 years, the same increase in risk is
from 96 to 94, that is, about 2 percent (absolute risk). Now
the benefit of reducing one's cholesterol level no longer
looks so great."
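For readers who want to check the arithmetic in that quote, here is a minimal sketch; the 4-in-100 and 6-in-100 figures come from the passage above, and the variable names are purely illustrative:

    # Relative vs. absolute risk, using the heart-attack figures quoted above:
    # about 4 of 100 men without high cholesterol, and 6 of 100 with it,
    # are expected to have a heart attack within ten years.

    baseline = 4 / 100      # risk without high cholesterol
    exposed = 6 / 100       # risk with high cholesterol

    relative_increase = (exposed - baseline) / baseline   # 0.50, the "50 percent higher risk"
    absolute_increase = exposed - baseline                # 0.02, i.e. 2 more men per 100

    print(f"Relative risk increase: {relative_increase:.0%}")   # 50%
    print(f"Absolute risk increase: {absolute_increase:.0%}")   # 2%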
Far from writing a dry book on risk, uncertainty, and statistics, Gerd Gigerenzer is entertaining, provocative, irreverent, and a bit of a maverick.
" ... 1 out of every 90 Americans will lose his or her life in a motor vehicle accident by the age of 75. Most of them die in passenger car accidents."
" ... the terrorist attack on September 11. 2001, cost the lives of some 3,000 people. The subsequent decision of millions to drive rather than fly may have cost the lives of many more."
"... DNA ... match probability of 1 in 16 for a brother ... "
This book provides "tools for overcoming innumeracy that are easy to learn, apply, and remember."
This book illustrates two important concepts very well: Statistics confuse even intelligent people, and the meaning of "false negative" and "false positive" data, especially when reported as percentages, can be far from intuitive.
Why only three stars? Both of these ideas are thoroughly illustrated and then beaten to death by page 50 of this 300-page book. (You can get most of the information from reading one or two of the other reviews here on Amazon.)
The remainder of the book uses various medical examples to make the point that a percentage of a percentage may sound more significant than it is (or less significant than it is). As Gigerenzer illustrates, doing the arithmetic to determine the actual numbers of each case represented will untangle most misunderstandings. After about a dozen of these, though, only a reader with an interest in the specific examples will remain engaged.
The writing is clear, the examples are all good, and the book does amply illustrate the quotation cited in Mark Twain's Autobiography: "There are three kinds of lies: lies, damned lies and statistics."
How to interpret test results better than your Doc!
This is a very clearly written book. It demonstrates many numerical errors the press, the public, and experts make in interpreting the accuracy of medical screening tests (mammography, HIV tests, etc.) and figuring out the probability of an accused person being guilty.
At the foundation of the above confusions lies the interpretation of Bayes' rule. Take one example, from page 45, regarding breast cancer. Breast cancer affects 0.8% of women over 40. Mammography correctly identifies 90% of women who do have breast cancer and correctly clears 93% of women who don't. If you ask a doctor how likely it is that a woman with a positive test actually has cancer, the majority will tell you 90% or more. That is wrong. The author recommends using natural frequencies (instead of conditional probabilities) to accurately apply Bayes' rule. Thus, 8 out of every 1,000 women have breast cancer. Of these 8 women, 7 will have a positive mammogram (true positives). Of the remaining 992 women who don't have breast cancer, 70 will have a positive mammogram (false positives). So the chance of actually having cancer given a positive test is 7/(7+70), roughly 9-10%. Wow, that is pretty different from the 90% that most doctors believe!
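The same natural-frequency calculation can be written out as a short script. This is only an illustrative sketch using the figures quoted above (0.8% prevalence, 90% sensitivity, 93% specificity); the variable names are made up for the example:

    # Natural-frequency version of Bayes' rule for the mammography example.
    women = 1000
    with_cancer = round(women * 0.008)              # 8 women have breast cancer
    true_positives = round(with_cancer * 0.90)      # 7 of them test positive
    without_cancer = women - with_cancer            # 992 women do not have cancer
    false_positives = round(without_cancer * 0.07)  # about 69 (the review rounds to 70)

    ppv = true_positives / (true_positives + false_positives)
    print(f"Chance of cancer given a positive mammogram: {ppv:.0%}")  # roughly 9-10%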
What to do? In the case of mammography, if you take a second test that turns positive, the accuracy would jump to 57% (not that much better than flipping a coin). It is only when taking a third test that also turns positive that you can be reasonably certain (93% accuracy) that you have breast cancer. So, what doctors should say is that a positive test really does not mean anything. And, it is only after the third consecutive positive test that you can be over 90% certain that you have breast cancer. Yet, most doctors convey this level of accuracy after the very first test!
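The 57% and roughly 93% figures can be reproduced by feeding the result of one positive test back in as the starting probability for the next, assuming the repeat tests are independent. The sketch below uses the same assumed figures as above; the function name is purely illustrative:

    # Repeated screening: the posterior after one positive test becomes the
    # prior for the next (this assumes the tests are independent).
    def update(prior, sensitivity=0.90, false_positive_rate=0.07):
        """Probability of disease after one more positive test (Bayes' rule)."""
        p_positive = prior * sensitivity + (1 - prior) * false_positive_rate
        return prior * sensitivity / p_positive

    p = 0.008  # starting point: 0.8% prevalence among women over 40
    for test in range(1, 4):
        p = update(p)
        print(f"After positive test {test}: {p:.0%}")
    # Prints roughly 9%, 57%, and 94%; close to the figures cited in the review
    # (small differences come from rounding).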
What applies to breast cancer screening also applies to prostate cancer screening, HIV tests, and other medical tests. In each case, the medical profession acts as if the first positive test tells you with certainty whether you have the disease. As a rule of thumb, you should get at least a second test, and preferably a third, to increase the accuracy of the result.
The author comes up with many other counterintuitive concepts. They are all associated with the fact that events are far more uncertain than the public is led to believe. For instance, DNA testing does not prove much. Ten people can share the same DNA pattern.
Another counterintuitive concept involves risk reduction. Let's say you have a cancer that has a prevalence of 0.5% in the population (5 in 1,000). The press will invariably run promising headlines saying that a given treatment reduces mortality by 20%. But what does this really mean? It means that mortality will be reduced by 1 death (from 5 down to 4). The author states that the relative risk has decreased by 20%, but the absolute risk has decreased by only 1 in 1,000. He feels strongly that both risks should be conveyed to the public.
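As a rough sketch of that comparison, using the reviewer's 5-in-1,000 example (the number-needed-to-treat figure mentioned in the NEJM review above falls out of the same numbers):

    # Relative risk reduction vs. absolute risk reduction vs. number needed to treat,
    # using the example above: mortality drops from 5 to 4 per 1,000 treated.
    untreated = 5 / 1000
    treated = 4 / 1000

    rrr = (untreated - treated) / untreated   # 0.20, the "20% reduction" headline
    arr = untreated - treated                 # 0.001, i.e. 1 life saved per 1,000
    nnt = 1 / arr                             # people treated to prevent one death

    print(f"Relative risk reduction: {rrr:.0%}")   # 20%
    print(f"Absolute risk reduction: {arr:.1%}")   # 0.1%
    print(f"Number needed to treat:  {nnt:.0f}")   # 1000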
The author shows how health agencies and researchers express the benefits of treatments by citing the reduction in relative risk. This leads the public to grossly overstate the benefits of such treatments. The author further indicates how various health authorities use either relative risk or absolute risk to either maximize or minimize the public's interpretation of a health risk. But they rarely convey both, which the author argues is the only honest way to present the data.
If you are interested in this subject, I strongly recommend "The Psychology of Judgment and Decision Making" by Scott Plous. It is a fascinating book analyzing how we are less Cartesian than we think: a slew of human biases flaw our own judgment, and many of them involve other applications of Bayes' rule.
The author presents some important observations about calculated risks, probabilities, and statistical test inferences. He makes clear the necessity of understanding risks at the outset of any important decision. For instance, a physician must take into consideration "false positive" test results so that he/she does not over-react. An over-reaction could cause the physician to take unnecessary precautions that could do more to endanger the patient than help. In addition, the author cautions against fabrication of certainty or the use of statistics to prove a predetermined result. This book is useful in arriving at a realistic design for a statistical test or any other test from which an important scientific inference will be made.