28. Ignoring a risk because of its
insignificance according to an expert
This ties into the previous point. If an
expert thinks a risk is insignificant, they may be wrong. The expert also needs
to quantify the precise degree of insignificance they have in mind, which they
are often unwilling or unable to do. For instance, suppose we determine that
the probability that a certain particle accelerator experiment destroys the
planet is one in a million. That sounds low, but what if the same experiment is
run a hundred times a day? That is about 36,500 runs a year, so after a single
year the probability of doom is already around one in thirty, and after a
couple of centuries it approaches unity. So "one in a million" may describe a
quite significant risk if the trial is repeated often enough.
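The compounding of a small per-trial risk over many independent trials can be sketched as follows; the per-run probability and the run rate are the hypothetical figures from the example above, not real accelerator data.

```python
def cumulative_risk(p_per_trial, n_trials):
    """Probability that at least one of n independent trials ends in catastrophe."""
    return 1 - (1 - p_per_trial) ** n_trials

p = 1e-6                    # hypothetical one-in-a-million risk per run
runs_per_year = 100 * 365   # a hundred runs a day, about 36,500 per year

print(f"after 1 year:    {cumulative_risk(p, runs_per_year):.3f}")
print(f"after 200 years: {cumulative_risk(p, runs_per_year * 200):.4f}")
```

Running this shows the one-year risk is already a few percent, and over a couple of centuries the cumulative probability climbs toward one, which is the whole point of the example.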
Aside from having a proper understanding
of insignificance, we also ought to account for the relationship between an
estimate of insignificance and the chance that the expert making the prediction
is in error. Say an expert claims an event has only a one in a billion chance,
but there is a 50% probability that their reasoning is completely wrong. In
that case, the real risk is dominated not by the estimate but by the chance of
error, and might be as high as one in two, or one in twenty, depending on how
large the risk is when the expert is wrong. The probability that an expert is
simply mistaken often swamps the estimate itself, unless there is a rigorous
statistical or empirical basis to confirm it. Even then, the empirical data may
mislead, or there may be an error in the statistical calculations, or a
mistaken prior.
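The adjustment described above can be sketched as a simple mixture: weight the expert's estimate by the chance they are right, and some fallback estimate by the chance they are completely wrong. The fallback value here is a hypothetical illustration, since the passage leaves it open.

```python
def effective_risk(p_claimed, p_expert_wrong, p_if_wrong):
    """Blend the expert's claimed probability with a fallback estimate,
    weighted by the chance the expert's reasoning is completely wrong.
    p_if_wrong is an assumed value for the risk if their model is broken."""
    return (1 - p_expert_wrong) * p_claimed + p_expert_wrong * p_if_wrong

# Expert claims one in a billion, but there is a 50% chance they are wrong.
# Suppose (hypothetically) the risk would be 1 in 10 if their model fails.
print(f"{effective_risk(1e-9, 0.5, 0.1):.3f}")
```

The result is on the order of one in twenty, nine orders of magnitude above the claimed figure: the error term, not the estimate, controls the answer.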