91. Problems with selection of experts

On each separate question related to global risk—biotechnology, nanotechnology, AI, nuclear war, and so on—we are compelled to rely on the opinions of the most competent experts in that area, so we need effective methods for selecting which experts are most trustworthy. The first criterion is usually the quality and quantity of their publications—citation index, publication ranking, recommendations from other scientists, web traffic from reputable sources, and so on.
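As a rough illustration of this first criterion, the bibliometric signals above could be folded into a single score. The sketch below is hypothetical: the field names, normalization caps, and weights are invented for the example and are not a validated ranking formula.

```python
from dataclasses import dataclass

@dataclass
class ExpertRecord:
    # Hypothetical bibliometric signals; names and scales are illustrative only.
    citation_index: float       # e.g. an h-index
    publication_rank: float     # 0..1, average venue quality of publications
    peer_recommendations: int   # endorsements from other scientists
    referral_traffic: float     # 0..1, normalized web traffic from reputable sources

def credibility_score(expert: ExpertRecord,
                      weights=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Weighted sum of normalized signals; the weights are arbitrary placeholders."""
    w_cit, w_rank, w_rec, w_web = weights
    return (w_cit * min(expert.citation_index / 50.0, 1.0)      # cap h-index at 50
            + w_rank * expert.publication_rank
            + w_rec * min(expert.peer_recommendations / 10.0, 1.0)
            + w_web * expert.referral_traffic)

print(credibility_score(ExpertRecord(35, 0.8, 6, 0.5)))  # 0.69
```

Any such formula inherits the weaknesses of its inputs; citation counts and web traffic measure visibility at least as much as competence, which is one reason the further criteria below are needed.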

Secondly, we can evaluate experts by their track record of predicting the future. An expert on technology who does not make predictions about the future, even qualified predictions made only a year or so in advance, is probably not a real expert. If their predictions fail, they may have a poor understanding of the subject area. For instance, nanotechnologists who predicted in the 1990s that a molecular assembler would be built “around 2016” have been proven mistaken, and must own up to that before they can be taken seriously.
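One way to make this track-record criterion concrete, assuming the expert's claims can be restated as probabilistic forecasts of dated events, is a proper scoring rule such as the Brier score. The forecasts below are invented purely for illustration.

```python
def brier_score(forecasts):
    """Mean squared error between predicted probabilities and observed outcomes.
    forecasts: list of (probability_assigned, outcome) pairs, outcome in {0, 1}.
    0.0 is perfect; a constant 0.5 'coin-flip' forecaster scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Invented example: two experts' one-year-ahead predictions and what actually happened.
expert_a = [(0.9, 1), (0.8, 1), (0.7, 0), (0.95, 1)]   # mostly well calibrated
expert_b = [(0.99, 0), (0.9, 0), (0.8, 1), (0.95, 0)]  # confident but usually wrong

print(brier_score(expert_a))  # ~0.14: the lower the score, the better the track record
print(brier_score(expert_b))  # ~0.68: worse than guessing, a sign of overconfidence
```

A scoring rule like this rewards calibration rather than boldness alone, which fits the point above: an expert whose confident dates (such as a molecular assembler “around 2016”) repeatedly fail should accumulate a visibly poor score.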

A third strategy is to simply not trust any expert and to always recheck everyone's calculations, either from first principles or by comparison with other experts' claims. Lastly, it is possible to select people based on their views pertaining to theories relevant to predicting the future of technology—whether they have an interest or belief in the technological Singularity, Hubbert's peak oil theory, a neoliberal model of the economy, or whatever. It is possible to say “an expert should not have any concrete beliefs,” but this is false. Anyone who has thought seriously about the future of technology must eventually make ideological commitments to certain patterns or systems, even if only their own; otherwise they have not thought about the future in detail. An “expert” who never goes out on a limb with a prediction is indistinguishable from a non-expert or a random guesser.