The burden of proof is on a designer to prove that a system is safe,
not on the users to prove that a catastrophe is likely
In creating safe systems, there are two kinds of reasoning: first, trying to prove that a certain system is safe; second, arguing that the system is susceptible to certain concrete dangers. These modes of reasoning are not logically equivalent. Denying the general statement that something, say the Space Shuttle, is safe requires only one counterexample. The Challenger disaster gave us a vivid demonstration that a single failed O-ring seal was enough to cause the craft to break apart shortly after launch, at tragic cost of life. Refuting a particular criticism of safety, however (say, demonstrating that the Space Shuttle could sustain micrometeorite impacts in orbit), does not constitute a proof of the general claim that the Space Shuttle is safe. There may be a thousand other threats, like the failed O-ring seal, any one of which could cause the craft to break apart. Yet our minds have a tendency to treat a comprehensive refutation of all the threats we can think of at the time as sufficient grounds to declare a piece of equipment "safe". It may very well remain unsafe. Only through many years of repeated use and in-depth analysis can we verify with high confidence whether a system is safe. Empiricism is important here.
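The logical asymmetry can be made concrete with a toy sketch. Everything here is hypothetical (the function, the temperature threshold, and the checklist are invented for illustration, not a model of the actual Shuttle): passing every scenario on a finite checklist does not establish the universal claim "the system is safe", while a single unchecked scenario can falsify it.

```python
def launch_is_safe(temperature_c: float, seal: str) -> bool:
    """Toy model of a system with a hidden failure mode:
    a cold-weather launch degrades the O-ring seal."""
    return not (seal == "o_ring" and temperature_c < 5.0)

# The finite set of threat scenarios the review team happened to consider.
checklist = [
    {"temperature_c": 25.0, "seal": "o_ring"},
    {"temperature_c": 18.0, "seal": "welded"},
    {"temperature_c": 30.0, "seal": "o_ring"},
]

# Every checked scenario passes, which tempts us to declare the design "safe"...
all_checks_pass = all(launch_is_safe(**c) for c in checklist)

# ...yet one scenario outside the checklist falsifies the universal claim.
counterexample_is_safe = launch_is_safe(temperature_c=2.0, seal="o_ring")

print(all_checks_pass)        # True
print(counterexample_is_safe) # False
```

The point of the sketch is that `all_checks_pass` quantifies only over the scenarios we wrote down, not over the space of all possible conditions.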
In analyzing the safety of a system, we must stay vigilant that errors in reasoning have not compromised our ability to survey the full range of possible disaster scenarios, and we must not grow complacent after refuting the few simple disaster scenarios we initially think of. Every complex technical system has conditions of maximum load, or design failure thresholds: if certain external or internal conditions are met, the system becomes unsafe. Eventual failure is guaranteed; the only question is what level of environmental conditions must be met for failure to occur. It is important to consider the full range of these possible scenarios and to check our reasoning processes for signs of bias.
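One way to picture a design failure threshold is a simple additive load model (the factor names and numbers below are made up for illustration, not drawn from any real design): each factor alone can sit comfortably inside the safe envelope, yet an unusual combination of moderate conditions still crosses the threshold.

```python
def becomes_unsafe(wind_load: float, thermal_stress: float,
                   vibration: float, threshold: float = 100.0) -> bool:
    """Toy model: the system fails when combined load exceeds a
    fixed (made-up) design threshold."""
    return wind_load + thermal_stress + vibration > threshold

# Each factor alone is well below the threshold...
print(becomes_unsafe(50.0, 0.0, 0.0))    # False
print(becomes_unsafe(0.0, 45.0, 0.0))    # False

# ...but a rare combination of otherwise moderate conditions exceeds it.
print(becomes_unsafe(50.0, 45.0, 10.0))  # True
```

Real load interactions are rarely this linear, but the structure of the argument is the same: a failure boundary always exists somewhere, and the analysis question is where it lies relative to the conditions the system will actually meet.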
From a scientific point of view, it is always easier to prove that an object does exist than to prove conclusively that it does not. The same asymmetry applies to safety: we can show that a design is safe under ordinary conditions, but we cannot conclusively prove that unusual conditions will never combine to threaten it. The closest we can come is an extensive service lifetime, and even then we can never be completely certain. The rigor we put toward evaluating the danger of a new construct should be proportional to the damage it can cause if it undergoes critical failure or goes out of control; large, important systems, such as nuclear reactors, can do a great deal of damage. The burden of proof is always on the designers to prove that a system is safe, not on critics to show that it is unsafe. Steven Kaas put it pithily: "When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs." In other words, we often make basic mistakes in our reasoning, and we should reevaluate disaster modes from time to time even when we think they are ruled out.
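The proportionality claim can be stated as a rough expected-loss heuristic (a sketch only, with illustrative numbers; this is not an engineering standard): scale review effort with the probability of critical failure times the damage if it occurs.

```python
def review_rigor(p_failure: float, damage: float) -> float:
    """Heuristic: review effort proportional to expected damage,
    i.e. failure probability times damage given failure."""
    return p_failure * damage

# Illustrative comparison: a consumer gadget vs. a nuclear reactor.
gadget  = review_rigor(p_failure=0.01,   damage=10.0)  # 0.1
reactor = review_rigor(p_failure=0.0001, damage=1e7)   # 1000.0
```

Even granting the reactor a failure probability a hundred times lower, its vastly larger damage term means it warrants orders of magnitude more scrutiny, which is the essay's point about where rigor belongs.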