By: Sergio R. Concha, Technical Editor
[Editor’s note: this opinion piece originally appeared in Vol. 20, No. 6 (Q3 1984) of Hazard Prevention (now Journal of System Safety). The text has not been modified except for the addition of a title, formatting changes, images, and hyperlinks.]
Technological advances in this century have changed the way humans live with unprecedented speed. The built-in “self-preservation” instinct of humans (our ability to detect and evade danger) is lagging behind the advances made by technology.
Television programs, newspaper articles, motion pictures, concerned citizens, politicians, scientists and self-appointed prophets debate incessantly whether cigarettes are truly harmful, whether nuclear electrical power generation is a “safe” policy to follow, whether chemicals, combustion products and waste are polluting the earth, water and air to a point where life will be extinguished and, of course, whether nuclear armaments should ever be produced.
All these debates have produced an awareness in the general public of danger from unseen and little-understood causes, and may be close to producing mass hysteria. The public, in general, wishes that these evils be stopped.

Governments, which theoretically acquiesce to the public’s wishes, are taking on the role of protectors of humans by devising means by which we are warned of dangers. As a consequence, human safety and health decisions are being institutionalized by governments. The determinations of “what is safe” and “what is an acceptable risk” are being made by anonymous technocrats in private business and government. Members of the public are never asked whether they are willing to assume a risk. Projects are presented to the public as accomplished facts. This method of presenting projects to the public has polarized public opinion into two broad sectors: those who advocate a return to the caves, and those who advocate that quantitative acceptable-risk levels be developed. Both approaches are impossible to achieve without socio-economic upheavals.
The latter approach, even though it appears logical at first glance, is insidious to the point of being unacceptable, because a few technologists with esoteric knowledge will dictate to the rest of humanity “what is safe” and “what is not safe.” Freedom of choice will be destroyed. The technocrat will be all-powerful and insulated from responsibility by anonymity. The solution to this problem is to inform the public and allow the public to influence the decision-making process.
The article in this issue, “Keeping an Eye on Our Nukes,” describes a novel experiment in which the public had a role in influencing the way the data were presented to the decision makers. The authors acknowledge that the method used is inefficient and full of perils, but it is at least a start toward involving the public in deciding “what is safe for them.”

This experiment in public involvement had two features not found in similar projects. The first is that the institution making the study (the Federal Government in this case) provided funds for the public to hire independent third-party scientists to verify that the study was unbiased. The second is that the public could, and did, make corrections to the study when it was in error. The System Safety Society, through its members, should try to be a source of independent third-party scientists who can inform the public and the institutions about “safety risks” so that decisions are made by informed people. After all, System Safety is a predictive discipline, and we know how to predict hazards.
