A theory has only the alternative of being right or wrong. A model has a third possibility; it may be right, but irrelevant.
If you put tomfoolery into a computer, nothing comes out of it but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.
This is one of the most powerful tools available to science and engineering and, like all powerful tools, it brings dangers as well as benefits. Andrew Donald Booth said “Every system is its own best analogue”. As a scientist he should, of course, have said in what sense he meant “best”. The statement is true in terms of accuracy but not in terms of utility. If you want to determine the optimum shape for the members of a bridge structure, for example, you cannot build half a dozen bridges and test them to destruction, but you can try large numbers of variations in a computer model. Computers allow us to optimise designs in ways that were unavailable in times past. Nevertheless, the very flexibility of a computer program, the ease with which a glib algorithm can be implemented in a few lines of code and the difficulty of fully understanding its implications can pave the path to Cloud Cuckoo Land.
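The bridge example can be sketched in a few lines: sweep a design parameter across many candidate values in a model and keep the cheapest design that still meets a constraint. The beam formulas and numbers below are invented purely for illustration; they stand in for whatever real structural model an engineer would use.

```python
# Toy illustration (not a real structural model): sweep the depth of a
# hypothetical beam and pick the cheapest design that still meets a
# deflection limit. All formulas and numbers here are invented.

def deflection(depth_m):
    """Midspan deflection of a hypothetical beam; stiffness assumed to grow as depth**3."""
    return 1.0 / depth_m**3  # arbitrary units

def cost(depth_m):
    """Material cost assumed proportional to depth."""
    return 10.0 * depth_m

def best_design(depths, max_deflection):
    """Try every candidate 'in the computer' and keep the cheapest feasible one."""
    feasible = [d for d in depths if deflection(d) <= max_deflection]
    return min(feasible, key=cost) if feasible else None

depths = [0.2 + 0.01 * i for i in range(200)]   # candidates from 0.20 m upwards
choice = best_design(depths, max_deflection=5.0)
```

Each candidate costs a fraction of a second to evaluate in the model, where a physical prototype would cost a bridge.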
The main hazards of computer modelling can be summarised under a few headings:
Assumptions
At almost every stage in the development of a model it is necessary to make assumptions, perhaps hundreds of them. These might or might not be considered reasonable by others in the field, but they rapidly become hidden. Some of the larger models of recent times deal with the interactions of variables whose very nature is virtually unknown to science.
Complexity
In olden times, if a scientist published a theory, all the stages of reasoning that led to it could be critically examined by other scientists. A computer model, by contrast, can within a few days of development become so complex that it is virtually impossible for an outsider to understand it fully. Indeed, where it is the result of a team effort, it becomes unlikely that any individual understands it.
Omissions
Often, vital elements are left out of a model, and the effect of the omissions is realised only if and when the model is tested against reality. A notorious example is the Millennium Bridge in London. It was only after it was built and people started to walk on it that the engineers realised that they had created a resonant structure. This could have been modelled dynamically if they had thought about it. Some models that produce profound political and economic consequences have never faced such a challenge.
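The point about resonance can be illustrated with a generic damped driven oscillator. This is a sketch with invented parameters, not the bridge's actual dynamics: it merely shows that driving a structure near its natural frequency produces a far larger response than a static analysis would ever reveal.

```python
# A minimal sketch of why resonance only shows up in a dynamic model.
# Generic damped driven oscillator, not the engineers' actual bridge
# model; the parameter values are illustrative only.

import math

def steady_amplitude(drive_freq, natural_freq=1.0, damping=0.05, force=1.0):
    """Closed-form steady-state amplitude of x'' + 2*z*w0*x' + w0**2*x = F*sin(wd*t)."""
    w0, wd, z = natural_freq, drive_freq, damping
    return force / math.sqrt((w0**2 - wd**2)**2 + (2 * z * w0 * wd)**2)

# Driving near the natural frequency gives a far larger response than
# driving well away from it.
at_resonance = steady_amplitude(1.0)    # about 10x the static deflection here
off_resonance = steady_amplitude(3.0)
```

A static calculation corresponds to `drive_freq = 0` and would report nothing alarming; only the dynamic treatment exposes the peak.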
Bias
The human subconscious is a powerful force. Even in relatively simple physical measurements it has been shown that the results can be affected by the desires and expectations of the experimenter. In a large computer model this effect can be multiplied a thousandfold. Naturally, we discount the possibility of deliberate fraud.
Sophistication
The word “sophistication” literally means falsification or adulteration, but it has come to mean advanced and efficient. In large computer models, however, the literal meaning is often the more applicable. The structure simply becomes too large and complex for the inputs that support it.
Verification
When we were pioneering the applications of computer modelling about forty years ago, we soon came to the conclusion that a model is useless unless it can be tested against reality. If a model gives a reasonably accurate prediction on a simple system, then we have reasonable, but not irrefutable, grounds for believing it to be accurate in other circumstances. Unfortunately, this is one of the truisms that have been lost in the enthusiasms of the new age.
Chaos
Large models are often chaotic, which means that very small changes in the input variables produce very large changes in the output variables. Some very simple processes can amplify errors: taking the difference between two numbers of similar magnitude, for example. The errors (or noise) are then propagated through the system. If feedback mechanisms are present, it is quite possible for systems to operate on the noise alone.
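Both effects are easy to demonstrate in a few lines. The sketch below uses the logistic map, a standard textbook example of chaos (not any particular large model), together with a single subtraction of nearly equal numbers to show how accurate digits are discarded and replaced by noise.

```python
# Two small demonstrations of the effects described above; the logistic
# map is a standard toy example of chaos, and the subtraction is a
# standard example of cancellation error. Neither models any real system.

def max_divergence(x0, y0, steps):
    """Largest gap reached between two logistic-map orbits (x -> 4x(1-x))."""
    x, y, worst = x0, y0, 0.0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst

# Inputs differing only in the twelfth decimal place soon disagree completely.
gap = max_divergence(0.3, 0.3 + 1e-12, steps=60)

# Subtracting nearly equal numbers discards most of the accurate digits;
# what survives is rounding noise, which later calculations then propagate.
noise_only = (1.0 + 1e-15) - 1.0                    # should be 1e-15
relative_error = abs(noise_only - 1e-15) / 1e-15    # roughly ten per cent
```

The first demonstration shows the chaotic amplification of input differences; the second shows how a single innocuous subtraction can turn a fifteen-digit-accurate value into one with barely one accurate digit.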
Many of the computer models that receive great media coverage and political endorsement fail under some of these headings; and, indeed, some fail under all of them. Yet they are used as the excuse for profound, and often extremely damaging, policies that affect everyone.
That is why computer models are dangerous tools.