**Publication bias**

Publication bias is the tendency for published results, on average, to appear significant, because negative or near-neutral results are almost never published.

It is possible to estimate the effects of publication bias, as long as we are prepared to indulge in a bit of guesswork about the willingness of researchers to publish for various claimed levels of Relative Risk (RR).

Let us assume that our researchers are investigating the influence of some factor on the incidence of a rare disease, and that in reality the factor has no effect. Assume also, from knowledge of the general population, that for the given number of cases studied the expected number of incidents of the disease is ten (e.g. there are 10,000 people exposed to the factor and the known probability of getting the disease among the general population is 0.001). Then the results they would expect to get at random follow the Poisson distribution with mean 10, whose density function is:

P(k) = 10^k e^(−10) / k!

The average value is, of course, 10, which corresponds to the neutral RR of 1.0.
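This distribution and its mean are easy to check numerically; a minimal sketch in Python (the cut-off of 100 terms is arbitrary, chosen so the truncated tail is negligible):

```python
from math import exp, factorial

MEAN = 10  # expected number of cases when the factor has no effect

def poisson_pmf(k, mean=MEAN):
    """Probability of observing exactly k cases at random."""
    return mean ** k * exp(-mean) / factorial(k)

# The mean of the distribution is 10, corresponding to the neutral RR of 1.0.
average = sum(k * poisson_pmf(k) for k in range(100))
print(round(average, 6))  # → 10.0
```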

Now, the guesswork part is to invent a plausible function to represent the willingness of authors to publish. Observing the literature in general, we know that almost everyone is willing to publish an RR of 2.0, but *almost* nobody will publish an RR of less than 1.1. So a plausible willingness function is one that rises from nearly zero below an RR of 1.1 to certainty at an RR of 2.0.
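As one concrete guess (my own assumption for illustration, not the author's actual curve), take a linear ramp with a small floor below an RR of 1.1, rising to certainty at 2.0:

```python
def willingness(rr):
    """Assumed probability that a result claiming relative risk `rr`
    gets published. A hypothetical linear ramp, for illustration only."""
    if rr <= 1.1:
        return 0.02          # near-null results are almost never published
    if rr >= 2.0:
        return 1.0           # an RR of 2.0 is always published
    return 0.02 + 0.98 * (rr - 1.1) / 0.9   # linear in between
```

Any smooth S-shaped curve with the same endpoints would serve; the exact shape only changes the final numbers slightly.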

Now, to find the density function we might expect for work that is actually published, we multiply the two functions, ordinate by ordinate, and, to make the result a proper density function, divide by the total area.

This, then, is the density function representing the expected published results, under the assumptions made, when publication bias acts on a study where there is no real effect at all. Authors will calculate the significance of their results by comparing them with the first distribution above, whereas in reality they belong to the second distribution.
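Putting the pieces together, a self-contained sketch of the whole calculation. The willingness ramp here is my own assumption, so the resulting mean need not match the figure of 15.8 quoted below exactly, but the qualitative effect is the same:

```python
from math import exp, factorial

MEAN = 10  # expected cases when the factor truly has no effect (RR = 1.0)

def poisson_pmf(k):
    """Chance of observing exactly k cases at random."""
    return MEAN ** k * exp(-MEAN) / factorial(k)

def willingness(rr):
    """Assumed publication-willingness ramp (illustrative only)."""
    if rr <= 1.1:
        return 0.02
    if rr >= 2.0:
        return 1.0
    return 0.02 + 0.98 * (rr - 1.1) / 0.9

ks = range(60)  # counts beyond 60 carry negligible probability
# Multiply ordinate by ordinate; k / MEAN is the claimed relative risk.
raw = [poisson_pmf(k) * willingness(k / MEAN) for k in ks]
area = sum(raw)
published = [w / area for w in raw]  # normalised density of published results

mean_count = sum(k * p for k, p in zip(ks, published))
print(f"mean published count {mean_count:.1f} -> apparent RR {mean_count / MEAN:.2f}")
```

Even with no real effect, the mean of the published distribution sits well above 10, i.e. an apparent RR well above 1.0.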

The average value for this density function is **15.8**, which means that an RR of **1.58** is achieved (repeat: on the basis of **no real effect at all**) purely by the action of **publication bias**.

The headlines will, of course, say something like **Passive drinking causes a 58% increase in toe-nail cancer**, whereas the result is entirely spurious. This is a major reason for never accepting RRs of less than 2.0.

**Footnote:** Some readers have found difficulty with the concept of a prior distribution making a difference to a single trial. Think of it as a single roll of a loaded die compared with a fair one.