When people hear a claim like “air pollution causes asthma” or “coffee reduces the risk of heart disease,” it can sound convincing at first. However, proving that one thing truly causes another is not so simple. In science, and especially in public health, cause and effect cannot be established by a single study or casual observation. It requires careful research, rigorous methods, and a structured way to weigh evidence.
This is where the Bradford Hill criteria come in. Formulated by the British medical statistician Austin Bradford Hill in 1965, this set of guidelines helps researchers evaluate whether a relationship between two factors is likely to be causal. Hill's thinking grew out of his earlier work on the link between cigarette smoking and lung cancer, but the criteria have since been applied to countless other health questions. From environmental risks to new medical treatments, they remain important in modern epidemiology.
Temporal Relationship. Cause Must Come Before Effect
The first and most fundamental principle is timing. For a factor to be considered a cause, it must come before the effect. If researchers claim that a high-sugar diet leads to diabetes, they must show that increased sugar consumption occurs before the onset of the disease. If the order is reversed, the claim cannot hold.
This rule may seem obvious, yet in practice it requires careful design of studies. Researchers use cohort studies, where people are followed over time, to ensure that the exposure is measured before the outcome develops. In the case of smoking and lung cancer, scientists showed that people who smoked regularly developed cancer years later, not the other way around. Without this temporal sequence, even the strongest associations can be misleading.
Strength of Association. How Strong Is the Link?
Once timing is clear, the next question is how strong the association is. This is quantified with statistics such as the relative risk or the odds ratio, which compare how often the outcome occurs in exposed and unexposed groups. A large and consistent difference between those groups makes it harder to dismiss the finding as coincidence.
Consider the example of seat belt use and car crash survival. Studies consistently show that people wearing seat belts are far less likely to die in a crash compared to those without. The effect is large, clear, and unlikely to be explained by other factors. While smaller associations can still be real, stronger ones inspire greater confidence in causation.
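To make these measures concrete, here is a minimal sketch in Python. The counts are invented purely for illustration, not drawn from any real study: a hypothetical cohort in which 90 of 1,000 exposed people and 10 of 1,000 unexposed people develop a disease.

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of the outcome in the exposed group divided by risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds of the outcome among the exposed divided by odds among the unexposed."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical cohort: 90 of 1,000 exposed vs 10 of 1,000 unexposed develop disease.
rr = relative_risk(90, 1000, 10, 1000)
oratio = odds_ratio(90, 910, 10, 990)
print(f"relative risk: {rr:.1f}")   # 9.0 -- exposed are nine times as likely to fall ill
print(f"odds ratio: {oratio:.1f}")  # 9.8 -- close to the relative risk when the outcome is rare
```

A relative risk of 9 is a strong association by any standard; values close to 1 are much easier to explain away by bias or chance.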
Dose-Response Relationship. More Exposure, More Risk
If increasing exposure leads to a higher likelihood of the outcome, the evidence for causation becomes stronger. This is often called a dose-response effect. For example, in studies of alcohol consumption, the risk of liver damage rises with the amount of alcohol consumed. Light drinkers may have a small risk, while heavy drinkers face a much greater one.
This pattern is not only found in lifestyle factors but also in environmental exposures. If rising levels of fine particulate matter in the air are linked to higher rates of respiratory disease, and cleaner air leads to lower rates, the case for a causal connection strengthens. However, researchers also acknowledge that some relationships have thresholds, meaning a harmful effect appears only once exposure passes a certain level.
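The dose-response pattern is easy to check once risks are calculated per exposure category. The sketch below uses invented counts for four exposure levels and simply tests whether risk rises monotonically with dose:

```python
# Hypothetical counts: (exposure level, cases, group size). Invented for illustration.
groups = [
    ("none",      5, 1000),
    ("light",    12, 1000),
    ("moderate", 30, 1000),
    ("heavy",    70, 1000),
]

# Risk of the outcome within each exposure category.
risks = [(label, cases / total) for label, cases, total in groups]
for label, risk in risks:
    print(f"{label:>8}: {risk:.3f}")

# A steady rise in risk with exposure supports (but does not by itself prove) causation.
monotonic = all(risks[i][1] < risks[i + 1][1] for i in range(len(risks) - 1))
print("risk increases with dose:", monotonic)  # True
```

In real analyses, epidemiologists would also apply a formal trend test, but the underlying question is the same one this loop asks.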
Consistency. The Relationship Appears in Different Studies
One study alone rarely proves causation. If a relationship is real, it should appear across different settings, using different methods, and in different populations. This is why replication is a core part of science. When independent researchers find the same link, the probability that the result is due to chance or bias decreases.
The smoking and lung cancer story is a classic example. From North America to Europe to Asia, studies over decades have found similar results. Even when the research designs varied, the association persisted. Consistency across geography, culture, and method builds a more convincing case than a single impressive finding.
Plausibility. Does It Make Biological Sense?
A claim is more convincing when it fits with what we know about how the body works. For example, it is biologically plausible that ultraviolet radiation from sunlight can cause skin cancer because radiation can damage DNA in skin cells. Similarly, it is plausible that a high-salt diet contributes to high blood pressure, as salt influences fluid balance and blood vessel function.
However, plausibility has its limits. Sometimes new discoveries change what we thought we knew. When two Australian researchers, Barry Marshall and Robin Warren, proposed that a bacterium, Helicobacter pylori, could cause stomach ulcers, the idea seemed implausible at the time. Yet they proved it through research, eventually winning the Nobel Prize. Plausibility helps guide judgment, but science must remain open to surprises.
Considering Alternative Explanations. Ruling Out Other Causes
Before concluding that a factor causes an outcome, researchers must rule out other possible explanations. This means looking carefully for confounding variables, which are other factors that might explain the association. If a study finds that people who drink green tea live longer, is it the tea itself, or is it that tea drinkers also tend to exercise more and eat healthier diets?
Good research uses statistical adjustments and careful study design to minimize these influences. In some cases, additional studies are needed to clarify whether the observed link is genuine or the result of another hidden factor. Without this step, the risk of jumping to false conclusions is high.
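To see how a confounder can manufacture an association, consider a small numerical sketch of the green tea example. Every number below is invented for illustration: exercise is assumed to lower mortality, and tea drinkers are assumed to exercise more often. Within each exercise group, tea makes no difference at all, yet the crude comparison makes tea look protective.

```python
# (deaths, group size) for tea drinkers vs non-drinkers, split by exercise habit.
# All counts are hypothetical, chosen only to illustrate confounding.
strata = {
    "exercisers":     {"tea": (40, 800), "no_tea": (10, 200)},
    "non_exercisers": {"tea": (40, 200), "no_tea": (160, 800)},
}

def rate(deaths, total):
    """Mortality rate within a group."""
    return deaths / total

# Crude (unstratified) comparison: pool the strata and compare overall rates.
crude = {}
for group in ("tea", "no_tea"):
    deaths = sum(s[group][0] for s in strata.values())
    total = sum(s[group][1] for s in strata.values())
    crude[group] = rate(deaths, total)

print(f"crude mortality -- tea: {crude['tea']:.0%}, no tea: {crude['no_tea']:.0%}")
# Stratified comparison: within each exercise group, tea makes no difference.
for name, s in strata.items():
    print(f"{name}: tea {rate(*s['tea']):.0%}, no tea {rate(*s['no_tea']):.0%}")
```

Here the crude rates differ (8% versus 17%) only because tea drinkers are concentrated in the low-risk, exercising stratum. Stratifying by the confounder makes the apparent benefit of tea vanish, which is exactly what adjustment techniques in real studies aim to reveal.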
Experiment. Changing the Factor Changes the Outcome
Experimental evidence is among the strongest forms of support for causation. If changing or removing the suspected cause alters the outcome, the relationship gains credibility. Clinical trials are a direct example, where researchers assign participants to receive or not receive an intervention and observe the results.
Sometimes experiments occur naturally. For example, when a city removes lead from its water supply, researchers can compare health outcomes before and after the change. If cases of lead poisoning drop significantly, this strengthens the argument that lead in water was the cause. While experiments are not always possible or ethical, when they are, they can provide powerful evidence.
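The arithmetic behind such a before/after comparison is simple. The figures below are hypothetical, standing in for surveillance counts from the lead-in-water scenario:

```python
# Hypothetical surveillance counts before and after lead is removed from the water supply.
before_cases, before_pop = 120, 50_000
after_cases, after_pop = 30, 50_000

rate_before = before_cases / before_pop * 1000   # cases per 1,000 people
rate_after = after_cases / after_pop * 1000

print(f"before: {rate_before:.1f} per 1,000")                      # 2.4
print(f"after:  {rate_after:.1f} per 1,000")                       # 0.6
print(f"relative reduction: {1 - rate_after / rate_before:.0%}")   # 75%
```

A drop of this size, timed to the intervention, is the kind of signal that makes a natural experiment persuasive, though real analyses would also check for other changes occurring over the same period.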
Specificity. One Cause, One Effect
Specificity refers to a situation where a single cause leads to a single effect. This is rare in public health because most diseases have multiple causes. For example, heart disease can result from a mix of diet, exercise habits, genetics, and other factors. Nevertheless, when a specific cause produces a specific effect, it adds clarity. The bacterium Mycobacterium tuberculosis causing tuberculosis is a clear example of specificity.
Even though this criterion is considered the weakest, it still has value. It helps narrow down the focus of research and can strengthen the case when present alongside other criteria.
Coherence. Fits with Existing Knowledge
Coherence means that the proposed causal relationship does not conflict with the bulk of existing scientific knowledge. If a new finding aligns with well established principles, it is easier to accept. If it contradicts them, there must be very strong evidence to support it.
For example, the idea that a virus could cause certain cancers might have seemed strange decades ago, but research has shown that human papillomavirus can lead to cervical cancer. Over time, this new understanding became coherent with broader knowledge of disease mechanisms. Science evolves, but coherence ensures that new claims are evaluated within a logical framework.
Bringing the Criteria Together in Practice
The Bradford Hill criteria are not a simple checklist where all boxes must be ticked. Instead, they form a flexible framework for evaluating whether the evidence suggests a real causal relationship. Some criteria, like temporal relationship, are essential. Others, like specificity, may be absent in many valid causal links.
In public health, these principles guide decisions that can save lives. They help determine whether to issue warnings, launch prevention programs, or invest in further research. For example, they have been used to link asbestos exposure to lung disease, secondhand smoke to respiratory illness, and high cholesterol to heart disease. By applying these criteria, scientists ensure that health recommendations are grounded in the best available evidence.