When you work with biostatistics, you are not just crunching numbers. You are applying a way of thinking about evidence. Over time, different schools of thought have developed. These approaches are called modes of inference, and they are not locked in competition. In fact, methods from one school can often be interpreted in a meaningful way through another.
Today, two main paradigms dominate: Bayesian inference and frequentist inference. Both are widely used, and each offers its own perspective on how to handle uncertainty in data. Understanding them is essential if you want to read results critically or design your own analysis.
Bayesian Inference. Updating Beliefs with Evidence
Bayesian inference is built on Bayes’s theorem, a mathematical formula that helps you update your degree of belief in a proposition when new evidence appears. Instead of thinking in terms of absolute certainty, you start with a prior belief, then adjust it based on the strength of the data you collect.
The core idea is that probabilities can represent degrees of belief.
For example, suppose you are testing whether a new drug is effective. You begin with a prior probability based on earlier studies. As your trial produces results, you update that probability to reflect the new evidence. This process makes Bayesian methods appealing in situations where evidence builds over time, such as disease surveillance or adaptive clinical trials.
A common use of Bayesian biostatistics is hypothesis testing, where you directly calculate the probability that a hypothesis is true given the data. For you as a decision‑maker, this can feel more intuitive than the binary accept‑or‑reject conclusions of other methods.
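The drug-trial update described above can be sketched directly with Bayes' theorem: the posterior is the prior times the likelihood, divided by the total probability of the evidence. The prior and likelihood values below are hypothetical, chosen only to show the mechanics:

```python
# A minimal sketch of a Bayesian update for the drug example above.
# All numbers are hypothetical, chosen only to illustrate the mechanics.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Prior belief from earlier studies: a 30% chance the drug is effective.
prior = 0.30

# Chance of seeing this trial result if the drug works vs. if it does not.
posterior = bayes_update(prior, likelihood_if_true=0.80,
                         likelihood_if_false=0.20)
print(f"Posterior probability the drug is effective: {posterior:.2f}")  # 0.63
```

Notice that a single favorable result moves the belief from 0.30 to about 0.63; in an adaptive trial, the posterior from one stage would become the prior for the next.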
Frequentist Inference. Long‑Run Frequencies and Probability
Frequentist inference takes a different route. It treats probability as the long‑run frequency of an event occurring in repeated experiments. In this approach, you do not assign a probability to a hypothesis being true. Instead, you ask whether your observed data would be unusual if the hypothesis were true.
One of the main tools here is the significance test. You calculate a test statistic from your sample, compare it to a reference distribution, and decide whether your result is extreme enough to reject the null hypothesis. Another frequentist tool is the confidence interval. This is a range of values computed from your sample that would contain the true value a certain percentage of the time if the study were repeated many times.
For example, if your sample produces a 95 percent confidence interval for the mean blood pressure reduction, you can say that in the long run, intervals calculated in the same way would capture the true value 95 percent of the time. While this interpretation can feel abstract, it remains a cornerstone of traditional statistical reporting.
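The blood-pressure interval can be sketched with the standard mean ± critical value × standard error construction. The data below are made up, and the normal critical value (1.96) is used as a large-sample approximation; for a sample this small, a t critical value would be the more careful choice:

```python
# A sketch of the 95% confidence interval described above, using a normal
# approximation (z = 1.96) and hypothetical blood-pressure data.
import statistics
from math import sqrt

# Hypothetical reductions in systolic blood pressure (mmHg) for 10 patients.
reductions = [8.2, 5.1, 9.7, 6.3, 7.8, 4.9, 10.2, 6.8, 7.1, 8.5]

mean = statistics.mean(reductions)
sem = statistics.stdev(reductions) / sqrt(len(reductions))  # standard error
z = 1.96  # normal critical value; a t critical value is better for n = 10

lower, upper = mean - z * sem, mean + z * sem
print(f"95% CI for mean reduction: ({lower:.1f}, {upper:.1f}) mmHg")
```

The frequentist reading is about the procedure, not this one interval: if you reran the study many times and recomputed the interval each time, about 95 percent of those intervals would contain the true mean reduction.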
Choosing Between Bayesian and Frequentist Approaches
You might wonder which method is better. The truth is, it depends on your question and your audience.
- Bayesian methods shine when you have prior knowledge you want to incorporate and when you need probabilities that speak directly to belief or decision‑making.
- Frequentist methods remain powerful when you want results that align with established regulatory or academic standards, where p‑values and confidence intervals are the expected outputs.
In practice, you will often see both approaches applied to the same data. When interpreted carefully, each can offer insights that the other might miss.
Levels of Measurement in Biostatistics
Beyond inference methods, you also need to understand the type of data you are working with. Not all measurements are created equal, and knowing the level of measurement will guide how you analyze them.
Biostatistics uses four main levels: ratio, interval, ordinal, and nominal. Each one offers a different degree of mathematical flexibility and analytical potential.
Ratio Measurements. Absolute Zero and Full Mathematical Power
Ratio measurements have an absolute zero point. This means that a value of zero represents the complete absence of the quantity being measured. You can compare differences, calculate averages, and even compute ratios between values.
Examples include:
- Height in centimeters
- Weight in kilograms
- Blood pressure in millimeters of mercury (when considering absolute values)
With ratio data, you can say that one value is twice as large as another. If one patient weighs 80 kilograms and another weighs 40 kilograms, you can confidently state the first is twice as heavy. This level of measurement offers the most flexibility for statistical analysis.
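Because zero means the complete absence of weight, every arithmetic comparison below is legitimate; the 80 kg and 40 kg values echo the example above:

```python
# Ratio data supports the full range of arithmetic; weights are in kilograms.
weight_a, weight_b = 80, 40

print(weight_a - weight_b)        # differences are meaningful: 40 kg apart
print(weight_a / weight_b)        # ratios are meaningful: 2.0, "twice as heavy"
print((weight_a + weight_b) / 2)  # means are meaningful: 60.0 kg on average
```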
Interval Measurements. Meaningful Differences but No True Zero
Interval measurements also have meaningful differences between values, but the zero point is arbitrary. This means you can subtract values to obtain meaningful differences, but you cannot make direct ratio statements.
The most familiar example is temperature:
- In Celsius or Fahrenheit, the distance between 20 and 30 degrees is the same as between 30 and 40 degrees.
- However, 40 degrees is not “twice as hot” as 20 degrees because zero is not the absence of heat.
Interval data supports many statistical techniques, but you must be careful when interpreting ratios or percentages.
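One way to see why the "twice as hot" claim fails is to check whether the ratio survives a change of units. The Celsius-to-Fahrenheit conversion below is standard; the comparison is a sketch of the argument:

```python
# Why ratios break for interval data: the apparent "twice as hot" ratio in
# Celsius does not survive a change of scale, because zero is arbitrary.

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

print(40 / 20)  # 2.0 -- looks like "twice as hot" in Celsius
print(celsius_to_fahrenheit(40) / celsius_to_fahrenheit(20))  # 104/68, ~1.53
```

A genuinely meaningful ratio would be the same on any scale, as it is for ratio data like weight; here the answer depends on the units, so the ratio carries no physical meaning.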
Ordinal Measurements. Ordered but Uneven Steps
Ordinal measurements provide a ranking of values but do not guarantee equal spacing between them. You can tell which is greater or lesser, but not by how much.
Examples include:
- Pain scales from “no pain” to “severe pain”
- Tumor grading in pathology reports
- Likert scale responses such as “strongly disagree” to “strongly agree”
With ordinal data, median and percentile measures make sense, but means and standard deviations may not reflect meaningful differences. Treating ordinal data as if it were interval can lead to misleading conclusions.
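A small sketch with hypothetical coded pain scores shows the contrast between the two summaries:

```python
# Ordinal pain scores coded 0-3 ("no pain" to "severe pain"). The median is
# a legitimate order-based summary; the mean quietly assumes that the steps
# between categories are equal in size, which ordinal data does not guarantee.
import statistics

# Hypothetical coded responses from 7 patients.
pain_scores = [0, 1, 1, 2, 2, 3, 3]

print(statistics.median(pain_scores))  # 2 -- the middle-ranked response
print(statistics.mean(pain_scores))    # ~1.71 -- assumes interval spacing
```

The median answers "what is the middle-ranked response?" using only the ordering, while the mean of the codes treats the gap between "no pain" and "mild" as identical to the gap between "moderate" and "severe".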
Nominal Measurements. Categories Without Order
Nominal measurements are purely categorical. They place data into groups that have no inherent ranking.
Examples include:
- Blood type (A, B, AB, O)
- Type of infection (viral, bacterial, fungal)
- Geographic region of residence
For nominal data, you use counts, proportions, and chi‑square tests rather than arithmetic operations. Any attempt to “order” these values would be artificial.
Categorical vs Quantitative Variables
Sometimes you will see nominal and ordinal variables grouped together as categorical variables. They describe qualities or groupings rather than measurable quantities.
Ratio and interval variables, by contrast, are considered quantitative. They can be discrete, such as the number of hospital visits in a year, or continuous, such as blood cholesterol level. Knowing which type you have is essential for selecting the right statistical methods and avoiding errors in interpretation.