Power and sample size calculation

Christian Baghai
Feb 10, 2021


Power and sample size calculations determine how many subjects we have to recruit to our study in order to achieve our objective.

In the majority of studies we find ourselves in a situation where we have two hypotheses: a null hypothesis and an alternative hypothesis. The null hypothesis represents the status quo, and we can either accept it or reject it. In a clinical trial, rejecting the null hypothesis means concluding that the new compound is effective. However, it is possible to reject the null hypothesis and be wrong about it; in other words, the result can be obtained by chance.

There are two ways to reduce the risk of this happening.

1. Eliminate bias in the study design using techniques such as randomisation, blinding, etc.

2. Increase the number of subjects studied.

However, the more subjects we recruit into our study, the more expensive it becomes.

A quick look at the concept of power

Power and sample size estimations are used to determine how many subjects are needed to answer the research question.

In trying to determine whether the two groups are the same (accepting the null hypothesis) or different (rejecting the null hypothesis), we can make two kinds of error: a type I error or a type II error.

A type I error occurs when we reject the null hypothesis incorrectly.

A type II error occurs when we accept the null hypothesis incorrectly.

Power calculations tell us how many patients are required to keep the risk of a type I or a type II error acceptably low.

The term power is commonly used with reference to all sample size estimations. More specifically, power refers to the number of patients required to avoid a type II error in a comparative study.
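To make the two kinds of error concrete, here is a minimal simulation sketch in Python (standard library only, with illustrative numbers I have chosen rather than values from any particular trial). It repeatedly runs a two-sample z-test: when the true difference is zero, the rejection rate estimates the type I error rate; when a real difference exists, the rejection rate estimates the power, i.e. one minus the type II error rate.

```python
import random
from statistics import NormalDist, mean

def simulate_rejection_rate(n_per_group, true_diff, sigma=1.0, alpha=0.05,
                            n_sims=10_000, seed=1):
    """Simulate a two-sample z-test (known sigma) and return the rejection rate.

    With true_diff == 0 the rejection rate estimates the type I error rate;
    with true_diff != 0 it estimates the power (1 - type II error rate).
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    se = sigma * (2 / n_per_group) ** 0.5          # SE of the difference in means
    rejections = 0
    for _ in range(n_sims):
        group_a = [rng.gauss(0.0, sigma) for _ in range(n_per_group)]
        group_b = [rng.gauss(true_diff, sigma) for _ in range(n_per_group)]
        z = (mean(group_b) - mean(group_a)) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sims

# No true difference: rejections are type I errors, rate should be near alpha.
print("Type I error rate ~", simulate_rejection_rate(n_per_group=30, true_diff=0.0))
# A real difference exists: failures to reject are type II errors,
# and the rejection rate is the power of the test.
print("Power (1 - type II) ~", simulate_rejection_rate(n_per_group=30, true_diff=0.5))
```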

What factors can change the power of a study?

Several factors can affect the power of a study. Some of them we can control, and others we cannot.

For any given result from a sample we can determine a probability distribution around that value. The best-known example of this is the 95% confidence interval.

The width of the confidence interval shrinks as the number of subjects studied increases (roughly in inverse proportion to the square root of the sample size).

So the more people we study, the more precise we can be about our result.
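As a quick illustration of that relationship, the sketch below (assuming, purely for illustration, a known standard deviation of 10 units) computes the half-width of a 95% confidence interval for a mean at several sample sizes; each time the sample size quadruples, the half-width roughly halves.

```python
from statistics import NormalDist

def ci_half_width(sigma, n, confidence=0.95):
    """Half-width of a confidence interval for a mean when the SD is known."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * sigma / n ** 0.5

# Illustrative standard deviation of 10 units (an assumed value).
for n in (10, 40, 160, 640):
    print(f"n = {n:4d}  ->  95% CI half-width = {ci_half_width(10, n):.2f}")
```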

A study with a small sample size will have a wide confidence interval and will show a significant difference between the two groups only if there is indeed a large difference between them.

Also, if we are trying to detect very small differences between the two groups, then very precise estimates of the true population value are required, because we need to pin down the true value for each treatment group very precisely. On the other hand, if we find, or are looking for, a large difference, a fairly wide probability distribution may be acceptable.

So, if we are looking for a big difference between the two groups we can accept a wide probability distribution.

If we want to detect a small difference, then we need great precision and a narrow probability distribution.
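Here is a rough sketch of how this plays out in a sample size calculation, using the standard two-sided z-approximation for comparing two means. The standard deviation of 10 units, 5% significance level and 80% power are values I am assuming for illustration, not taken from any specific study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate subjects per group needed to detect a mean difference `delta`
    between two groups with common SD `sigma` (two-sided z-approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (sigma * (z_alpha + z_beta) / delta) ** 2)

# Assumed SD of 10 units: smaller differences need far more subjects.
for delta in (2, 5, 10):
    print(f"difference = {delta:2d}  ->  n per group = {n_per_group(delta, 10)}")
```

With these assumptions, detecting a difference of 2 units requires roughly 400 subjects per group, while a difference of 10 units requires only about 16.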
