MedCalc

# Sample size calculation: Introduction

In the Sample size menu, you can calculate the required sample size for several common problems, taking into account the magnitude of the difference to be detected and the acceptable probabilities of drawing a correct or a false conclusion.

When you perform a statistical test, you will make a correct decision when you

• reject a false null hypothesis, or
• accept a true null hypothesis.

On the other hand, you can make two kinds of error:

• you can reject a true null hypothesis, or
• you can accept a false null hypothesis.

These four situations are represented in the following table.

|  | Null hypothesis = TRUE | Null hypothesis = FALSE |
|---|---|---|
| Reject null hypothesis | Type I error (α) | Correct decision |
| Accept null hypothesis | Correct decision | Type II error (β) |

For example, when you have rejected the null hypothesis in a statistical test (because P<0.05), and therefore conclude that a difference between samples exists, you can either:

• have done so correctly, and uncovered a difference where one exists;
• have rejected the null hypothesis when in fact it is true, and uncovered a difference where in fact none exists. In this case you make a Type I error. α is the (two-sided) probability of making a Type I error.

Type I error = rejecting the null hypothesis when it is true

You can reduce the probability of making a Type I error by selecting a lower significance level for the test, e.g. by rejecting the null hypothesis only when P<0.01 instead of P<0.05.
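The role of α can be illustrated by simulation. The sketch below (a hypothetical example, not part of MedCalc) repeatedly runs a one-sample z-test on data drawn under a true null hypothesis: the fraction of rejections should hover around the chosen significance level, whether 0.05 or 0.01.

```python
import math
import random

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value of a one-sample z-test with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(42)
trials = 20_000
rejections_05 = rejections_01 = 0
for _ in range(trials):
    # The null hypothesis is TRUE here: every sample comes from N(0, 1)
    sample = [random.gauss(0.0, 1.0) for _ in range(30)]
    p = z_test_p_value(sample)
    rejections_05 += p < 0.05
    rejections_01 += p < 0.01

print(f"Type I error rate at alpha=0.05: {rejections_05 / trials:.3f}")
print(f"Type I error rate at alpha=0.01: {rejections_01 / trials:.3f}")
```

Both observed rates approximate their nominal α, showing that tightening the significance level directly lowers the Type I error risk.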

On the other hand, when you accept the null hypothesis in a statistical test (because P>0.05), and conclude that there is no difference between samples, you can either:

• have correctly concluded that there is no difference;
• have accepted the null hypothesis when in fact it is false, and therefore you have failed to uncover a difference where such a difference really exists. In this case you make a Type II error. β is the probability of making a Type II error.

Type II error = accepting the null hypothesis when it is false

The power of a test is 1-β: the probability of uncovering a difference when there really is one. For example, when β is 0.10, the power of the test is 0.90 or 90%.

Power = probability of achieving statistical significance when a real difference exists

You can reduce the probability of making a Type II error, and so increase the power of the test to uncover a difference when there really is one, mainly by increasing the sample size.
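The dependence of power on sample size can be sketched with the usual normal approximation for comparing two means (a simplified illustration, not MedCalc's own algorithm): with a true difference δ, standard deviation σ, and n subjects per group, power is roughly Φ(δ/(σ·√(2/n)) − z₁₋α/₂), plus a negligible contribution from the opposite tail.

```python
import math
from statistics import NormalDist

def power_two_sample(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    true mean difference delta (normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1.0 - alpha / 2.0)               # critical value
    ncp = delta / (sigma * math.sqrt(2.0 / n_per_group))  # effect in SE units
    # Probability that the test statistic falls in either rejection region
    return nd.cdf(ncp - z_alpha) + nd.cdf(-ncp - z_alpha)

for n in (10, 30, 50, 100):
    print(f"n per group = {n:3d}: power = {power_two_sample(0.5, 1.0, n):.2f}")
```

For a difference of half a standard deviation, power climbs from about 20% at n=10 per group to over 90% at n=100, which is why increasing the sample size is the main lever against Type II errors.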

To calculate the required sample size, you must decide beforehand on:

• the required probability α of a Type I error, i.e. the required significance level (two-sided);
• the required probability β of a Type II error, i.e. the required power 1-β of the test;
• a quantification of the study objectives, i.e. decide what difference is biologically or clinically meaningful and worthwhile detecting (Neely et al., 2007).

In addition, you will sometimes need an estimate of expected sample statistics, such as the standard deviation. This may be known from previous studies.
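Once α, the power 1-β, the meaningful difference δ, and the standard deviation σ are fixed, these ingredients combine into a sample size. The sketch below uses the standard normal-approximation formula for comparing two means, n per group = 2σ²(z₁₋α/₂ + z₁₋β)²/δ² (a textbook approximation, not necessarily the exact formula MedCalc implements):

```python
import math
from statistics import NormalDist

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.80):
    """Required n per group to detect a mean difference delta between
    two equal-sized groups (two-sided normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1.0 - alpha / 2.0)  # z for the significance level
    z_beta = nd.inv_cdf(power)               # z for the required power
    n = 2.0 * (sigma / delta) ** 2 * (z_alpha + z_beta) ** 2
    return math.ceil(n)                      # round up to whole subjects

# e.g. detect a difference of 0.5 SD with 90% power at alpha = 0.05
print(sample_size_two_means(delta=0.5, sigma=1.0, power=0.90))
```

Note how the required n grows with the demanded power (85 per group at 90% power versus 63 at 80%, for this effect size) and shrinks as the difference worth detecting gets larger.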

## Sample Size Tables for Clinical Studies

David Machin, Michael J. Campbell, Say-Beng Tan, Sze-Huey Tan


Sample Sizes for Clinical, Laboratory and Epidemiology Studies includes the sample size software (SSS) and formulae and numerical tables needed to design valid clinical studies. The text covers clinical as well as laboratory and epidemiology studies and contains the information needed to ensure a study will form a valid contribution to medical research.

The authors, noted experts in the field, explain step by step the wide range of considerations that investigational teams must weigh when deriving an appropriate sample size for their planned study. The book contains sets of sample size tables with companion explanations and clearly worked-out examples based on real data. In addition, the text offers bibliography and reference sections with guidance on the principles discussed.