
Sample size calculation: Introduction

In the Sample size menu, you can calculate the required sample size for some common problems, taking into account the magnitude of differences and the probabilities of making a correct or a false conclusion.

When you perform a statistical test, you will make a correct decision when you

  • reject a false null hypothesis, or
  • accept a true null hypothesis.

On the other hand, you can make two types of errors:

  • you can reject a true null hypothesis, or
  • you can accept a false null hypothesis.

These four situations are represented in the following table.

                            Null hypothesis = TRUE    Null hypothesis = FALSE
  Reject null hypothesis    Type I error (α)          Correct decision
  Accept null hypothesis    Correct decision          Type II error (β)

For example, when you have rejected the null hypothesis in a statistical test (because P<0.05), and therefore conclude that a difference between samples exists, you can either:

  • have done so correctly, and uncovered a difference where one really exists;
  • have rejected the null hypothesis when in fact it is true, and claimed a difference where in fact none exists. In this case you have made a Type I error. α is the (two-sided) probability of making a Type I error.

Type I error = rejecting the null hypothesis when it is true

You can reduce the risk of making a Type I error by selecting a lower significance level for the test, e.g. by rejecting the null hypothesis when P<0.01 instead of P<0.05.
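
The following is a minimal simulation sketch in Python (not part of MedCalc, which performs these calculations itself). Both samples are drawn from the same distribution, so the null hypothesis is true by construction and every significant t-test result is a Type I error; the sample size, number of simulations and random seed are arbitrary illustrative choices.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)
    n_simulations = 10_000

    for alpha in (0.05, 0.01):
        false_positives = 0
        for _ in range(n_simulations):
            # Both samples come from N(0, 1): the null hypothesis is true,
            # so any significant result is a false positive (Type I error).
            a = rng.normal(loc=0.0, scale=1.0, size=30)
            b = rng.normal(loc=0.0, scale=1.0, size=30)
            _, p = ttest_ind(a, b)
            if p < alpha:
                false_positives += 1
        # The observed rate should be close to the chosen alpha.
        print(f"alpha = {alpha}: observed Type I error rate = "
              f"{false_positives / n_simulations:.3f}")

Lowering α from 0.05 to 0.01 reduces the observed false-positive rate accordingly.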

On the other hand, when you accept the null hypothesis in a statistical test (because P>0.05), and conclude that there is no difference between samples, you can either:

  • have correctly concluded that there is no difference;
  • have accepted the null hypothesis when in fact it is false, and therefore failed to uncover a difference that really exists. In this case you have made a Type II error. β is the probability of making a Type II error.

Type II error = accepting the null hypothesis when it is false

The power of a test is 1-β: the probability of uncovering a difference when there really is one. For example, when β is 0.10, the power of the test is 0.90, or 90%.

Power = probability of achieving statistical significance

You can reduce the risk of making a Type II error, and increase the power of the test to uncover a difference when there really is one, mainly by increasing the sample size.
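
As a sketch of this relationship, the following Python fragment computes the power of an independent-samples t-test for increasing sample sizes using the statsmodels package; the effect size (Cohen's d = 0.5) and significance level are illustrative assumptions, not MedCalc defaults.

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for n in (10, 20, 50, 100):
        # Power of a two-sided independent-samples t-test with n subjects
        # per group, a standardized effect size of 0.5 and alpha = 0.05.
        power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05,
                               alternative='two-sided')
        print(f"n = {n:>3} per group: power = {power:.2f}")

With these assumptions the power rises from well below 50% at n = 10 to above 90% at n = 100, which is why increasing the sample size is the usual way to control β.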

To calculate the required sample size, you must decide beforehand on:

  • the required probability α of a Type I error, i.e. the required significance level (two-sided);
  • the required probability β of a Type II error, i.e. the required power 1-β of the test;
  • a quantification of the study objectives, i.e. deciding what difference is biologically or clinically meaningful and worth detecting (Neely et al., 2007).

In addition, you will sometimes need an idea of expected sample statistics such as the standard deviation. This may be known from previous studies.
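
Putting these three ingredients together, here is a sketch of such a calculation for comparing two means, again using statsmodels rather than MedCalc itself; the meaningful difference and the standard deviation are made-up illustrative values.

    import math
    from statsmodels.stats.power import TTestIndPower

    meaningful_difference = 5.0   # smallest difference worth detecting
    expected_sd = 10.0            # e.g. taken from a previous study
    effect_size = meaningful_difference / expected_sd   # Cohen's d

    # Solve for the sample size per group, given alpha, power and effect size.
    n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                              alpha=0.05,   # Type I error
                                              power=0.90,   # 1 - beta
                                              alternative='two-sided')
    print(f"Required sample size: {math.ceil(n_per_group)} per group")

With these inputs the calculation asks for roughly 86 subjects per group; a smaller meaningful difference or a larger standard deviation would increase the required sample size.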

Literature

  • Machin D, Campbell MJ, Tan SB, Tan SH (2009) Sample size tables for clinical studies. 3rd ed. Chichester: Wiley-Blackwell.
  • Neely JG, Karni RJ, Engel SH, Fraley PL, Nussenbaum B, Paniello RC (2007) Practical guides to understanding sample size and minimal clinically important difference (MCID). Otolaryngology - Head and Neck Surgery, 143:29-36.
