
Method comparison: Comparison of multiple methods


Comparison of multiple methods is an extension of the Bland-Altman plot (Bland & Altman, 1986 and 1999) for more than two methods. For each method, the differences from a reference method are plotted against the values of that reference method (Krouwer, 2008).

The procedure produces multiple bias plots in one single display with all axes aligned to facilitate comparison of the different methods.

If duplicate or multiple measurements (with two methods) were performed per subject, you should use the Bland-Altman plot with multiple measurements per subject.

Required input

Dialog box for multiple method comparison

Select the variables for the methods you want to compare.


  • First method is the reference method: the measurements of the first selected method are the reference values (informational; this option is fixed).
  • Plot differences or ratios
    • Plot differences: Differences are calculated as measurement − reference (*), so a positive difference indicates an overestimation and a negative difference an underestimation.
    • Plot differences as %: With this option the differences will be expressed as percentages of the values on the axis (i.e. proportionally to the magnitude of measurements). This option is useful when there is an increase in variability of the differences as the magnitude of the measurement increases.
    • Plot ratios: When this option is selected, the ratios of the measurements are plotted instead of the differences (avoiding the need for a logarithmic transformation). This option is also useful when there is an increase in variability of the differences as the magnitude of the measurement increases. Ratios are calculated as measurement/reference (*), so a ratio > 1 indicates an overestimation and a ratio < 1 an underestimation.
    (*) You can reverse this by selecting the option Reference-Variable.
  • Draw line of equality: useful for detecting a systematic difference.
  • 95% CI of mean difference (*): the 95% Confidence Interval of the mean difference illustrates the magnitude of the systematic difference. If the line of equality is not in the interval, there is a significant systematic difference.
  • 95% CI of limits of agreement: shows error bars representing the 95% confidence interval for both the upper and lower limits of agreement.
  • Draw regression line of differences (*): this regression line may help to detect a proportional difference. Optionally, you can select to show the 95% confidence interval of this regression line.
  • Click Subgroups if you want to identify subgroups in the plots. A new dialog box is displayed in which you can select a categorical variable. The graph will use different markers for the different categories in this variable. Note that a legend cannot be displayed in this plot. To identify the subgroups, double-click on one of the observations to see its identification in the spreadsheet.

(*) or ratios when this option was selected.
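The three plotting options correspond to three quantities derived from the same paired data. A minimal sketch with hypothetical measurements (MedCalc computes these internally from the selected variables):

```python
import numpy as np

# Hypothetical paired measurements on the same subjects:
# 'ref' is the reference (first selected) method, 'y' a method compared to it.
ref = np.array([10.2, 12.5, 15.1, 18.3, 22.0])
y   = np.array([10.8, 12.1, 15.9, 19.0, 23.1])

diff     = y - ref                # Plot differences: measurement - reference
diff_pct = 100 * (y - ref) / ref  # Plot differences as % of the reference value
ratio    = y / ref                # Plot ratios: measurement / reference
```

A positive difference (or a ratio above 1) then corresponds to an overestimation by the compared method; selecting the Reference-Variable option reverses the sign of the differences (or inverts the ratios).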


The results panel displays the following information:

Results for multiple method comparison

  • Identifier for the reference value: the variable for the reference method.
  • Systematic differences: n (sample size), mean, SD and 95% CI of the differences.
  • Limits of agreement: the lower and upper limits of agreement with 95% CI.
  • Parameters of the regression of the differences against the reference value: intercept and slope with 95% CI, and P-value for slope.
  • Absolute percentage error: the absolute percentage error (APE) is calculated as 100 × ABS[(y − ref)/ref], where y is the observation and ref is the reference value. MedCalc calculates the median APE (MdAPE) and the 95th percentile of the absolute percentage error. The 95th percentile APE is interpreted as follows: with 95% certainty, the percentage difference between a measurement and the reference value is not expected to exceed this value. MedCalc also reports the 95% confidence intervals for both statistics if the sample size is large enough.
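The statistics reported in this panel can be sketched as follows (hypothetical data; MedCalc's exact confidence-interval methods may differ from this simple illustration):

```python
import numpy as np

# Hypothetical paired measurements: 'ref' is the reference method.
ref = np.array([10.2, 12.5, 15.1, 18.3, 22.0])
y   = np.array([10.8, 12.1, 15.9, 19.0, 23.1])

diff   = y - ref
n      = len(diff)
mean_d = diff.mean()        # systematic difference (bias)
sd_d   = diff.std(ddof=1)   # SD of the differences

# Limits of agreement: mean difference +/- 1.96 SD
loa_low, loa_high = mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

# Regression of the differences against the reference value
slope, intercept = np.polyfit(ref, diff, 1)

# Absolute percentage error: 100 * ABS[(y - ref)/ref]
ape     = 100 * np.abs(y - ref) / ref
mdape   = np.median(ape)                # median APE (MdAPE)
ape_p95 = np.percentile(ape, 95)        # 95th percentile APE
```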


The graphical display consists of multiple frames with Bland & Altman plots using the selected options.

Multiple method comparison graph (Bland-Altman plots)

Unlike other MedCalc graphs, this graphical display has limited editing possibilities:

  • you cannot add reference lines, or draw boxes, arrows or text frames.
  • the scaling of the axes is the same in each frame.
  • the titles are the same for each frame. To create variations in the titles for the different frames, you can use the following symbolic fields:
    • !#!: this will insert the frame number in the title
    • !a!: this will insert a character A, B, ..., corresponding to the frame number, in the title
    • !var!: this will insert the variable name in the title

Confidence intervals

Optionally, confidence intervals may be displayed for the average difference and for the limits of agreement. These confidence intervals can be represented as error bars or horizontal lines. Right-click on the error bar to set formatting options.
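Following the Bland & Altman (1999) approach, these confidence intervals can be sketched as follows (hypothetical data; the t quantile is hardcoded here for self-containment, where a statistics library would normally supply it):

```python
import numpy as np

# Same hypothetical paired data as above.
ref = np.array([10.2, 12.5, 15.1, 18.3, 22.0])
y   = np.array([10.8, 12.1, 15.9, 19.0, 23.1])

diff   = y - ref
n      = len(diff)
mean_d = diff.mean()
sd_d   = diff.std(ddof=1)

# t quantile for a 95% CI with n - 1 = 4 degrees of freedom
# (in practice: scipy.stats.t.ppf(0.975, n - 1))
t_crit = 2.776

# 95% CI of the mean difference: mean +/- t * SD / sqrt(n)
se_mean = sd_d / np.sqrt(n)
ci_mean = (mean_d - t_crit * se_mean, mean_d + t_crit * se_mean)

# Approximate 95% CI of a limit of agreement (Bland & Altman, 1999):
# Var(LoA) ~= SD^2 * (1/n + 1.96^2 / (2 * (n - 1)))
se_loa     = sd_d * np.sqrt(1 / n + 1.96**2 / (2 * (n - 1)))
loa_low    = mean_d - 1.96 * sd_d
ci_loa_low = (loa_low - t_crit * se_loa, loa_low + t_crit * se_loa)
```

If the line of equality (difference 0, or ratio 1) falls inside the CI of the mean difference, as it does for this small sample, the systematic difference is not statistically significant.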


  • Bland JM, Altman DG (1986) Statistical methods for assessing agreement between two methods of clinical measurement. The Lancet i:307-310. PubMed
  • Bland JM, Altman DG (1995) Comparing methods of measurement: why plotting difference against standard method is misleading. The Lancet 346:1085-1087. PubMed
  • Bland JM, Altman DG (1999) Measuring agreement in method comparison studies. Statistical Methods in Medical Research 8:135-160. PubMed
  • Hanneman SK (2008) Design, analysis, and interpretation of method-comparison studies. AACN Advanced Critical Care 19:223-234. PubMed
  • Krouwer JS (2008) Why Bland-Altman plots should use X, not (Y+X)/2 when X is a reference method. Statistics in Medicine 27:778-780. PubMed
