
Multiple regression



Multiple regression is a statistical method used to examine the relationship between one dependent variable Y and one or more independent variables Xi. The regression parameters or coefficients bi in the regression equation

$$ Y = b_0 + b_1 X_1 + b_2 X_2 + b_3 X_3 + ... + b_k X_k $$

are estimated using the method of least squares. In this method, the sum of squared residuals between the regression plane and the observed values of the dependent variable is minimized. The regression equation represents a (hyper)plane in a k+1 dimensional space, where k is the number of independent variables X1, X2, X3, ... Xk, plus one dimension for the dependent variable Y.
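As a sketch of this least-squares estimation (using invented example values, not real data), the coefficients b0, b1, ..., bk can be obtained by solving the normal equations numerically:

```python
import numpy as np

# Hypothetical illustration: n = 6 observations, k = 2 independent variables
# (all values invented purely for the example).
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 6.0],
              [6.0, 5.0]])
Y = np.array([3.1, 3.9, 7.2, 8.1, 11.0, 12.1])

# Prepend a column of ones so that the intercept b0 is estimated as well.
A = np.column_stack([np.ones(len(Y)), X])

# Least squares minimizes the sum of squared residuals ||A b - Y||^2.
b, *_ = np.linalg.lstsq(A, Y, rcond=None)

Y_est = A @ b            # values predicted by the regression equation
residuals = Y - Y_est    # observed minus predicted
```

Because the model includes an intercept, the residuals of a least-squares fit sum to zero (up to floating-point precision).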

Required input

The following need to be entered in the Multiple regression dialog box:

Multiple regression - dialog box

Dependent variable

The variable whose values you want to predict.

Independent variables

Select at least one variable that you expect to influence or predict the value of the dependent variable. Independent variables are also called predictor variables or explanatory variables.


Weights

Optionally select a variable containing relative weights that should be given to each observation (for weighted multiple least-squares regression). Select the dummy variable "*** AutoWeight 1/SD^2 ***" for an automatic weighted regression procedure to correct for heteroscedasticity (Neter et al., 1996). This dummy variable appears as the first item in the drop-down list for Weights.
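The idea behind weighting by 1/SD^2 can be sketched as follows (the data, error standard deviations, and weights here are all hypothetical): each observation is scaled by the square root of its weight, and ordinary least squares is applied to the scaled data.

```python
import numpy as np

# Hypothetical heteroscedastic data: the error SD grows with X,
# so later observations are less reliable.
rng = np.random.default_rng(0)
X = np.linspace(1, 10, 20)
sd = 0.5 * X                           # assumed per-observation error SDs
Y = 2.0 + 3.0 * X + rng.normal(0, sd)  # invented "observed" values

w = 1.0 / sd**2                        # weights = 1 / SD^2
A = np.column_stack([np.ones_like(X), X])

# Weighted least squares = ordinary least squares on sqrt(w)-scaled rows.
sw = np.sqrt(w)
b, *_ = np.linalg.lstsq(A * sw[:, None], Y * sw, rcond=None)
```

This down-weights the observations with the largest error variance, which is the correction for heteroscedasticity that the AutoWeight option automates.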


Filter

Optionally enter a data filter to include only a selected subgroup of cases in the analysis.



Results

After clicking OK, the following results are displayed in the results window:

Multiple regression - results

In the results window, the following statistics are displayed:

Sample size: the number of data records n

Coefficient of determination R2: this is the proportion of the variation in the dependent variable explained by the regression model, and is a measure of the goodness of fit of the model. It can range from 0 to 1, and is calculated as follows:

$$ R^2 = \frac{explained\ variation}{total\ variation} = \frac{\sum{(Y_{est}-\bar{Y})^2}}{\sum{(Y-\bar{Y})^2}} $$

where Y are the observed values for the dependent variable, $\bar{Y}$ is the average of the observed values and Yest are predicted values for the dependent variable (the predicted values are calculated using the regression equation).
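The calculation of R2 can be illustrated with a few hypothetical observed and predicted values (invented for the example):

```python
import numpy as np

# Hypothetical observed values Y and predicted values Y_est.
Y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
Y_est = np.array([3.2, 4.8, 7.1, 9.0, 10.9])

explained = np.sum((Y_est - Y.mean())**2)  # explained variation
total = np.sum((Y - Y.mean())**2)          # total variation
r_squared = explained / total
```

Here the predicted values track the observed values closely, so R2 is close to 1.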

R2-adjusted: this is the coefficient of determination adjusted for the number of independent variables in the regression model. Unlike the coefficient of determination, R2-adjusted may decrease if variables are entered in the model that do not add significantly to the model fit.

$$ R^2_{adj} = 1 - \frac{unexplained\ variation / (n-k-1)}{total\ variation / (n-1)} $$


or, equivalently:

$$ R^2_{adj} = 1 - \frac{\sum{(Y-Y_{est})^2}}{\sum{(Y-\bar{Y})^2}} \cdot \frac{n-1}{n-k-1} $$
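Continuing the same hypothetical values, the adjusted R2 penalizes the fit for the number of independent variables k:

```python
import numpy as np

# Same hypothetical observed and predicted values as above.
Y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
Y_est = np.array([3.2, 4.8, 7.1, 9.0, 10.9])
n, k = len(Y), 2   # assume the model used k = 2 independent variables

unexplained = np.sum((Y - Y_est)**2)  # unexplained (residual) variation
total = np.sum((Y - Y.mean())**2)     # total variation
r2_adj = 1 - (unexplained / (n - k - 1)) / (total / (n - 1))
```

Adding a variable that explains little extra variation increases k without reducing the unexplained variation much, so r2_adj can decrease even though R2 never does.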

Multiple correlation coefficient: this coefficient is a measure of how tightly the data points cluster around the regression plane, and is calculated by taking the square root of the coefficient of determination.

When discussing multiple regression analysis results, generally the coefficient of multiple determination is used rather than the multiple correlation coefficient.

Residual standard deviation: the standard deviation of the residuals (residuals = differences between observed and predicted values). It is calculated as follows:

$$ s_{res} = \sqrt{\frac{\sum{(Y-Y_{est})^2}}{n-k-1}} $$
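Using the same hypothetical values once more, the residual standard deviation is:

```python
import numpy as np

# Hypothetical observed and predicted values; k = 2 independent variables.
Y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
Y_est = np.array([3.2, 4.8, 7.1, 9.0, 10.9])
n, k = len(Y), 2

# Standard deviation of the residuals, with n - k - 1 degrees of freedom.
s_res = np.sqrt(np.sum((Y - Y_est)**2) / (n - k - 1))
```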

The regression equation: for each independent variable, the regression coefficient bi with its standard error sbi, t-value, P-value, and the partial and semipartial correlation coefficients rpartial and rsemipartial.
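A sketch of how such a coefficient table can be computed for an ordinary least-squares fit (hypothetical one-predictor data; the formulas are the standard OLS ones, not necessarily the exact implementation used here):

```python
import numpy as np
from scipy import stats

# Hypothetical data: k = 1 independent variable plus an intercept.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([3.1, 4.9, 7.2, 8.8, 11.0])
n, k = len(Y), 1

A = np.column_stack([np.ones(n), X])
b, *_ = np.linalg.lstsq(A, Y, rcond=None)
resid = Y - A @ b

s2 = resid @ resid / (n - k - 1)          # residual variance
cov = s2 * np.linalg.inv(A.T @ A)         # covariance matrix of b
se = np.sqrt(np.diag(cov))                # standard errors s_bi
t = b / se                                # t-values
p = 2 * stats.t.sf(np.abs(t), n - k - 1)  # two-sided P-values
```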

Variables not included in the model: this table lists any independent variables that could not be included in the model, together with the reason for their exclusion.

Analysis of variance: the analysis of variance table divides the total variation in the dependent variable into two components, one which can be attributed to the regression model (labeled Regression) and one which cannot (labeled Residual). If the significance level for the F-test is small (less than 0.05), then the hypothesis that there is no (linear) relationship can be rejected, and the multiple correlation coefficient can be called statistically significant.
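The ANOVA decomposition and F-test can be sketched as follows (hypothetical data fitted by least squares, so the identity total variation = regression + residual holds exactly):

```python
import numpy as np
from scipy import stats

# Hypothetical one-predictor example (k = 1), fitted by least squares.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([3.1, 4.9, 7.2, 8.8, 11.0])
n, k = len(Y), 1

A = np.column_stack([np.ones(n), X])
b, *_ = np.linalg.lstsq(A, Y, rcond=None)
Y_est = A @ b

ss_reg = np.sum((Y_est - Y.mean())**2)  # attributed to the regression model
ss_res = np.sum((Y - Y_est)**2)         # residual (unexplained)
ss_tot = np.sum((Y - Y.mean())**2)      # total variation

F = (ss_reg / k) / (ss_res / (n - k - 1))
p = stats.f.sf(F, k, n - k - 1)         # significance level of the F-test
```

A small P-value (less than 0.05) leads to rejection of the hypothesis that there is no linear relationship.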

Zero-order and simple correlation coefficients: this optional table shows the correlation coefficients between the dependent variable (Y) and all independent variables Xi separately, and between all independent variables.

Analysis of residuals

Multiple linear regression analysis assumes that the residuals (the differences between the observations and the estimated values) follow a Normal distribution. This assumption can be evaluated with a formal test, or by means of graphical methods.

The different formal tests for Normal distribution may not have enough power to detect deviation from the Normal distribution when the sample size is small. On the other hand, when the sample size is large, the requirement of a Normal distribution is less stringent because of the central limit theorem.

Therefore, it is often preferred to visually evaluate the symmetry and peakedness of the distribution of the residuals using the Histogram, Box-and-whisker plot, or Normal plot.
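As a complement to the graphical methods, a formal Normality test on the saved residuals can be sketched like this (the residual values below are simulated, purely for illustration; the Shapiro-Wilk test is one common choice of formal test):

```python
import numpy as np
from scipy import stats

# Simulated stand-in for residuals saved from a regression analysis.
rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 1.0, size=30)

# Shapiro-Wilk test: a small P-value (e.g. < 0.05) indicates that the
# residuals deviate from a Normal distribution.
stat, p = stats.shapiro(residuals)
```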

To do so, click the hyperlink "Save residuals" in the results window. This saves the residual values as a new variable in the spreadsheet, which you can then use in the different distribution plots.

Repeat procedure

To repeat the Multiple regression procedure, for example to add or remove variables in the model, simply press function key F7. The dialog box will reappear with the previous entries and selections (see Recall dialog).



Recommended book


Statistical Methods in Medical Research
Peter Armitage, Geoffrey Berry, J. N. S. Matthews


Although more comprehensive and mathematical than the books by Douglas Altman and Martin Bland, "Statistical Methods in Medical Research" presents statistical techniques frequently used in medical research in an understandable format.