Logistic regression
Description

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. The outcome is measured with a dichotomous variable, i.e. one with only two possible values: the dependent variable is binary and contains only data coded as 1 (TRUE, success, pregnant, etc.) or 0 (FALSE, failure, non-pregnant, etc.).

The goal of logistic regression is to find the best fitting (yet biologically reasonable) model to describe the relationship between the dichotomous characteristic of interest (dependent variable = response or outcome variable) and a set of independent (predictor or explanatory) variables. Logistic regression generates the coefficients (with their standard errors and significance levels) of a formula to predict a logit transformation of the probability of presence of the characteristic of interest:

logit(p) = b_{0} + b_{1}X_{1} + b_{2}X_{2} + ... + b_{k}X_{k}

where p is the probability of presence of the characteristic of interest. The logit transformation is defined as the logged odds:

odds = p / (1 - p)

and

logit(p) = ln( p / (1 - p) )

Rather than choosing parameters that minimize the sum of squared errors (as in ordinary regression), estimation in logistic regression chooses parameters that maximize the likelihood of observing the sample values (a minimal computational sketch is given below, after the Required input section).

Required input

Dependent variable
The variable whose values you want to predict. The dependent variable must be binary or dichotomous, and should only contain data coded as 0 or 1. If your data are coded differently, you can use the Define status tool to recode your data.

Independent variables
Select the different variables that you expect to influence the dependent variable.

Filter
(Optionally) enter a data filter in order to include only a selected subgroup of cases in the analysis.
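The maximum likelihood estimation step described above can be reproduced outside MedCalc for readers who want to experiment. The following is a minimal sketch in Python using the statsmodels library; the dataset and the variable names (AGE, SMOKING, OUTCOME) are hypothetical and purely illustrative.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: OUTCOME coded 0/1, SMOKING coded 0/1, AGE in years.
df = pd.DataFrame({
    "AGE":     [25, 31, 38, 40, 45, 47, 52, 55, 61, 66],
    "SMOKING": [ 0,  1,  0,  1,  0,  1,  1,  0,  1,  0],
    "OUTCOME": [ 0,  0,  1,  1,  0,  1,  0,  1,  1,  1],
})

X = sm.add_constant(df[["AGE", "SMOKING"]])   # adds the intercept term b_0
model = sm.Logit(df["OUTCOME"], X).fit()      # maximum likelihood estimation

print(model.summary())       # coefficients, standard errors, P-values
print(np.exp(model.params))  # odds ratios e^{b_i} for each coefficient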
Results

After you click the OK button, the following results are displayed:

Sample size and cases with negative and positive outcome

First the program gives the sample size and the number and proportion of cases with a negative (Y=0) and a positive (Y=1) outcome.

Overall model fit

The null model -2 Log Likelihood is given by -2 * ln(L_{0}), where L_{0} is the likelihood of obtaining the observations if the independent variables had no effect on the outcome. The full model -2 Log Likelihood is given by -2 * ln(L), where L is the likelihood of obtaining the observations with all independent variables incorporated in the model. The difference of these two yields a Chi-squared statistic, which is a measure of how well the independent variables explain the outcome or dependent variable. If the P-value for the overall model fit statistic is less than the conventional 0.05, there is evidence that at least one of the independent variables contributes to the prediction of the outcome.

Cox & Snell R^{2} and Nagelkerke R^{2} are other goodness-of-fit measures known as pseudo R-squareds. Note that Cox & Snell's pseudo R-squared has a maximum value that is not 1. Nagelkerke R^{2} adjusts Cox & Snell's so that the range of possible values extends to 1.

Regression coefficients

The regression coefficients are the coefficients b_{0}, b_{1}, b_{2}, ... b_{k} of the regression equation:

logit(p) = b_{0} + b_{1}X_{1} + b_{2}X_{2} + ... + b_{k}X_{k}

An independent variable with a regression coefficient not significantly different from 0 (P>0.05) can be removed from the regression model (press function key F7 to repeat the logistic regression procedure). If P<0.05 then the variable contributes significantly to the prediction of the outcome variable.

The logistic regression coefficients show the change (increase when b_{i}>0, decrease when b_{i}<0) in the predicted logged odds of having the characteristic of interest for a one-unit change in the independent variable. When the independent variables X_{a} and X_{b} are dichotomous variables (e.g. Smoking, Sex) then the influence of these variables on the dependent variable can simply be compared by comparing their regression coefficients b_{a} and b_{b}.

The Wald statistic is the square of the ratio of the regression coefficient to its standard error: (b/SE)^{2}.

Odds ratios with 95% CI

By taking the exponential of both sides of the regression equation as given above, the equation can be rewritten as:

p / (1 - p) = e^{b_{0}} * e^{b_{1}X_{1}} * e^{b_{2}X_{2}} * ... * e^{b_{k}X_{k}}

It is clear that when a variable X_{i} increases by 1 unit, with all other factors remaining unchanged, the odds will increase by a factor e^{b_{i}}. This factor e^{b_{i}} is the odds ratio (O.R.) for the independent variable X_{i}, and it gives the relative amount by which the odds of the outcome increase (O.R. greater than 1) or decrease (O.R. less than 1) when the value of the independent variable is increased by 1 unit.

E.g. the variable SMOKING is coded as 0 (= no smoking) and 1 (= smoking), and the odds ratio for this variable is 3.2. This means that in the model the odds for a positive outcome in cases that do smoke are 3.2 times higher than in cases that do not smoke.

Interpretation of the fitted equation

The logistic regression equation is:

logit(p) = -4.48 + 0.11 * AGE + 1.16 * SMOKING

So for a 40-year-old case who smokes, logit(p) = -4.48 + 0.11 * 40 + 1.16 * 1 = 1.08. Logit(p) can be back-transformed to p by the following formula:

p = 1 / (1 + e^{-logit(p)})

Alternatively, you can use the Logit table. For logit(p) = 1.08 the probability p of having a positive outcome equals 0.75.
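These two steps, evaluating logit(p) and back-transforming it, are easy to verify in a few lines of code. The Python sketch below reuses the coefficients of the fitted equation above; it is an illustration, not MedCalc output.

import math

# Coefficients of the fitted equation: logit(p) = -4.48 + 0.11*AGE + 1.16*SMOKING
b0, b_age, b_smoking = -4.48, 0.11, 1.16

age, smoking = 40, 1                              # a 40-year-old case who smokes
logit_p = b0 + b_age * age + b_smoking * smoking  # -4.48 + 4.40 + 1.16 = 1.08

p = 1 / (1 + math.exp(-logit_p))                  # back-transform logit(p) to p
print(round(logit_p, 2), round(p, 2))             # 1.08 0.75

# The odds ratio for SMOKING is e^{b}: the factor by which the odds change
# when SMOKING goes from 0 to 1, all other variables held constant.
print(round(math.exp(b_smoking), 1))              # 3.2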
Hosmer & Lemeshow test

The Hosmer-Lemeshow test is a statistical test of goodness of fit for the logistic regression model. The data are divided into approximately ten groups defined by increasing order of estimated risk. The observed and expected numbers of cases in each group are calculated, and a Chi-squared statistic is computed as follows:

X^{2} = Σ_{g=1}^{n} (O_{g} - E_{g})^{2} / ( E_{g} (1 - E_{g}/n_{g}) )

with O_{g}, E_{g} and n_{g} the observed events, expected events and number of observations for the g-th risk decile group, and n the number of groups. The test statistic follows a Chi-squared distribution with n-2 degrees of freedom. A large value of Chi-squared (with small P-value < 0.05) indicates poor fit, and small Chi-squared values (with larger P-value closer to 1) indicate a good logistic regression model fit. The Contingency Table for Hosmer and Lemeshow Test shows the details of the test, with the observed and expected numbers of cases in each group (a computational sketch of this statistic is given below, after the Sample size considerations section).

Classification table

The classification table is another method to evaluate the predictive accuracy of the logistic regression model. In this table the observed values for the dependent outcome and the predicted values (at a user-defined cut-off value, for example p = 0.50) are cross-classified. In our example, the model correctly predicts 70% of the cases.

ROC curve analysis

Another method to evaluate the logistic regression model makes use of ROC curve analysis. In this analysis, the power of the model's predicted values to discriminate between positive and negative cases is quantified by the Area under the ROC curve (AUC). The AUC, sometimes referred to as the c-statistic (or concordance index), is a value that varies from 0.5 (discriminating power no better than chance) to 1.0 (perfect discriminating power).

To perform a full ROC curve analysis on the predicted probabilities, you can save the predicted probabilities and then use this new variable in ROC curve analysis. The Dependent variable used in Logistic Regression then acts as the Classification variable in the ROC curve analysis dialog box.

Sample size considerations

Sample size calculation for logistic regression is a complex problem, but based on the work of Peduzzi et al. (1996) the following guideline for a minimum number of cases to include in your study can be suggested. Let p be the smallest of the proportions of negative or positive cases in the population and k the number of covariates (the number of independent variables); then the minimum number of cases to include is:

N = 10 k / p

For example: you have 3 covariates to include in the model and the proportion of positive cases in the population is 0.20 (20%). The minimum number of cases required is

N = 10 x 3 / 0.20 = 150

If the resulting number is less than 100 you should increase it to 100, as suggested by Long (1997).
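As announced above, here is a minimal sketch of the Hosmer-Lemeshow computation, assuming Python with numpy and scipy. It illustrates the formula given earlier and is not MedCalc's internal implementation; edge cases (e.g. groups where E_{g} is 0) are not handled.

import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic and P-value.

    y     : array of observed 0/1 outcomes
    p_hat : array of predicted probabilities from the fitted model
    """
    order = np.argsort(p_hat)                         # sort by estimated risk
    y, p_hat = np.asarray(y)[order], np.asarray(p_hat)[order]
    bins = np.array_split(np.arange(len(y)), groups)  # ~equal-size risk groups

    chi_sq = 0.0
    for idx in bins:
        n_g = len(idx)            # observations in group g
        o_g = y[idx].sum()        # observed events O_g
        e_g = p_hat[idx].sum()    # expected events E_g
        chi_sq += (o_g - e_g) ** 2 / (e_g * (1 - e_g / n_g))

    df = groups - 2               # n - 2 degrees of freedom
    return chi_sq, chi2.sf(chi_sq, df)

For example, hosmer_lemeshow(df["OUTCOME"], model.predict(X)) with the hypothetical fitted model from the earlier sketch returns the statistic and its P-value; a small P-value (< 0.05) flags poor fit, as described above.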
References

Long JS (1997) Regression Models for Categorical and Limited Dependent Variables. Thousand Oaks, CA: Sage Publications.

Peduzzi P, Concato J, Kemper E, Holford TR, Feinstein AR (1996) A simulation study of the number of events per variable in logistic regression analysis. Journal of Clinical Epidemiology 49:1373-1379.
