Evidence-Based Practice for Nurses


Multiple Regression


A change in one variable is usually the result of many factors. Thus, when
researchers want to study the relationship of many independent variables to
one dependent variable, they use multiple regression analysis (Hayes, 1994;
Plichta & Kelvin, 2013). Such calculations have become far more practical
with computers, which can perform the many simultaneous computations
involved. Like Pearson’s r, multiple regression is used when
variables are measured at the interval or ratio level. For example, suppose a
researcher wants to determine which factors best predict an anorexic adolescent’s
success at maintaining a weight in the normal range. Independent variables
might include self-esteem, social support, anxiety, and locus of control. In this
situation, a multiple regression will be performed.
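To make the idea concrete, the following is a minimal sketch of fitting a multiple regression in pure Python. The predictor names follow the anorexia example above, but all scores are invented for illustration and are not from any real study; the ordinary least-squares calculation is the standard one.

```python
# Multiple-regression sketch in pure Python (no external libraries).
# Predictor names follow the text's anorexia example; the scores below
# are invented purely for illustration.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def solve(a, b):
    """Solve the square system a @ x = b by Gauss-Jordan elimination."""
    n = len(a)
    m = [row[:] + [b[i][0]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ols(X, y):
    """Ordinary least-squares coefficients (intercept first)."""
    Xd = [[1.0] + row for row in X]            # design matrix with intercept
    Xt = transpose(Xd)
    xtx = matmul(Xt, Xd)                       # X'X
    xty = matmul(Xt, [[v] for v in y])         # X'y
    return solve(xtx, xty)                     # (X'X)^-1 X'y

# Invented scores: self-esteem, social support, anxiety, locus of control
X = [[30, 5, 12, 7], [25, 4, 15, 6], [40, 6, 8, 9],
     [35, 5, 10, 8], [20, 3, 18, 5], [45, 7, 6, 10]]
y = [2.1, 1.5, 3.4, 2.8, 0.9, 3.9]             # weight change (kg), invented

b0, b_esteem, b_support, b_anxiety, b_locus = ols(X, y)
print(f"intercept={b0:.3f}, self-esteem coefficient={b_esteem:.3f}")
```

Each coefficient estimates how much the outcome changes per one-unit change in that predictor, holding the other predictors constant; statistical software reports the same coefficients along with significance tests.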


There are different approaches to performing multiple regression based on
the way in which the predictor variables are entered into the analysis. One
common approach to multiple regression is known as step-wise. This approach
is used to find the smallest number of independent variables that account for
the greatest proportion of variance in the outcome variable (Pedhazur, 1982).
For example, in a study of adolescents with anorexia, a researcher might find
that self-esteem, anxiety, and locus of control account for 24% of the variance
in weight gain and that social support does not make any significant difference
in weight gain. Researchers can use another approach to multiple regression
known as hierarchical regression. This approach is typically used when the
importance of variables has been specified in theories. For example, suppose
it is proposed in a theory about anorexia in adolescents that locus of control is
the most important factor, followed by self-esteem and then anxiety. Based on
this theory, the researcher would specify the order in which the independent
variables are entered into the equation. As in step-wise multiple regression,
the amount of variance that is significant is reported.
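A hierarchical entry can be sketched as follows. Locus of control is entered first (as the theory above specifies), then self-esteem, and the change in R² between the two blocks is reported. The data are invented; the two-predictor R² formula is the standard one built from pairwise Pearson correlations.

```python
# Hierarchical-entry sketch: enter locus of control first (per theory),
# then self-esteem, and report the change in R^2.  All scores are
# invented for illustration.

from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Invented scores for six adolescents
locus  = [7, 6, 9, 8, 5, 10]
esteem = [32, 24, 38, 35, 22, 44]
weight = [2.1, 1.5, 3.4, 2.8, 0.9, 3.9]

r_y1 = pearson_r(locus, weight)     # outcome with block-1 predictor
r_y2 = pearson_r(esteem, weight)    # outcome with block-2 predictor
r_12 = pearson_r(locus, esteem)     # correlation between predictors

# Block 1: locus of control alone; Block 2: add self-esteem
r2_block1 = r_y1 ** 2
r2_block2 = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)

print(f"Block 1 R^2 = {r2_block1:.3f}")
print(f"Block 2 R^2 = {r2_block2:.3f}  (change = {r2_block2 - r2_block1:.3f})")
```

The R² change for each block is what the researcher reports: it shows how much additional variance each theoretically later variable explains beyond the variables already entered.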


Other Tests of Significance


Because not all variables are measured at the interval or ratio level, there are
other tests of significance that can determine whether changes in variables
are significant (Hayes, 1994; Plichta & Kelvin, 2013). When nominal data are
involved, statistics such as phi coefficients, point biserials, and contingency
coefficients are reported. Researchers use Kendall’s tau, Spearman’s rho, and
discriminant function analysis to analyze ordinal-level data. There are also very
sophisticated methods for testing and predicting the strength and direction of
relationships among multiple variables. These analytic methods, such as linear
structural relationships or structural equation modeling, are useful ways to use
data to test theories.
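As one example of an ordinal-level statistic, Spearman’s rho can be computed by hand from ranked data. The sketch below uses the standard rank-difference formula (which assumes no tied ranks); the ratings are invented for illustration.

```python
# Spearman's rho sketch for ordinal data (pure Python, no tied ranks).
# The ratings below are invented: six patients ranked on pain severity
# and on functional limitation.

def ranks(values):
    """Rank each value from 1 (smallest) to n (largest)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Rank-difference formula: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

pain       = [3, 1, 4, 2, 6, 5]   # invented ordinal ratings
limitation = [2, 1, 5, 3, 6, 4]

print(f"rho = {spearman_rho(pain, limitation):.3f}")   # → rho = 0.886
```

A rho near +1 means the two rankings largely agree, near −1 that they are reversed, and near 0 that they are unrelated, which parallels the interpretation of Pearson’s r for interval-level data.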


KEY TERM
multiple regression: Inferential statistical test that describes the relationship
of three or more variables

13.8 Using Statistical Tests to Make Inferences About Populations 369