
When undertaking exploratory factor analysis (EFA), this study made use of a traditional statistical
software package (SPSS) for the statistical analyses (Pett et al., 2003). A Varimax rotation with Kaiser
Normalization was used to identify the items with strong and unique loadings on the components
identified in the analysis (Anderson et al., 1995). Simple statistics (means and standard deviations)
were calculated for all subscales. The subscales were assessed for their internal reliability using the
SPSS scale reliability (Cronbach's alpha) function. Lastly, the scales were correlated in order to
examine the strength of the relationships between the identified strategic leadership dimensions.
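
The analyses themselves were run in SPSS. Purely for illustration, a minimal sketch of the same pipeline in Python is given below, assuming the 16 items are columns of a pandas DataFrame and that the third-party factor_analyzer package is available. The column names, the number of factors, the subscale groupings, and the random data are hypothetical placeholders standing in for the actual survey responses.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party EFA package (assumed available)

rng = np.random.default_rng(0)
# Placeholder for the real survey data: 513 respondents, 16 items
items = pd.DataFrame(rng.normal(size=(513, 16)),
                     columns=[f"item_{i+1:02d}" for i in range(16)])

# EFA with Varimax rotation; the number of factors here is illustrative only
fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Internal reliability: alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical subscale groupings; the real item-to-dimension assignment follows from the loadings
subscales = {"dim_A": items.columns[:4], "dim_B": items.columns[4:8],
             "dim_C": items.columns[8:12], "dim_D": items.columns[12:]}

# Means, standard deviations and internal reliability per subscale
for name, cols in subscales.items():
    sub = items[list(cols)]
    print(name, round(sub.stack().mean(), 2), round(sub.stack().std(), 2),
          round(cronbach_alpha(sub), 2))

# Correlations between subscale (mean) scores
scores = pd.DataFrame({name: items[list(cols)].mean(axis=1) for name, cols in subscales.items()})
print(scores.corr().round(2))
```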


6.3.1 FACTORIAL STRUCTURE
A first step consisted of testing whether the data met the criteria for conducting a factor analysis. For
instance, the number of subjects needed to undertake a factor analysis depends on the number of items
that are initially included. There is very little agreement among the authorities on factor analysis
regarding acceptable sample size. A frequently used criterion is the relative number indicated by the
subjects-to-variables ratio. Recommendations for the subject-to-variable ratio range from 2:1 to 10:1,
and the generally accepted minimum for reaching a stable factor structure is 5:1 (Ferguson & Cox, 1993).
The validation of this instrument containing 16 items was performed using data from 513 individuals
about 80 managers. Hence the subject-to-variable ratio for this study (32:1) largely surpassed the most
stringent criterion of 10:1.
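
As a trivial arithmetic check of the figures above (a sketch for illustration, not part of the original analysis):

```python
# Subjects-to-variables ratio for 513 respondents and 16 items
n_subjects, n_items = 513, 16
ratio = n_subjects / n_items
print(f"ratio = {ratio:.0f}:1")  # ~32:1
assert ratio >= 10  # exceeds the most stringent 10:1 criterion (and the 5:1 minimum)
```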


The distribution of the 16 items was checked for range restriction and multivariate normality. The
descriptive statistics confirmed a normal distribution. Multivariate normality implies that all of the
items being considered and the linear combinations of those items are normally distributed, which is
an assumption underlying exploratory factor analysis (Pett et al., 2003). The correlation matrix was
examined in order to identify items that were either too highly correlated or not correlated (ρ < .30)
with one another. If items were too highly correlated, there was a problem of multicollinearity
and one or more of the highly correlated items needed to be dropped from the analysis. If the items
were not correlated strongly enough, there was not much shared common variance, thus potentially
yielding as many factors as items. The correlation matrix was examined carefully, and items that were
insufficiently correlated with any of the other items in the matrix (i.e. ρ < .30) were to be eliminated.
Table 6.2 shows the correlation matrix, with the items with ρ < .30 colored grey and the items that were
too highly correlated colored red. It can be concluded that there were no items too highly correlated
with each other (i.e. no red items), nor was there an item that did not correlate with any of the other
items (which would have meant an entirely grey row).
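
A minimal sketch of this screening step is given below, again assuming the 16 items sit in a pandas DataFrame (such as the placeholder `items` from the earlier sketch). The .30 lower bound follows the text; the upper cutoff used to flag multicollinearity is an assumed placeholder (a value of .90 is commonly used), as the exact cutoff is not restated here.

```python
import numpy as np
import pandas as pd

def screen_correlations(items: pd.DataFrame, low: float = 0.30, high: float = 0.90):
    """Flag item pairs that are too highly correlated (possible multicollinearity)
    and items that fail to reach `low` with any other item.
    The `high` cutoff of .90 is an assumed, commonly used value."""
    corr = items.corr().abs()
    corr = corr.where(~np.eye(len(corr), dtype=bool))  # blank out the diagonal

    too_high = [(i, j) for i in corr.index for j in corr.columns
                if i < j and corr.loc[i, j] >= high]        # candidates for removal ("red" cells)
    uncorrelated = [i for i in corr.index
                    if (corr.loc[i].dropna() < low).all()]  # items sharing too little variance (all-"grey" row)
    return corr, too_high, uncorrelated

# corr, too_high, uncorrelated = screen_correlations(items)
# print(too_high)       # expected to be empty: no items too highly correlated
# print(uncorrelated)   # expected to be empty: no item uncorrelated with all others
```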
