from 0 to 100. The bars on the left represent the mean (±SE) test score for independent
groups of psychology majors who took the test before beginning classes or during their
midcareer assessment (summarizing independent group differences), and the bars on the
right represent scores for students who took the test both as incoming freshmen and as
midcareer students (summarizing dependent group differences). In both cases, the mid-
career psychology students performed better than the incoming psychology students. This
difference in performance was confirmed by an independent-samples t test for the inde-
pendent groups, t(65) = −2.49, p = .015, d = .68, but for the smaller subset of students in
which we could link scores over time a dependent-samples t test failed to confirm a statis-
tically significant difference for the repeated group, t(21) = −.99, p = .33, d = .25.
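For readers who wish to see how these two contrasts are typically computed, the following is a minimal sketch in Python using numpy and scipy. The score arrays are randomly generated placeholders, not the actual JMU data; the group sizes are chosen only to mirror the degrees of freedom reported above.

import numpy as np
from scipy import stats

def cohens_d_independent(a, b):
    # Pooled-standard-deviation Cohen's d for two independent groups.
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (b.mean() - a.mean()) / pooled_sd

def cohens_d_paired(pre, post):
    # Cohen's d for repeated measures: mean difference / SD of the differences.
    diff = post - pre
    return diff.mean() / diff.std(ddof=1)

# Placeholder (random) data, sized to match the reported degrees of freedom.
rng = np.random.default_rng(1)
freshman = rng.normal(loc=60, scale=15, size=45)   # hypothetical incoming-student scores
midcareer = rng.normal(loc=70, scale=15, size=22)  # hypothetical midcareer scores

t_ind, p_ind = stats.ttest_ind(freshman, midcareer)   # df = 45 + 22 - 2 = 65
print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}, "
      f"d = {cohens_d_independent(freshman, midcareer):.2f}")

pre = rng.normal(loc=60, scale=15, size=22)        # hypothetical linked pre/post scores
post = pre + rng.normal(loc=3, scale=12, size=22)
t_rel, p_rel = stats.ttest_rel(pre, post)              # df = 22 - 1 = 21
print(f"dependent:   t = {t_rel:.2f}, p = {p_rel:.3f}, "
      f"d = {cohens_d_paired(pre, post):.2f}")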
The significantly higher scores of midcareer students are consistent with an increase in
scientific reasoning among our psychology majors over their first two years at JMU.
However, there are several caveats. First, this difference was significant only for the inde-
pendent groups; data from the repeated group, which represents actual growth over time
for a group of students, failed to reach statistical significance (although this could be a
function of the small sample size, n = 22). Second, there are a number of reasons, unrelated
to scientific reasoning skills, that midcareer students may do better (e.g.,
student maturation, number of courses completed, the loss from the university of those
students with the lowest aptitude for science and mathematics).
We did not confirm a statistically significant improvement within the small group of
individuals who repeated the test. However, the positive direction of change in the
dependent group’s scores and the statistically higher scores of midcareer students in the
independent group together support the hypothesis that our psychology majors’ scientific
reasoning does improve over their first two years at JMU, with effects ranging from small
(d = .25) to moderate (d = .68). Furthermore, to the extent that scientific reasoning is related to critical thinking,
these data support the assertion that our students’ critical thinking skills improve over
that time.
As with the data from the CTCT, faculty who teach courses in Cluster Three of the general
education program receive data from the NW assessment, and they have used this infor-
mation to make changes in coursework to better address students’ needs in this area.
Additionally, faculty have continued to refine the instrument used to assess scientific
reasoning; they currently administer version 8 of the NW test.
Overall, the current data are encouraging, and we view these assessments from our
general education program as informative. Critical thinking is not the province of psy-
chology alone, and to the extent that other sectors of our university curriculum address
these issues, assessments of those experiences can provide us with meaningful data. By
looking beyond our specific psychology curriculum we get a more complete picture of
our psychology students’ KSAs. Furthermore, it may be possible to use these more gen-
eral tests to examine specific components of the psychology curriculum. For example, the
NW test may be an appropriate tool for assessing our students’ skills before and after
completing our statistics and research methods course sequence. Thus, although designed
to assess more general skills, these instruments may have utility that is specific to the
psychology major.