thing? That is, are the items highly correlated with each other? The preferred method of assessing internal consistency reliability involves computing the average of all possible split-half correlations for a given test (Cronbach’s alpha).
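
To make this concrete, here is a minimal Python sketch of Cronbach’s alpha using the standard variance formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). The score matrix and function name are hypothetical illustrations, not data from the chapter.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering a 4-item scale
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

Values closer to 1.0 indicate that the items covary strongly and are plausibly measuring the same construct.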
Another aspect of reliability, interrater or interjudge reliability, was discussed in the previous chapter in the context of interview assessment. The goal here is to demonstrate that independent observers can agree about their ratings or judgments of some particular aspect of the person’s behavior.
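
Table 7-1 below lists kappa as one statistical index of interrater agreement. The following sketch, with invented ratings, shows the chance-corrected computation for Cohen’s kappa: kappa = (observed agreement - expected chance agreement) / (1 - expected chance agreement).

```python
import numpy as np

def cohens_kappa(rater_a, rater_b) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    categories = np.union1d(a, b)
    observed = np.mean(a == b)  # proportion of cases where the raters agree
    # Chance agreement: product of each rater's marginal proportions, summed
    expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical judgments (1 = feature present, 0 = absent) from two judges
judge1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
judge2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(f"kappa = {cohens_kappa(judge1, judge2):.2f}")
```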
Regardless of the kind of reliability in question, the goal is to demonstrate consistency in the data. A test must be able to provide evidence that the scores it yields are consistent over time and over examiners or are otherwise reliable (as in split-half or equivalent-forms reliability). Without reliability, consistency, or stability of measurement, a test cannot be valid. However, even though a test shows reliability, this does not automatically imply validity. For example, a test involving the ability to discriminate among colors may produce scores that are highly reliable over time, yet not be a valid measure of intelligence.

Validity. In general, validity refers to the extent to which an assessment technique measures what it is supposed to measure. As with reliability, there are several forms of validity. Content validity indicates the degree to which a group of test items actually covers the various aspects of the variable under study. For example, a test that purported to measure

TABLE 7-1 Common Types of Reliability That Are Assessed to Evaluate Psychological Tests

Type of Reliability | Definition | Statistical Index
Test–retest reliability | Index of the consistency of test scores across some period of time | Pearson’s r; intraclass correlation
Equivalent forms reliability | Index of the consistency of test scores across time; not vulnerable to a “practice effect” (not repeating the same test) | Pearson’s r
Split-half reliability | Index of the internal consistency of the test (do the items seem to be measuring the same variable or construct?) | Pearson’s r
Internal consistency reliability | Preferred index of internal consistency, in which the average of all possible split-half correlations is computed | Cronbach’s alpha; Kuder-Richardson-20
Interrater or interjudge reliability | Index of the degree of agreement between two or more raters or judges as to the level of a trait that is present or the presence/absence of a feature or diagnosis | Pearson’s r; intraclass correlation; kappa
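
As a worked illustration of the split-half entry above, the sketch below correlates odd- and even-numbered half scores with Pearson’s r and then applies the Spearman-Brown correction, a standard adjustment for halving the test length (the correction itself is not mentioned in the table). The data are hypothetical.

```python
import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """Correlate odd- and even-item half scores (Pearson's r),
    then apply the Spearman-Brown correction for full test length."""
    odd = items[:, 0::2].sum(axis=1)   # total score on odd-numbered items
    even = items[:, 1::2].sum(axis=1)  # total score on even-numbered items
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)   # Spearman-Brown correction

# Hypothetical data: 5 respondents answering a 4-item test
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"corrected split-half r = {split_half_reliability(scores):.3f}")
```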

TABLE 7-2 Common Types of Validity That Are Assessed to Evaluate Psychological Tests

Type of Validity | Definition
Content validity | The degree to which the test items adequately measure the various aspects of the variable or construct.
Predictive validity | The degree to which test scores can predict (correlate with) behavior or test scores that are observed or obtained at some point in the future.
Concurrent validity | The extent to which test scores are correlated with a related, but independent, set of test scores or behaviors.
Construct validity | The extent to which test scores are correlated with other measures or behaviors in a logical and theoretically consistent way. This will involve a demonstration of both convergent and discriminant validity.

NOTE: Predictive and concurrent validity are both subtypes of criterion-related validity.

