Statistical Analysis for Education and Psychology Researchers

indeed returned blank questionnaires for certain individuals. It is therefore sensible to
spend time at the planning stage considering strategies to ensure complete responses, and
then to carry out a pilot study.
If there are problems with the specific method that generates the data, such as
ambiguous questions, then the data will not be valid. That is, the data will not be
trustworthy because we have not measured what we think we have measured. In this case
the questionnaire is said to have poor construct validity. Messick (1989) suggests that
construct validity encompasses three other forms of validity often referred to in the
measurement literature as content, concurrent and predictive validity. A questionnaire
survey that has dubious construct validity is also likely to yield erroneous conclusions
about differences that appear in the sample data. Researchers refer to the issue of drawing
false conclusions from statistical tests of differences or relationships as a problem of
statistical conclusion validity. Cook and Campbell (1979) suggest it is appropriate to
establish whether differences or relationships exist before considering the magnitude or
strength of any effects. Another aspect of statistical conclusion validity is the reliability
of measurement, the idea that consistent results will be given by a measurement
instrument when a subject is measured repeatedly under near identical conditions. Lack
of reliability increases the amount of observed variation which has the effect of making it
more difficult to detect significant covariation among variables. Larger sample sizes can,
to some extent, compensate for this increase in variability of measures. However, as
Henry (1990) comments, ‘to compensate for the inflation of the variance [variability of
observations] due to the lack of reliability of the instrument, it must be recognized and
accounted for early in the design process’ (p. 13).
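The attenuating effect of unreliable measurement on observed covariation can be illustrated with a small simulation, a minimal sketch in Python; the sample size, error standard deviations, and variable names are illustrative assumptions, not taken from the text:

```python
import random
import statistics

random.seed(42)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

n = 500
# Subjects' true scores on the construct being measured.
true_scores = [random.gauss(0, 1) for _ in range(n)]
# A second variable that genuinely covaries with the construct.
outcome = [t + random.gauss(0, 0.5) for t in true_scores]

# A reliable instrument adds little measurement error...
reliable = [t + random.gauss(0, 0.2) for t in true_scores]
# ...an unreliable one adds a great deal.
unreliable = [t + random.gauss(0, 2.0) for t in true_scores]

# The observed correlation with the outcome is attenuated toward zero
# when the instrument is unreliable, even though the true relationship
# is identical in both cases.
print(corr(reliable, outcome))
print(corr(unreliable, outcome))
```

The second correlation comes out markedly weaker than the first, which is the point made in the passage: unreliability inflates observed variance and so makes real covariation harder to detect, and only a larger sample partially offsets this.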


Procedures for data collection

Data generated in a quantitative investigation should be the product of a research design,
which is a plan specifying what information will be collected, how this will be done and
how it should be analyzed. Quantitative studies such as surveys and experiments, if
systematically planned, should make use of the idea of chance when data are collected,
because the role that chance plays in data generation influences the trustworthiness of any
statements we make about research findings. For example, chance is involved when
selecting subjects for a survey or allocating subjects to an experimental group. If data are
collected in a systematic rather than in a haphazard way then knowing the role that
chance plays in generating the data allows valid conclusions to be drawn about your
results—or the results of others.
A random sampling procedure is often used in survey design. This means choosing
subjects at random from a defined population. When random sampling is used each
member of the target population has a known chance of being selected for the sample. In
experimental research the principle of randomization is used as a means of assigning
subjects to treatment groups on the basis of chance. Random assignment, which should
not be confused with random sampling, is intended to produce experimental groups that
are similar in all respects prior to any treatment. The randomization process, which is
not a haphazard one, uses the laws of chance to assign subjects to treatment groups
in a way that eliminates any systematic differences that might exist among subjects.
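The distinction between random sampling and random assignment can be sketched with Python's standard `random` module; the population of pupil identifiers, the sample size, and the two-group design are invented purely for illustration:

```python
import random

random.seed(2024)

# Hypothetical defined population of 100 pupil identifiers.
population = [f"pupil_{i:03d}" for i in range(100)]

# Random sampling: each member of the target population has a known
# chance (here 20/100) of being selected for the sample.
sample = random.sample(population, k=20)

# Random assignment: the selected subjects are then allocated to
# treatment and control groups purely by chance, so that the groups
# are similar in all respects prior to any treatment.
shuffled = sample[:]
random.shuffle(shuffled)
treatment = shuffled[:10]
control = shuffled[10:]

print(sorted(treatment))
print(sorted(control))
```

Note that the two steps are independent: a study may randomly assign subjects it did not randomly sample (a typical experiment with volunteers), or randomly sample without assigning to groups at all (a typical survey).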

