by setting demanding probability levels in
confirmatory data analysis. Type II errors
– that is, having insufficient information to be
able to detect a genuine effect – are managed
by conducting a power analysis during the
design phase. Conversely, collecting too much
information is not only wasteful of resources
but can be seen as an unethical intrusion.
Statistical power is increased when there is
little ‘noise’ in the system, when the effect is
substantial, when probability levels are set
leniently and when the sample size is large,
and according to which statistical test is used
(parametric procedures being more powerful).
Consequently, researchers should choose
settings/contexts that maximize the ‘signal’
and not, as in a study of the effect of size of house
on price, sample areas where all the properties
are three-bedroomed ones. Power formulas
and software (such as G*Power) are available
but require an estimate of the size of the effect.
Cohen (1988) has defined small, medium and
large effects as the ratio of an effect to variation
for a very large range of statistical procedures
(e.g. a t-test in a multiple regression model).
Thus, to be able to detect a small difference
of 0.25 of a standard deviation between two
sample means with 80 per cent power and 5
per cent significance (both these percentages
being the most commonly used conventions)
requires 2 × 253 observations in the total
sample; while to be able to detect a large
difference of 1.0 standard deviation requires
only 2 × 17 observations. Unfortunately, academic
research has paid too little attention to statis-
tical power with, for example, Sedlmeier and
Gigerenzer (1989) finding that even in a
reputable journal statistical power hovered around
50 per cent. If all these studies were replicated,
only half would result in an identifiable effect.
The problem is actually even more widespread
due to the use of non-probabilistic samples.
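The power formulas mentioned above can be sketched directly. A minimal illustration (the function name and defaults are my own, standard library only); it uses the normal approximation, so it returns values one or two below the exact t-based figures in Cohen's tables:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Observations needed in EACH of two groups to detect a
    standardized mean difference d in a two-sided test (normal
    approximation; exact t-based tables run slightly higher)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for a 5% test
    z_power = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    return ceil(2 * (z_alpha + z_power) ** 2 / d ** 2)

n_per_group(0.25)  # -> 252 per group (Cohen's exact value: 253)
n_per_group(1.0)   # -> 16 per group (Cohen's exact value: 17)
```

The sixteen-fold difference between the two answers shows why an a priori estimate of effect size dominates the calculation.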
The way forward is to use simulation to
judge effectiveness and efficiency, as pioneered
by Snijders (1992) for snowball sampling.
Indeed, simulation is a general strategy that
permits great flexibility, not only allowing the
assessment of power as sample size increases
but also catering for missing data, non-
linearity, unequal variances and other specifi-
cations of an underlying model. kj
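A simulation-based power check of the kind advocated here can be sketched in a few lines of Python (a minimal Monte Carlo illustration; the function and its defaults are my own, not Snijders' procedure):

```python
import random
from math import sqrt

def simulated_power(d, n, reps=2000, crit=1.96, seed=1):
    """Monte Carlo power for comparing two sample means: draw `reps`
    datasets whose true standardized difference is `d`, compute a
    two-sample t statistic on each, and report the rejection rate.
    Uses a normal critical value, so power is slightly overstated
    for small n."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(d, 1.0) for _ in range(n)]
        mean_a, mean_b = sum(a) / n, sum(b) / n
        var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
        var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
        t = (mean_b - mean_a) / sqrt((var_a + var_b) / n)
        if abs(t) > crit:
            rejections += 1
    return rejections / reps

# With d = 1.0 and 17 observations per group, the estimate
# should land close to the conventional 80 per cent target.
```

The same loop can be extended to inject missing data, unequal variances or non-linearity before the test is run, which is precisely the flexibility claimed for simulation above.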
Suggested reading
Barnett (2002); Dixon and Leach (1977); Kish
(1995); Lenth (2001); Sudman (1976). G*Power
for power calculations is available from http://
www.psycho.uni-duesseldorf.de/aap/projects/
gpower/index.html
satisficing behaviour Behaviour that meets
an actor’s minimum criteria for success. The
concept was developed by Herbert Simon
(1916–2001: see Simon, 1956) as an alterna-
tive to the presumed optimization of
rational choice theory, in which actors
always seek to make the best-possible choice
– for example, to maximize profits. Satisficing
behaviour may be appropriate either when it is
impossible to calculate the maximal outcome
or when actors are unprepared to make the
investment necessary to identify and/or pursue
that outcome (as argued by Pred, 1967, 1969).
Actors will then set their own criteria that will
represent a satisfactory return on their invest-
ment. (See also behavioural geography.) rj
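The contrast between the two decision rules can be made concrete. A minimal sketch (the helper functions and the property-offer numbers are illustrative, not from the entry):

```python
def satisfice(options, utility, threshold):
    """Return the first option whose utility meets the actor's
    minimum criterion; the search stops there (Simon's rule)."""
    for option in options:
        if utility(option) >= threshold:
            return option
    return None  # no option was good enough

def optimize(options, utility):
    """Rational-choice benchmark: inspect every option and
    return the best one."""
    return max(options, key=utility)

offers = [90, 120, 105, 150]  # e.g. offers for a property, in £000s
satisfice(offers, lambda x: x, threshold=100)  # -> 120 (second offer seen)
optimize(offers, lambda x: x)                  # -> 150 (all offers inspected)
```

The satisficer accepts 120 because it clears the threshold, never learning that 150 was available; the optimizer must pay the search cost of examining every option.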
Suggested reading
Gigerenzer and Selten (2001).
scale ‘Scale’ has no single definition, and
in recent years has been the object of much
theorizing (Howitt, 2003). The traditional
definition in cartography refers to map resolution.
All maps represent the world by reducing
the size and diversity of its component
spaces for visual display, digitally or on paper.
Cartographic scale expresses the mathematical
relationship between the map and the Earth,
usually denoted as a representative fraction.
For example, the small fraction, 1:500,000,
indicates that one unit of distance on the
map represents 500,000 units of Earth space.
Such a map would show large expanses of
terrain – much more than, say, the larger
fraction of 1:24,000. Hence the common
confusion between large-scale (or large-fraction)
maps that show less space but typically more
detail, and small-scale maps that show more
space, but with less detail. Each type of social
and environmental diversity has its own ‘best
resolution’ in terms of cartographic representation;
thus, the choice of cartographic scale
depends on the problem at hand. To visualize
US state variation in the enforcement of envir-
onmental policies, we need the familiar small-
scale map of the country with state borders,
while determining whether point source
pollution released into a river caused downstream
lymphomas requires a large-scale map.
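The arithmetic of a representative fraction is easily sketched (a minimal illustration; the helper function is my own):

```python
def ground_distance_km(map_distance_cm, rf_denominator):
    """Convert a distance measured on the map to ground distance,
    given a representative fraction of 1:rf_denominator."""
    return map_distance_cm * rf_denominator / 100_000  # 100,000 cm per km

# 1 cm on the small-fraction 1:500,000 map covers 5 km of terrain,
# while 1 cm on the larger-fraction 1:24,000 map covers only 0.24 km.
ground_distance_km(1, 500_000)  # -> 5.0
ground_distance_km(1, 24_000)   # -> 0.24
```

This is the source of the confusion noted above: the map with the larger fraction packs less ground into each centimetre, and so shows less area in more detail.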
The second major definition is operational
or methodological. This is the scale or resolution
of data collection, with the familiar cas-
cade from micro (body) to macro (globe).
Tied to this type of scale are various analytical
complexities, including the claims that:
(a) social patterns and processes can be sorted
according to their scale of operation and, as
Gregory / The Dictionary of Human Geography 9781405132879_4_S Final Proof page 664 1.4.2009 3:23pm