
(CBI, 2007). Contribution layers include timber supply based on forest inventory sample plot observations (FIA, 2007), water buffers based on national hydrological data (Steeves et al., 1994), and habitat for endangered species based on information obtained from NatureServe (http://www.natureserve.org/explorer/ranking.htm). The primary threat layer, spatially explicit projected housing density, is derived from U.S. Census data and a layer depicting transportation networks. Data from all sources are aggregated at the level of watersheds, a coarser resolution than that of any of the underlying sources.


3.4. Role of computer experiments
Although computer-aided discovery predates 1961, there is no more appropriate way to broach the subject of computer experiments in climatology, ecology, and complex systems in general than to mention the pioneering work of Edward Lorenz. The field of computer experimentation, however, has developed immensely since the time of Lorenz's serendipitous discoveries, as have the computing machines themselves.
Computer experimentation encompasses modern Monte Carlo methods of statistics, including Efron's bootstrap (and resampling methods more generally), Markov chain Monte Carlo (MCMC) methods, stochastic approximation for optimization and equation solving, and formal statistical design-based methods for understanding how the output of a complex system f() (e.g., the mathematical/simulation process models) depends on inputs x (driving variables and parameters) and on f() itself (e.g., autoregressive-type relationships).
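To make the resampling side of this toolkit concrete, the following minimal sketch illustrates Efron's nonparametric bootstrap for the standard error of a mean. It is not taken from the report; the data, sample size, and number of replicates are hypothetical choices for illustration only.

import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical sample, e.g., a plot-level forest measurement on 50 plots.
sample = rng.gamma(shape=2.0, scale=3.0, size=50)

def bootstrap_se(data, stat=np.mean, n_boot=2000, rng=rng):
    """Nonparametric bootstrap standard error of a statistic.

    Each replicate resamples the data with replacement and recomputes
    the statistic; the spread of the replicates estimates the sampling
    variability of that statistic.
    """
    n = len(data)
    replicates = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=n, replace=True)
        replicates[b] = stat(resample)
    return replicates.std(ddof=1), replicates

se, reps = bootstrap_se(sample)
print(f"sample mean = {sample.mean():.3f}, bootstrap SE = {se:.3f}")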
Although all varieties of computer experimentation are likely to find applications in the science of sustainability and will require further development, modification, and adaptation, it is the application to understanding complex models of complex systems and processes that will most assuredly need further development by mathematical and statistical scientists, owing to its importance for quantifying uncertainty of all forms. Such uncertainty has three main constituents: imperfect understanding of the map x -> f(x), even when x and f() are assumed free of uncertainty; uncertainty in f() as a model of a real process or system (e.g., due to process or parameter uncertainty); and uncertainty in inputs x (e.g., due to observational uncertainty or measurement error), which for this discussion are taken to include initial conditions and "tuning" (or "tunable") inputs (e.g., inputs that determine characteristics of f() that one might consider adjusting to provide better fit to empirical data, say via maximum likelihood).
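The third constituent, uncertainty in the inputs x, can be illustrated with a simple Monte Carlo sketch: uncertain inputs are sampled from assumed distributions and pushed through a simulator f, and the induced spread in the output is summarized. The logistic-growth model and all parameter values below are toy assumptions, not specifications from the report.

import numpy as np

rng = np.random.default_rng(seed=0)

def f(x):
    """Toy simulator: discrete logistic growth run for 20 steps.

    x = (r, k, n0): growth rate, carrying capacity, initial size.
    """
    r, k, n0 = x
    n = n0
    for _ in range(20):
        n = n + r * n * (1.0 - n / k)
    return n

# Assumed input uncertainty, e.g., from measurement error and
# imperfectly known initial conditions.
n_draws = 5000
r = rng.normal(0.3, 0.05, n_draws)              # growth rate
k = rng.normal(100.0, 10.0, n_draws)            # carrying capacity
n0 = rng.lognormal(np.log(5.0), 0.2, n_draws)   # initial condition

outputs = np.array([f(x) for x in zip(r, k, n0)])

lo, hi = np.percentile(outputs, [2.5, 97.5])
print(f"mean output = {outputs.mean():.1f}, 95% interval = ({lo:.1f}, {hi:.1f})")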
A disciplined study of uncertainty uses the strategies of statistical experimental design (e.g., fractional factorials, space-filling designs, blocking, and randomization) to systematically vary those factors through which uncertainty is manifest (Santner et al., 2003). Computer experiments, in particular, are important for understanding and quantifying uncertainty in highly complex models because these models may be too computationally demanding to run many times, as would be required if the model, f(x), had to be evaluated for many levels of x and/or for many iterations (e.g., thousands of MCMC iterations). Thus, we can draw on methods developed for computer experiments that enable the evaluation of f(x) at a "small" number of levels of x, reducing the computational burden of evaluating f() an undue number of times. This may require approaches to approximating f(), and the development of effective strategies for doing so in the context of models relevant to sustainability is an important research problem.
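One way this strategy is commonly realized, sketched below under the assumption that a simple Gaussian-process surrogate is adequate, is to evaluate the expensive f only at a small space-filling (Latin hypercube) design and then fit an emulator that can be queried cheaply elsewhere. The toy simulator, design size, and kernel settings are illustrative choices, not prescriptions from the report.

import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_f(x):
    """Stand-in for a costly simulator with two inputs in [0, 1]^2."""
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 1]

# Space-filling design: a Latin hypercube with only 20 runs of f.
sampler = qmc.LatinHypercube(d=2, seed=1)
X_design = sampler.random(n=20)
y_design = expensive_f(X_design)

# Gaussian-process emulator fit to the few expensive runs.
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.5, 0.5])
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
emulator.fit(X_design, y_design)

# The emulator is cheap to evaluate wherever f itself is too costly,
# e.g., on a dense grid or inside thousands of MCMC iterations.
X_new = sampler.random(n=1000)
y_pred, y_sd = emulator.predict(X_new, return_std=True)
print(f"max emulator predictive SD over 1000 new points: {y_sd.max():.3f}")

The design choice here (few, well-spread runs plus a statistical surrogate) is the core computer-experiment idea referred to in the text; in practice the surrogate family, the design size, and the validation strategy would each be research questions in their own right.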
