given set of parameters from the experimental data. The objective function is the sum of squared differences between the model and the data, scaled by the experimental error:

$$\chi^2 = \sum_{i=1}^{N} \frac{\left( y_i(p) - d_i \right)^2}{\sigma_i^2} \qquad (13)$$

Each species is normalized by the term $\sigma$, taken equal to the maximum value of that species across the conditions considered. Thus, less noisy signals are weighted more heavily, and all measurements are brought to the same scale. Since the signals are only available at discrete time points, the errors at each measurement time point are summed. When the values of the objective function attained, for example, by model A and model B differ only slightly, it is not clear which of the two models is better suited to fitting the benchmark problem.
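As a minimal sketch of Eq. 13, the Python snippet below evaluates $\chi^2$ for a hypothetical exponential-decay model; the time points, data, and decay rates are invented for illustration, and $\sigma$ is set to the maximum observed value as described above.

```python
import numpy as np

def chi_squared(y_model, d_data, sigma):
    """Eq. 13: sum of squared model-data differences, scaled by sigma."""
    return np.sum(((y_model - d_data) / sigma) ** 2)

# Hypothetical benchmark: exponential decay sampled at discrete times.
t = np.linspace(0.0, 10.0, 20)          # measurement time points
d_data = np.exp(-0.5 * t)               # observed data d_i (assumed)
y_model = np.exp(-0.45 * t)             # model prediction y_i(p)
sigma = np.full_like(t, d_data.max())   # sigma = max value across conditions

print(chi_squared(y_model, d_data, sigma))
```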
A local approximation method should then be applied to estimate the confidence intervals of the parameters, for example the Fisher Information Matrix [19]. To analyze each model's sensitivity to parameter variation, one considers the Hessian matrix


$$H_{j,k} = \frac{\partial^2 \chi^2(p)}{\partial \log p_j \, \partial \log p_k} \qquad (14)$$

that corresponds to approximating the surfaces of constant model behavior as $N_p$-dimensional ellipsoids, where $N_p$ is the number of parameters of the model. The principal axes of the ellipsoids are the eigenvectors of the Hessian matrix, and the width of the ellipsoid along each principal axis is proportional to one over the square root of the corresponding eigenvalue. The narrowest axes are called stiff, and the broadest axes sloppy.
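The eigendecomposition behind this stiff/sloppy picture can be sketched in a few lines of Python; the 3-parameter Hessian below, with eigenvalues spanning six decades, is a made-up example chosen to mimic the wide eigenvalue spectra typical of sloppy models.

```python
import numpy as np

def stiff_and_sloppy(H):
    """Eigendecompose a Hessian (Eq. 14); the ellipsoid width along each
    eigenvector is proportional to 1/sqrt(eigenvalue)."""
    eigvals, eigvecs = np.linalg.eigh(H)   # H is symmetric
    order = np.argsort(eigvals)[::-1]      # stiffest (largest eigenvalue) first
    widths = 1.0 / np.sqrt(eigvals[order])
    return eigvals[order], widths, eigvecs[:, order]

# Hypothetical 3-parameter Hessian with eigenvalues spanning six decades.
H = np.diag([1.0e4, 1.0e1, 1.0e-2])
eigvals, widths, eigvecs = stiff_and_sloppy(H)
print("stiff axis width:", widths[0])    # narrow: well-constrained direction
print("sloppy axis width:", widths[-1])  # broad: poorly constrained direction
```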
Expanding the second derivative:


$$\frac{\partial^2 \chi^2(p)}{\partial \log(p_i)\,\partial \log(p_j)} = \sum_{k=1}^{N} \left( \frac{\partial r_k}{\partial \log(p_i)} \frac{\partial r_k}{\partial \log(p_j)} + r_k \frac{\partial^2 r_k}{\partial \log(p_i)\,\partial \log(p_j)} \right) \qquad (15)$$

we see that the second term can be dropped in the case of a near-perfect fit, when each $r_k$ is small. Denoting the matrix of first derivatives as the Jacobian,

$$J_{kj} = \frac{\partial r_k}{\partial \log(p_j)} \qquad (16)$$

we can then make the approximation $H \approx J^T J$. In the Bayesian statistics field this matrix is known as the Fisher Information Matrix.
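A numerical sketch of this approximation follows, reusing the hypothetical exponential-decay example from above: `log_jacobian` estimates Eq. 16 by finite differences in log-parameter space and then forms $H \approx J^T J$. The residual function, step size, and synthetic data are illustrative assumptions, not the benchmark problem itself.

```python
import numpy as np

def log_jacobian(residual_fn, p, eps=1e-6):
    """Eq. 16 by finite differences: J_kj = d r_k / d log(p_j)."""
    r0 = residual_fn(p)
    J = np.empty((r0.size, p.size))
    for j in range(p.size):
        p_step = p.copy()
        p_step[j] *= np.exp(eps)             # small step in log(p_j)
        J[:, j] = (residual_fn(p_step) - r0) / eps
    return J

# Hypothetical two-parameter model: r_k = p0 * exp(-p1 * t_k) - d_k.
t = np.linspace(0.0, 10.0, 20)
d = 2.0 * np.exp(-0.5 * t)                   # synthetic "data"
residual_fn = lambda p: p[0] * np.exp(-p[1] * t) - d

J = log_jacobian(residual_fn, np.array([2.0, 0.5]))
H_approx = J.T @ J                           # H ≈ J^T J, the Fisher Information Matrix
print(H_approx)
```

At the best-fit parameters of this toy example the residuals vanish exactly, so the second term of Eq. 15 is zero and $J^T J$ coincides with the full Hessian.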
