normal distributions, then the typical solution to the overdetermined set is
obtained by applying the least squares method:

$$X_2 = \left(H_2^T H_2\right)^{-1} H_2^T Y_2 \qquad (4.12)$$


That is, we multiply both sides by the transpose of $H_2$, to get a system of
three equations with three unknowns, and then solve the system of
equations. This would be our solution method in the normal course.
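As a concrete sketch of this step, the few lines of Python below set up a small overdetermined system and solve the normal equations. The dimensions and values are hypothetical stand-ins for $H_2$ and $Y_2$, not values taken from the text:

    import numpy as np

    # Hypothetical stand-in for the overdetermined system Y_2 = H_2 X_2 + error:
    # five equations in three unknowns.
    rng = np.random.default_rng(0)
    H2 = rng.normal(size=(5, 3))                   # observation matrix
    x_true = np.array([1.0, -2.0, 0.5])            # true state (unknown in practice)
    Y2 = H2 @ x_true + 0.1 * rng.normal(size=5)    # noisy observations

    # Multiply both sides by the transpose of H2 and solve the resulting
    # three-by-three system, exactly as described above.
    x_hat = np.linalg.solve(H2.T @ H2, H2.T @ Y2)
    print(x_hat)                                   # close to x_true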
Now let us move on to the next observation $y_3$. Upon obtaining the new
data point, the estimate of our state is determined by the solution of the
equation


$$Y_3 = H_3 X_3 + h_3,$$

a system of seven equations with four unknown variables. We could then use
the typical approach to solve the overdetermined equations and obtain an
estimate for the value of $x_3$. Note, however, that as the number of
observations increases, the size of the matrices grows, and the computational
costs could potentially go up. It is here that the Kalman-filtering algorithm comes
to our rescue. The results from the previous computations are used in an
iterative fashion to estimate the value of the next state. The value of $x_2$ and
its variance, as calculated in the previous step, are used in the evaluation of
the state $x_3$, thereby keeping the computational cost of evaluating the next
step the same regardless of how far down the time scale we are.
Regardless, the end result of the state estimate in the Kalman-filtering
case is the same as solving the set of equations using the least squares
approach. Note that we have made an important assumption in the process;
that is, the state variance at each time step is equal to the observation
variance. (This is in addition to the assumption of independence of the error
distributions.) Thus, with the preceding assumptions, the Kalman filter boils
down to a least squares solution of equations. The twist is that the solution
is calculated in a recursive fashion. This version of the Kalman filter is
therefore known as the recursive least squares method.
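To make the recursion concrete, here is a minimal sketch of a recursive least squares update in Python. It treats the state as a fixed parameter vector with a common noise variance, per the assumptions above; the names (rls_update, P for the estimate's covariance, sigma2) are mine, and the model in the text, which also grows the state over time, is simplified accordingly:

    import numpy as np

    def rls_update(x_hat, P, h, y, sigma2):
        """Fold one new observation y = h . x + noise into the running
        estimate x_hat and its covariance P; the cost per step is fixed."""
        S = float(h @ P @ h) + sigma2      # innovation variance (scalar)
        K = P @ h / S                      # gain vector
        x_new = x_hat + K * (y - h @ x_hat)
        P_new = P - np.outer(K, h @ P)
        return x_new, P_new

    # Processing observations one at a time converges to the same answer as
    # the batch least squares solution (up to the negligible effect of the
    # deliberately vague prior encoded in the large initial P).
    rng = np.random.default_rng(0)
    H = rng.normal(size=(7, 3))
    Y = H @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=7)

    x_hat, P = np.zeros(3), 1e6 * np.eye(3)
    for h, y in zip(H, Y):
        x_hat, P = rls_update(x_hat, P, h, y, sigma2=0.01)
    print(x_hat)                                # recursive estimate
    print(np.linalg.solve(H.T @ H, H.T @ Y))    # batch estimate, nearly identical

Note that each pass through the loop touches only fixed-size matrices, which is precisely why the cost per step stays constant no matter how many observations have accumulated.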
The assumption of identical and independent error distributions manifests
itself in the covariance matrix of the errors. The independence also implies
that the errors are not correlated. Therefore, the off-diagonal elements in the
covariance matrix are all zero. The diagonal elements in the covariance matrix
are the variances of the error terms. If they are drawn from identical
distributions, then the variances should be the same. Therefore, the covariance
matrix in this case may be represented as the identity matrix multiplied by a
constant.
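Writing this out, with $\sigma^2$ for the common variance and $n$ for the number of error terms (symbols of my choosing, not the text's):

$$\Sigma = \begin{pmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2 \end{pmatrix} = \sigma^2 I_n$$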
Let us now turn our attention back to the solution of the preceding
model. It turns out that the estimated state at a given time for that set of

