These two equations determine the parameters $\alpha$ and $\beta$.
$$\alpha \int_0^L dx\,\sin^2\frac{\pi x}{L} = \int_0^L dx\,f(x)\sin\frac{\pi x}{L},
\qquad
\beta \int_0^L dx\,\sin^2\frac{2\pi x}{L} = \int_0^L dx\,f(x)\sin\frac{2\pi x}{L}$$
The other integrals vanish because of the orthogonality of $\sin(\pi x/L)$ and $\sin(2\pi x/L)$ on this interval. What you get is exactly the coefficients of the Fourier series expansion of $f$. The Fourier series is the best fit (in the least-squares sense) of a sum of orthogonal functions to $f$. See section 11.6 for more on this.
Is it a minimum? Yes. Look at the coefficients of $\alpha^2$ and $\beta^2$ in Eq. (8.30). They are positive; $+\alpha^2+\beta^2$ has a minimum, not a maximum or saddle point, and there is no cross term in $\alpha\beta$ to mess it up.
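As a check on these equations, here is a minimal numerical sketch; the sample function $f$ and the choice $L = 1$ are assumptions for illustration. It computes $\alpha$ and $\beta$ from the integrals above, and they come out equal to the usual Fourier sine coefficients.

    import numpy as np
    from scipy.integrate import quad

    L = 1.0
    f = lambda x: x * (L - x)                  # sample function to fit (an assumption)
    e1 = lambda x: np.sin(np.pi * x / L)       # first basis function
    e2 = lambda x: np.sin(2 * np.pi * x / L)   # second basis function

    # alpha = <f, e1>/<e1, e1> and beta = <f, e2>/<e2, e2>, as in the equations above
    alpha = quad(lambda x: f(x) * e1(x), 0, L)[0] / quad(lambda x: e1(x)**2, 0, L)[0]
    beta  = quad(lambda x: f(x) * e2(x), 0, L)[0] / quad(lambda x: e2(x)**2, 0, L)[0]

    print(alpha, beta)   # 8/pi^3 ~ 0.2580 and ~0, the Fourier sine coefficients of f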
The distance function Eq. (8.29) is simply (the square of) the norm in the vector space sense of the difference of the two vectors $f$ and $g$. Equations (6.12) and (6.7) here become
$$\|f-g\|^2 = \bigl\langle f-g,\,f-g\bigr\rangle = \int_a^b dx\,\bigl|f(x)-g(x)\bigr|^2$$
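To see how this norm leads to Eq. (8.30), expand the squared distance from $f$ to the trial combination $\alpha\vec e_1 + \beta\vec e_2$; this is a sketch assuming real functions, and the orthogonality of $\vec e_1$ and $\vec e_2$ eliminates the cross term.
$$\bigl\|f - \alpha\vec e_1 - \beta\vec e_2\bigr\|^2 = \langle f,f\rangle - 2\alpha\langle f,\vec e_1\rangle - 2\beta\langle f,\vec e_2\rangle + \alpha^2\langle\vec e_1,\vec e_1\rangle + \beta^2\langle\vec e_2,\vec e_2\rangle$$
Setting the derivatives with respect to $\alpha$ and $\beta$ to zero reproduces the two equations at the start of this section.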
[Figure: the orthogonal vectors $\vec e_1$ and $\vec e_2$ span a plane; the shortest distance from a vector to that plane is shown.]
The geometric meaning of Eq. (8.30) is that $\vec e_1$ and $\vec e_2$ provide a basis for the two-dimensional space
$$\alpha\,\vec e_1 + \beta\,\vec e_2 = \alpha\sin\frac{\pi x}{L} + \beta\sin\frac{2\pi x}{L}$$
The plane is the set of all linear combinations of the two vectors, and for a general vector not in this plane, the shortest distance to the plane defines the vector in the plane that is the best fit to the given vector. It is the one that is closest. Because the vectors $\vec e_1$ and $\vec e_2$ are orthogonal, finding the closest vector is easy: require that the difference $\vec v - \alpha\vec e_1 - \beta\vec e_2$ have only an $\vec e_3$ component. That is Fourier series.
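To make the geometry concrete, here is a made-up example in ordinary three-dimensional space: take $\vec e_1 = (1,0,0)$, $\vec e_2 = (0,1,0)$, and $\vec v = (1,2,3)$. Requiring $\langle\vec e_1,\,\vec v - \alpha\vec e_1 - \beta\vec e_2\rangle = 0$ and $\langle\vec e_2,\,\vec v - \alpha\vec e_1 - \beta\vec e_2\rangle = 0$ gives
$$\alpha = \frac{\langle\vec e_1,\vec v\rangle}{\langle\vec e_1,\vec e_1\rangle} = 1,
\qquad
\beta = \frac{\langle\vec e_2,\vec v\rangle}{\langle\vec e_2,\vec e_2\rangle} = 2$$
so the closest vector in the plane is $(1,2,0)$, and the residual $(0,0,3)$ points purely along $\vec e_3$. The Fourier coefficient formulas above are the same computation with integrals as the scalar products.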
Hessian
In this example leading to Fourier components, it's pretty easy to see that you are dealing with a minimum and not anything else. In other situations it may not be so easy. You may have a lot of variables. You may have complicated cross terms. Is $x^2 + xy + y^2$ a minimum at the origin? Is $x^2 + 3xy + y^2$? (Yes and no, respectively.)
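For a quadratic form in two variables you can settle this by completing the square; this is the standard criterion, sketched here for $a > 0$.
$$ax^2 + bxy + cy^2 = a\Bigl(x + \frac{b}{2a}\,y\Bigr)^2 + \frac{4ac - b^2}{4a}\,y^2$$
The origin is a minimum precisely when $4ac > b^2$. For $x^2 + xy + y^2$, $4 > 1$: a minimum. For $x^2 + 3xy + y^2$, $4 < 9$: a saddle.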
When there’s just one variable there is a simple rule that lets you decide. Check the second
derivative. If it’s positive you have a minimum; if it’s negative you have a maximum. If it’s zero you
have more work to do. Is there a similar method for several variables? Yes, and I’ll show it explicitly for
two variables. Once you see how to do it in two dimensions, the generalization to $N$ is just a matter
of how much work you’re willing to do (or how much computer time you can use).
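The several-variable test developed below amounts to checking the signs of the eigenvalues of the matrix of second derivatives. As a preview, here is a minimal numerical sketch for the two examples above; the use of NumPy is an assumption.

    import numpy as np

    # Hessian (matrix of second derivatives) at the origin:
    # for a x^2 + b x y + c y^2 it is [[2a, b], [b, 2c]].
    H1 = np.array([[2.0, 1.0], [1.0, 2.0]])   # x^2 + x y + y^2
    H2 = np.array([[2.0, 3.0], [3.0, 2.0]])   # x^2 + 3 x y + y^2

    for H in (H1, H2):
        vals = np.linalg.eigvalsh(H)          # eigenvalues of the symmetric matrix
        if np.all(vals > 0):
            print(vals, "minimum")
        elif np.all(vals < 0):
            print(vals, "maximum")
        else:
            print(vals, "saddle or degenerate")
    # prints [1. 3.] minimum, then [-1. 5.] saddle or degenerate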
The Taylor series in two variables, Eq. (2.16), is to second order