Understanding Machine Learning: From Theory to Algorithms


Appendix C Linear Algebra


C.1 Basic Definitions


In this chapter we only deal with linear algebra over finite dimensional Euclidean
spaces. We refer to vectors as column vectors.
Given two d-dimensional vectors u, v ∈ R^d, their inner product is

    ⟨u, v⟩ = ∑_{i=1}^d u_i v_i.

The Euclidean norm (a.k.a. the ℓ_2 norm) is ‖u‖ = √⟨u, u⟩. We also use the ℓ_1
norm, ‖u‖_1 = ∑_{i=1}^d |u_i|, and the ℓ_∞ norm, ‖u‖_∞ = max_i |u_i|.
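As a quick illustration of these definitions, the following sketch (using NumPy; the example vectors are arbitrary choices, not from the text) computes the inner product and the three norms:

```python
import numpy as np

u = np.array([3.0, -4.0, 0.0])
v = np.array([1.0, 2.0, 5.0])

inner = float(np.dot(u, v))               # <u, v> = sum_i u_i * v_i
l2 = float(np.linalg.norm(u))             # Euclidean (l2) norm: sqrt(<u, u>)
l1 = float(np.linalg.norm(u, 1))          # l1 norm: sum_i |u_i|
linf = float(np.linalg.norm(u, np.inf))   # l-infinity norm: max_i |u_i|

print(inner, l2, l1, linf)  # -5.0 5.0 7.0 4.0
```

Here ⟨u, v⟩ = 3·1 + (−4)·2 + 0·5 = −5, ‖u‖ = √(9+16) = 5, ‖u‖_1 = 7, and ‖u‖_∞ = 4.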
A subspace of R^d is a subset of R^d which is closed under addition and scalar
multiplication. The span of a set of vectors u_1, …, u_k is the subspace containing
all vectors of the form

    ∑_{i=1}^k α_i u_i,

where α_i ∈ R for all i.
A set of vectors U = {u_1, …, u_k} is independent if for every i, u_i is not in the
span of u_1, …, u_{i−1}, u_{i+1}, …, u_k. We say that U spans a subspace V if V is the
span of the vectors in U. We say that U is a basis of V if it is both independent
and spans V. The dimension of V is the size of a basis of V (and it can be verified
that all bases of V have the same size). We say that U is an orthogonal set if for
all i ≠ j, ⟨u_i, u_j⟩ = 0. We say that U is an orthonormal set if it is orthogonal
and if for every i, ‖u_i‖ = 1.
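The orthonormality condition can be checked directly from the definition: pairwise inner products must equal 0, and each vector's norm must equal 1. A minimal sketch (the rotation-matrix example and the helper `is_orthonormal` are illustrative, not from the text):

```python
import numpy as np

# The columns of a 2x2 rotation matrix form an orthonormal set.
theta = 0.7
U = [np.array([np.cos(theta), np.sin(theta)]),
     np.array([-np.sin(theta), np.cos(theta)])]

def is_orthonormal(vectors, tol=1e-12):
    """Check that <u_i, u_j> = 1 if i == j and 0 otherwise."""
    for i, ui in enumerate(vectors):
        for j, uj in enumerate(vectors):
            target = 1.0 if i == j else 0.0
            if abs(np.dot(ui, uj) - target) > tol:
                return False
    return True

print(is_orthonormal(U))  # True
```

Note that the i == j case enforces ‖u_i‖ = 1 (since ⟨u_i, u_i⟩ = ‖u_i‖²), while the i ≠ j case enforces orthogonality.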
Given a matrix A ∈ R^{n,d}, the range of A is the span of its columns and the
null space of A is the subspace of all vectors u that satisfy Au = 0. The rank of A
is the dimension of its range.
The transpose of a matrix A, denoted A^⊤, is the matrix whose (i, j) entry
equals the (j, i) entry of A. We say that A is symmetric if A = A^⊤.
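These matrix notions can be illustrated numerically; in the sketch below (the specific matrix is an arbitrary choice, not from the text), the second column is twice the first, so the range is one-dimensional and the rank is 1:

```python
import numpy as np

# A 3x2 matrix whose second column is twice the first, so rank(A) = 1.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])

print(np.linalg.matrix_rank(A))  # 1

# u = (2, -1) lies in the null space: A u = 0.
u = np.array([2.0, -1.0])
print(np.allclose(A @ u, 0))  # True

# A^T A is always symmetric: (A^T A)^T = A^T A.
S = A.T @ A
print(np.allclose(S, S.T))  # True
```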

Understanding Machine Learning,©c2014 by Shai Shalev-Shwartz and Shai Ben-David
Published 2014 by Cambridge University Press.
Personal use only. Not for distribution. Do not post.
Please link to http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning