
Adding $\ell_1$ regularization to a linear regression problem with the squared loss
yields the LASSO algorithm, defined as

\[
\operatorname*{argmin}_{\mathbf{w}} \left( \frac{1}{2m} \|X\mathbf{w} - \mathbf{y}\|^2 + \lambda \|\mathbf{w}\|_1 \right). \tag{25.8}
\]

Under some assumptions on the distribution and the regularization parameter
$\lambda$, the LASSO will find sparse solutions (see, for example, (Zhao & Yu 2006)
and the references therein). Another advantage of the $\ell_1$ norm is that a vector
with low $\ell_1$ norm can be “sparsified” (see, for example, (Shalev-Shwartz, Zhang
& Srebro 2010) and the references therein).
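As a concrete illustration of the sparsity-inducing effect, the following minimal sketch (not part of the text) fits objective (25.8) on synthetic data. It assumes NumPy and scikit-learn are available; scikit-learn's Lasso estimator minimizes exactly the form in (25.8), with its alpha parameter playing the role of $\lambda$.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, d = 200, 50
X = rng.standard_normal((m, d))
w_star = np.zeros(d)
w_star[:3] = [2.0, -1.5, 0.5]                  # only three relevant features
y = X @ w_star + 0.1 * rng.standard_normal(m)

# Lasso minimizes (1/(2m))||Xw - y||^2 + alpha*||w||_1, i.e., (25.8) with alpha = lambda.
lasso = Lasso(alpha=0.1).fit(X, y)
print("nonzero coordinates of w:", np.flatnonzero(lasso.coef_))
```

With a moderate regularization parameter, most coordinates of the returned vector are exactly zero, which is the sparsity behavior discussed above.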

25.2 Feature Manipulation and Normalization


Feature manipulation or normalization consists of simple transformations that we
apply to each of our original features. Such transformations may decrease the
approximation or estimation error of our hypothesis class or may yield a faster
algorithm. As with the problem of feature selection, there are no
absolutely “good” or “bad” transformations; rather, each transformation that
we apply should be related to the learning algorithm we are going to run on
the resulting feature vector, as well as to our prior assumptions about the problem.
To motivate normalization, consider a linear regression problem with the
squared loss. Let $X \in \mathbb{R}^{m,d}$ be a matrix whose rows are the instance vectors
and let $\mathbf{y} \in \mathbb{R}^m$ be a vector of target values. Recall that ridge regression returns
the vector

\[
\operatorname*{argmin}_{\mathbf{w}} \left[ \frac{1}{m} \|X\mathbf{w} - \mathbf{y}\|^2 + \lambda \|\mathbf{w}\|^2 \right] = (2\lambda m I + X^\top X)^{-1} X^\top \mathbf{y}.
\]
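For reference, here is a minimal NumPy sketch (an illustration, not from the text; the function name is ours) that evaluates this closed form by solving the corresponding linear system:

```python
import numpy as np

def ridge_closed_form(X, y, lam):
    """Compute (2*lam*m*I + X^T X)^{-1} X^T y, the closed-form ridge solution above."""
    m, d = X.shape
    A = 2.0 * lam * m * np.eye(d) + X.T @ X
    return np.linalg.solve(A, X.T @ y)   # solving the system is cheaper and more stable than inverting A
```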

Suppose that $d = 2$ and the underlying data distribution is as follows. First we
sample $y$ uniformly at random from $\{\pm 1\}$. Then, we set $x_1$ to be $y + 0.5\alpha$, where
$\alpha$ is sampled uniformly at random from $\{\pm 1\}$, and we set $x_2$ to be $0.0001\,y$. Note
that the optimal weight vector is $\mathbf{w}^\star = [0; 10000]$, and $L_{\mathcal{D}}(\mathbf{w}^\star) = 0$. However,
the objective of ridge regression at $\mathbf{w}^\star$ is $\lambda\, 10^8$. In contrast, the objective of ridge
regression at $\mathbf{w} = [1; 0]$ is likely to be close to $0.25 + \lambda$. It follows that whenever
$\lambda > \frac{0.25}{10^8 - 1} \approx 0.25 \times 10^{-8}$, the objective of ridge regression is smaller at the
suboptimal solution $\mathbf{w} = [1; 0]$. Since $\lambda$ typically should be at least $1/m$ (see
the analysis in Chapter 13), it follows that in the aforementioned example, if the
number of examples is smaller than $10^8$ then we are likely to output a suboptimal
solution.
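The effect is easy to reproduce numerically. The sketch below (illustrative values of $m$ and $\lambda$, not from the text) samples data as in the example, computes the ridge solution via the closed form above, and then repeats the computation after dividing each feature by the maximal absolute value it attains, anticipating the normalization discussed next:

```python
import numpy as np

m, lam = 1000, 0.01                                   # illustrative sample size and regularization
rng = np.random.default_rng(1)
y = rng.choice([-1.0, 1.0], size=m)
alpha = rng.choice([-1.0, 1.0], size=m)
X = np.column_stack([y + 0.5 * alpha, 0.0001 * y])    # x1 = y + 0.5*alpha, x2 = 0.0001*y

def ridge(X, y, lam):
    m, d = X.shape
    return np.linalg.solve(2 * lam * m * np.eye(d) + X.T @ X, X.T @ y)

print("raw features:   ", ridge(X, y, lam))           # weight on x2 stays far below 10000
X_scaled = X / np.abs(X).max(axis=0)                  # rescale each feature to lie in [-1, 1]
print("scaled features:", ridge(X_scaled, y, lam))    # now x2 can carry most of the signal
```

On the raw features, the learned weight on the second coordinate is tiny even though the optimal predictor relies on it; after rescaling, the second feature becomes competitive.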
The crux of the preceding example is that the two features have completely
different scales. Feature normalization can overcome this problem. There are
many ways to perform feature normalization, and one of the simplest is to make
sure that each feature receives values between −1 and 1. In
the preceding example, if we divide each feature by the maximal value it attains
