
Figure 10: Haar-like prototypes used in our algorithm: (a) edge features, (b) line features, (c) center-surround features, and (d) plate character features. The individual prototypes are numbered (1)-(8).


Figure 11: (a) Summed area of the integral image and (b) summed area of the rotated integral image. Each panel marks the origin (0, 0) and an arbitrary location (i, j).


By using an intermediate representation of an image, called the integral image [8], the simple rectangular features of an image can be calculated quickly. The integral image is an array in which each entry contains the sum of the pixel intensity values located to the left of and above the pixel at location (i, j), inclusive. Therefore, if O[i, j] is the original image and OI[i, j] is the integral image, the integral image is calculated as shown in (3) and illustrated in Figure 11:

\mathrm{OI}[i, j] = \sum_{i' \le i,\; j' \le j} O(i', j').    (3)
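As an illustration only (the function name and the use of cumulative sums are our own choices, not taken from [8]), (3) can be computed in a single pass as follows:

```python
import numpy as np

def integral_image(O):
    """Integral image of (3): OI[i, j] = sum of O(i', j') over i' <= i, j' <= j.

    O is assumed to be a 2-D array of pixel intensities (illustrative sketch).
    """
    # Two cumulative sums give the inclusive sum above and to the left of
    # every pixel location.
    return np.cumsum(np.cumsum(O.astype(np.float64), axis=0), axis=1)

# The bottom-right entry of the integral image equals the sum of all pixels.
O = np.arange(16, dtype=np.float64).reshape(4, 4)
OI = integral_image(O)
assert OI[3, 3] == O.sum()
```

Once OI is available, the sum of any upright rectangle can be read off with four array lookups, which is what makes the rectangular feature evaluation fast.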

The features rotated by 45 degrees, such as the line feature shown in Figure 10(b)(5), were presented by Lienhart and Maydt [9]. Such features require another intermediate representation, called the rotated integral image or rotated sum auxiliary image. The rotated integral image is computed by summing the pixel intensity values that lie within the 45-degree rotated rectangular area whose bottom corner is at (i, j) and which extends upward to the image boundary. Therefore, if O[i, j] is the original image and OR[i, j] is the rotated integral image, the rotated integral image is calculated as shown in (4) and illustrated in Figure 11:

\mathrm{OR}[i, j] = \sum_{i' \le i,\; i' \le i - |j - j'|} O(i', j').    (4)
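For concreteness, a direct (and deliberately naive) evaluation of (4) might look as follows; the fast recurrences given by Lienhart and Maydt [9] are not reproduced here, and the function name is our own:

```python
import numpy as np

def rotated_integral_image(O):
    """Direct evaluation of (4): OR[i, j] sums O(i', j') over all locations with
    i' <= i and i' <= i - |j - j'|, i.e. the 45-degree rotated area whose
    bottom corner is at (i, j).  Brute-force illustration only.
    """
    n_rows, n_cols = O.shape
    OR = np.zeros((n_rows, n_cols), dtype=np.float64)
    for i in range(n_rows):
        for j in range(n_cols):
            total = 0.0
            for ip in range(i + 1):                # i' <= i
                for jp in range(n_cols):
                    if ip <= i - abs(j - jp):      # i' <= i - |j - j'|
                        total += O[ip, jp]
            OR[i, j] = total
    return OR
```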

(4) AdaBoost Algorithm for Training LP. The cascade boosted classifier that is created in the Haar-like feature training process for several LP samples locates the LP extremely fast and accurately. For each feature, the weak learner determines the optimal threshold classification function such that the minimum number of examples is misclassified. Thus, a weak classifier h(x, f, p, θ) consists of a feature (f), a threshold (θ), and a polarity (p) that indicates the direction of the inequality sign:

h(x, f, p, \theta) =
\begin{cases}
1 & \text{if } p f(x) < p\theta \\
0 & \text{otherwise}.
\end{cases}    (5)
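A minimal sketch of the weak classifier in (5), assuming the feature response f(x) has already been evaluated on the example (the names are illustrative):

```python
def weak_classify(feature_value, threshold, polarity):
    """Weak classifier h(x, f, p, theta) of (5) on a precomputed feature value f(x).

    polarity (p) is +1 or -1 and flips the direction of the inequality.
    """
    return 1 if polarity * feature_value < polarity * threshold else 0
```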

The boosting process and the AdaBoost algorithm for
classifier learning are as follows.

(1) Given example images (x_1, y_1), ..., (x_n, y_n), where y_i = 0 for negative and y_i = 1 for positive samples.
(2) Initialize the weights ω_{1,i} = 1/(2m) for y_i = 0 and ω_{1,i} = 1/(2l) for y_i = 1, where m and l are the numbers of negative and positive samples, respectively.
(3) For t = 1, ..., T:

(a) Normalize the weights,

\omega_{t,i} \leftarrow \frac{\omega_{t,i}}{\sum_{j=1}^{n} \omega_{t,j}},    (6)

such that ω_t is a probability distribution.
(b) Select the best weak classifier with respect to the weighted error,

\varepsilon_t = \min_{f, p, \theta} \sum_{i} \omega_i \left| h(x_i, f, p, \theta) - y_i \right|.    (7)

(c) Define h_t(x) = h(x, f_t, p_t, θ_t), where f_t, p_t, and θ_t are the minimizers of ε_t.
(d) Update the weights,

\omega_{t+1,i} = \omega_{t,i} \, \beta_t^{\,1 - e_i},    (8)

where e_i = 0 if example x_i is classified correctly, e_i = 1 otherwise, and β_t = ε_t / (1 − ε_t).

(4) The final strong classifier is

C(x) =
\begin{cases}
1 & \text{if } \sum_{t=1}^{T} \alpha_t h_t(x) \ge \dfrac{1}{2} \sum_{t=1}^{T} \alpha_t \\
0 & \text{otherwise},
\end{cases}    (9)

where α_t = log(1/β_t).
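Steps (1)-(4) can be summarized in the sketch below. The feature matrix F of precomputed Haar-like responses and the exhaustive weak-learner search are our own illustrative choices rather than the paper's implementation, and ε_t is assumed to stay strictly between 0 and 1/2 so that β_t and α_t are well defined.

```python
import numpy as np

def train_weak_classifier(F, y, w):
    """Step (b): exhaustive search for the feature f, polarity p, and threshold
    theta minimizing the weighted error of (7).  F is an (n_examples,
    n_features) array of precomputed feature responses (illustrative)."""
    best = None
    for f in range(F.shape[1]):
        for theta in np.unique(F[:, f]):
            for p in (+1, -1):
                h = (p * F[:, f] < p * theta).astype(int)   # weak classifier (5)
                eps = np.sum(w * np.abs(h - y))             # weighted error (7)
                if best is None or eps < best[0]:
                    best = (eps, f, p, theta, h)
    return best

def adaboost_train(F, y, T):
    """Boosting loop of steps (1)-(4); y holds labels in {0, 1}."""
    m, l = np.sum(y == 0), np.sum(y == 1)
    # Step (2): initial weights 1/(2m) for negatives, 1/(2l) for positives.
    w = np.where(y == 0, 1.0 / (2 * m), 1.0 / (2 * l)).astype(float)
    classifiers, alphas = [], []
    for _ in range(T):
        w = w / w.sum()                                     # (a) normalize, eq. (6)
        eps, f, p, theta, h = train_weak_classifier(F, y, w)  # (b), eq. (7)
        beta = eps / (1.0 - eps)                            # beta_t = eps_t / (1 - eps_t)
        e = (h != y).astype(float)                          # e_i = 0 if correct, else 1
        w = w * beta ** (1.0 - e)                           # (d) reweight, eq. (8)
        classifiers.append((f, p, theta))
        alphas.append(np.log(1.0 / beta))                   # alpha_t = log(1/beta_t)
    return classifiers, np.array(alphas)

def strong_classify(x_features, classifiers, alphas):
    """Final strong classifier of (9): alpha-weighted vote of the weak learners."""
    votes = np.array([(p * x_features[f] < p * theta)
                      for f, p, theta in classifiers], dtype=float)
    return int(np.dot(alphas, votes) >= 0.5 * np.sum(alphas))
```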