
work was absolutely necessary, and they eventually found algorithms that multiply matrices faster.


■ 4.2.1 Winograd’s Matrix Multiplication
If you look at each element of the result of a matrix multiplication, you will
see that it is nothing more than the dot product of the corresponding row and
column of the original matrices. We can also notice something more: this multiplication can be factored in a way that allows us to preprocess some of the work.
Consider two of these vectors: V = (v1, v2, v3, v4) and W = (w1, w2, w3, w4).
Their dot product is given by

   V · W = v1*w1 + v2*w2 + v3*w3 + v4*w4

But this can be factored into the following:

   V · W = (v1 + w2)*(v2 + w1) + (v3 + w4)*(v4 + w3) - v1*v2 - v3*v4 - w1*w2 - w3*w4

The reader should be able to show that these two are the same. It would appear
that the second of these equations actually does more work: we can count six
multiplications versus four, and nine additions versus three. What might not
be obvious is that the last few terms can be preprocessed and stored for each
row of the first matrix and for each column of the second. This means that in
practice we will only have to do the first two multiplications and five
additions, along with an additional two additions to include the preprocessed
values.
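
To make this bookkeeping concrete, here is a small Python sketch (ours, not from the text) comparing the two forms for a pair of four-element vectors; the function names and test values are illustrative only.

   def dot_direct(v, w):
       # 4 multiplications, 3 additions
       return v[0]*w[0] + v[1]*w[1] + v[2]*w[2] + v[3]*w[3]

   def dot_factored(v, w):
       # These two terms are what would be preprocessed: one per row of
       # the first matrix (from v) and one per column of the second (from w).
       row_factor = v[0]*v[1] + v[2]*v[3]
       column_factor = w[0]*w[1] + w[2]*w[3]
       # At multiply time: 2 multiplications and 7 additions/subtractions.
       return ((v[0] + w[1]) * (v[1] + w[0])
               + (v[2] + w[3]) * (v[3] + w[2])
               - row_factor - column_factor)

   v = (1, 2, 3, 4)
   w = (5, 6, 7, 8)
   assert dot_direct(v, w) == dot_factored(v, w)   # both give 70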
Winograd's full matrix multiplication algorithm for multiplying G (size
a×b) and H (size b×c) to get the result R (size a×c) is


d = b/2

// calculate rowFactors for G
for i = 1 to a do
   rowFactor[i] = G[i,1] * G[i,2]
   for j = 2 to d do
      rowFactor[i] = rowFactor[i] + G[i,2j-1] * G[i,2j]
   end for j
end for i

// calculate columnFactors for H
for i = 1 to c do
   columnFactor[i] = H[1,i] * H[2,i]
   for j = 2 to d do
      columnFactor[i] = columnFactor[i] + H[2j-1,i] * H[2j,i]
   end for j
end for i
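
The algorithm continues past this point with the main computation of each
entry of R (and a correction term when b is odd). The following Python sketch
fills in that remainder according to the standard formulation of Winograd's
algorithm; it uses 0-based indexing and names of our choosing, so treat it as
illustrative rather than the text's exact code.

   def winograd_multiply(G, H):
       # Multiply G (a x b) by H (b x c) using Winograd's method.
       a, b, c = len(G), len(G[0]), len(H[0])
       d = b // 2

       # Preprocess one factor per row of G and one per column of H.
       row_factor = [sum(G[i][2*j] * G[i][2*j + 1] for j in range(d))
                     for i in range(a)]
       column_factor = [sum(H[2*j][k] * H[2*j + 1][k] for j in range(d))
                        for k in range(c)]

       R = [[0] * c for _ in range(a)]
       for i in range(a):
           for k in range(c):
               # Start from the (negated) preprocessed factors, then add
               # the paired terms, each costing one multiplication.
               total = -row_factor[i] - column_factor[k]
               for j in range(d):
                   total += ((G[i][2*j] + H[2*j + 1][k])
                             * (G[i][2*j + 1] + H[2*j][k]))
               if b % 2 == 1:
                   # The unpaired last index when b is odd.
                   total += G[i][b - 1] * H[b - 1][k]
               R[i][k] = total
       return R

For example, winograd_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns
[[19, 22], [43, 50]], the same result as the ordinary row-times-column
algorithm.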

