- Composing linear transformations
- Matrix-matrix multiplication
Two different algorithms for matrix-vector multiplication $y := Ax$
- The first computes by rows: each entry of $y$ is the dot product of a row of $A$ with $x$.
- The second computes by columns: $y$ is accumulated as a linear combination of the columns of $A$ (axpy operations).
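The two orderings above can be sketched in plain Python (function names are illustrative; a real implementation would use optimized BLAS):

```python
# Matrix-vector multiplication y = A x, two loop orderings.
# A is a list of rows; x and y are lists of floats.

def matvec_by_rows(A, x):
    # Row-based: each entry of y is a dot product of a row of A with x.
    m, n = len(A), len(x)
    y = [0.0] * m
    for i in range(m):
        for j in range(n):
            y[i] += A[i][j] * x[j]
    return y

def matvec_by_cols(A, x):
    # Column-based: y is built up as a linear combination of the
    # columns of A, one axpy (y := x_j * a_j + y) at a time.
    m, n = len(A), len(x)
    y = [0.0] * m
    for j in range(n):
        for i in range(m):
            y[i] += A[i][j] * x[j]
    return y

A = [[1.0, 2.0], [3.0, 4.0]]
x = [1.0, 1.0]
print(matvec_by_rows(A, x))  # [3.0, 7.0]
print(matvec_by_cols(A, x))  # [3.0, 7.0]
```

Both compute the same $y$; they differ only in the order in which memory is traversed.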
- When analyzing a computation, we care about both flops (floating-point operations) and memops (memory operations). Later we will see that a memop is much slower than a flop.
- Let's take an example. Suppose we want to calculate $y := A^T x$. Naively, we would set $B := A^T$ and then compute $y := Bx$.
- This means we spend most of the time (memops) transposing $A$ into $B$, and comparatively little time computing.
- What we want is to compute $y := A^T x$ without actually transposing.
- The solution: in the dot-product-based algorithm for $y := A^T x$, simply use the columns of $A$ (which are the rows of $A^T$) for the dot products.
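This trick can be sketched as follows (the function name is illustrative): each entry of $y$ is the dot product of a column of $A$ with $x$, so we traverse $A$ by columns and never form $A^T$:

```python
def transpose_matvec(A, x):
    # y = A^T x without forming A^T: entry j of y is the dot product
    # of column j of A with x.
    m, n = len(A), len(A[0])
    y = [0.0] * n
    for j in range(n):
        for i in range(m):
            y[j] += A[i][j] * x[i]
    return y

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3x2 matrix
x = [1.0, 1.0, 1.0]
print(transpose_matvec(A, x))  # [9.0, 12.0]
```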
- Let $U$ be an upper triangular matrix and $x$ be a vector. Consider $y := Ux$.
- We notice that the entries below the diagonal of $U$ are zero, and hence we need not compute with them.
- Let's calculate the flops:
  - The update step is $\psi_1 := \upsilon_{11} \chi_1 + u_{12}^T x_2$ (without the term involving the zeroes below the diagonal).
  - If we set the size of $x_0$ to $k$, then the size of $x_2$ must be $n - k - 1$.
  - flops($\upsilon_{11} \chi_1$) = 1, flops($u_{12}^T x_2$) = $2(n-k-1)$, flops(final addition) = 1.
  - So the flops $= \sum_{k=0}^{n-1} \big( 2(n-k-1) + 2 \big) = \sum_{k=0}^{n-1} 2(n-k) = n(n+1) \approx n^2$.
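A minimal sketch of the triangular matrix-vector multiply (function name illustrative): the inner loop starts at the diagonal, so only about $n^2/2$ multiply-add pairs, i.e. roughly $n^2$ flops, are performed:

```python
def trmv_upper(U, x):
    # y = U x for upper triangular U: skip the (known zero) entries
    # below the diagonal, so row i's dot product starts at j = i.
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        for j in range(i, n):  # only j = i .. n-1
            y[i] += U[i][j] * x[j]
    return y

U = [[1.0, 2.0, 3.0],
     [0.0, 4.0, 5.0],
     [0.0, 0.0, 6.0]]
x = [1.0, 1.0, 1.0]
print(trmv_upper(U, x))  # [6.0, 9.0, 6.0]
```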
- We purposely chose the matrix on the right to be symmetric. We notice that each entry below the diagonal mirrors the corresponding entry above it ($a_{ij} = a_{ji}$), so only one triangle of the matrix needs to be stored.
- So we just need to change the update step of the calculation: wherever the algorithm references an entry from the unstored triangle, substitute the mirrored entry from the stored triangle. This works in both the row-based (dot product) and the column-based (axpy) variants.
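A sketch of a symmetric matrix-vector multiply that touches only a stored lower triangle (names and storage layout are illustrative, not a fixed library API):

```python
def symv_lower(A_lower, x):
    # y = A x for symmetric A, using only the stored lower triangle:
    # the entry a_ij (i >= j) also stands in for the unstored a_ji.
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        for j in range(i + 1):
            y[i] += A_lower[i][j] * x[j]      # lower part and diagonal
            if i != j:
                y[j] += A_lower[i][j] * x[i]  # mirrored upper part
    return y

# Lower triangle of the symmetric matrix [[2,1,0],[1,3,4],[0,4,5]].
A_lower = [[2.0],
           [1.0, 3.0],
           [0.0, 4.0, 5.0]]
x = [1.0, 1.0, 1.0]
print(symv_lower(A_lower, x))  # [3.0, 8.0, 9.0]
```

This halves the storage while performing the same $2n^2$ flops as a general matrix-vector multiply.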
- Let $L_A : \mathbb{R}^k \to \mathbb{R}^m$ and $L_B : \mathbb{R}^n \to \mathbb{R}^k$ both be linear transformations and, for all $x \in \mathbb{R}^n$, define the function $L_C : \mathbb{R}^n \to \mathbb{R}^m$ by $L_C(x) = L_A(L_B(x))$. Then $L_C$ is a linear transformation.
Then $C = AB$ means that $c_{ij} = \sum_{p=0}^{k-1} a_{ip} b_{pj}$, where $C$ is $m \times n$, $A$ is $m \times k$, and $B$ is $k \times n$.
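The definition translates directly into a triple loop (a naive sketch; practical implementations block for cache):

```python
def matmul(A, B):
    # C = A B by the definition c_ij = sum over p of a_ip * b_pj.
    m, k, n = len(A), len(B), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for p in range(k):
                C[i][j] += A[i][p] * B[p][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```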
A table of matrix-matrix multiplications with matrices of special shape is given at the end of this unit.
- Let $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$. Then the outer product of $x$ and $y$ is given by $xy^T$. Notice that this yields an $m \times n$ matrix whose $(i, j)$ entry is the product of the $i$-th entry of $x$ and the $j$-th entry of $y$.
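A one-line sketch of the outer product (function name illustrative):

```python
def outer(x, y):
    # x y^T: an m x n matrix whose (i, j) entry is x[i] * y[j].
    return [[xi * yj for yj in y] for xi in x]

x = [1.0, 2.0, 3.0]  # m = 3
y = [4.0, 5.0]       # n = 2
print(outer(x, y))   # [[4.0, 5.0], [8.0, 10.0], [12.0, 15.0]]
```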
- The memop cost of matrix-matrix multiplication $C := AB + C$ (with $C$ $m \times n$, $A$ $m \times k$, $B$ $k \times n$) is $mk + kn + 2mn$: $A$ and $B$ are read, and $C$ is read and written. This is small compared with its $2mnk$ flops, which is what makes matrix-matrix multiplication so efficient.
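A quick calculation makes the flops-to-memops ratio concrete (assuming the $C := AB + C$ operation and the counts above):

```python
def gemm_costs(m, n, k):
    # Standard cost model for C := A B + C:
    # 2mnk flops; A and B read, C read and written -> mk + kn + 2mn memops.
    flops = 2 * m * n * k
    memops = m * k + k * n + 2 * m * n
    return flops, memops

flops, memops = gemm_costs(1000, 1000, 1000)
print(flops, memops, flops / memops)  # 2000000000 4000000 500.0
```

The ratio grows with the matrix dimensions, so for large matrices the slow memops can be amortized over many fast flops.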