Linear Algebra

June 30, 2019

Linear Combination

We have 3 vectors a, b, c. If we place the vectors tip to tail, we get their sum, vector d.
We can stretch and shrink a, b, and c to get many different vectors. If you take every possible combination of stretching, shrinking, and adding a, b, and c, each resulting vector is called a linear combination of a, b, and c.
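Stretching and shrinking is just multiplying by a scalar, and a linear combination is the sum of the scaled vectors. A minimal sketch with NumPy (the vectors and scalars here are made-up values for illustration):

```python
import numpy as np

# Three example vectors in the plane (values chosen for illustration).
a = np.array([2.0, 1.0])
b = np.array([-1.0, 1.0])
c = np.array([0.0, 3.0])

# Stretching/shrinking = multiplying by a scalar; adding the scaled
# vectors gives one particular linear combination.
d = 1.5 * a + (-2.0) * b + 0.5 * c
print(d)  # → [5. 1.]
```

Varying the three scalars sweeps out every linear combination of a, b, and c.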

Linear Independence

We have 3 vectors a, b, c. Earlier we saw that linear combinations of these 3 vectors can give us many different vectors. Can we combine two of these vectors to get the third?
We see that we can describe vector a as a linear combination of vectors b and c.
In fact, b can be described with vectors a and c, and likewise for vector c.

This means that one of these vectors is redundant: the set is not linearly independent! If we remove one of our vectors, we can still describe the same space (by stretching, shrinking, and combining the remaining vectors) as we could with all 3. Note: if two vectors lie along the same line, they won't be linearly independent either!
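We can check this numerically: stack the vectors as columns of a matrix and look at its rank, the number of linearly independent columns. A sketch using the same illustrative vectors as before:

```python
import numpy as np

# Stack three plane vectors as columns of a matrix (example values).
a, b, c = np.array([2.0, 1.0]), np.array([-1.0, 1.0]), np.array([0.0, 3.0])
M = np.column_stack([a, b, c])

# Three vectors in 2D can never be linearly independent:
# the rank is at most 2, so one vector is redundant.
print(np.linalg.matrix_rank(M))  # → 2

# Dropping c leaves the same span, described by just two vectors.
print(np.linalg.matrix_rank(np.column_stack([a, b])))  # → 2
```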

Basis

Adding vectors v1 and v2 gets us vector v3.

By stretching and shrinking vectors v1 and v2, the resulting vector v3 can hit every point on this plot. We call the set of all of those points our vector space.

Vectors v1 and v2 here are our basis vectors. Basis vectors must be linearly independent. Here we see that there is no way to get vector v2 by stretching or shrinking v1.
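Finding how much to stretch each basis vector to reach a given point means solving a small linear system. A sketch, assuming v1 and v2 are the standard unit vectors (the plot's actual vectors may differ):

```python
import numpy as np

# Assumed basis vectors: the standard unit vectors.
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
B = np.column_stack([v1, v2])

# To reach a target point, solve B @ coeffs = target for the
# stretch/shrink factors on v1 and v2.
target = np.array([3.0, -2.0])
coeffs = np.linalg.solve(B, target)
print(coeffs)  # → [ 3. -2.]

# Sanity check: scaling and adding the basis vectors reproduces the point.
assert np.allclose(coeffs[0] * v1 + coeffs[1] * v2, target)
```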

Non Orthogonal Basis

Our vectors don't need to be orthogonal (crossing at 90°). v1 and v2 can still be basis vectors, since they are linearly independent: v3 can hit every grey point by stretching and shrinking v1 and v2.

Note: we don't have to stretch or shrink our vectors so that they land exactly on a grey point. They can stretch and shrink to land anywhere!
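A non-orthogonal basis works exactly the same way: as long as the vectors are independent, every point still has a unique pair of stretch factors. A sketch with an illustrative non-orthogonal pair (not the plot's actual vectors):

```python
import numpy as np

# Two non-orthogonal but linearly independent vectors (example values);
# their dot product is nonzero, so they don't meet at 90°.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])
print(v1 @ v2)  # → 1.0, so not orthogonal

# They still form a basis: every target has unique coordinates.
B = np.column_stack([v1, v2])
target = np.array([4.0, 3.0])
coeffs = np.linalg.solve(B, target)
print(coeffs)  # → [1. 3.]
```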

Kernel

The kernel of a linear transformation (red points) is the set of vectors that the transformation sends to 0.
When no dimensionality reduction happens (e.g. 2D stays 2D rather than collapsing to 1D), the only vector in the kernel is 0. 0 is always in the kernel.

The following is the matrix which describes this linear transformation. $$ \begin{bmatrix} 1&0\\ 2&0\\ \end{bmatrix}$$
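This matrix sends (x, y) to (x, 2x): the second column is all zeros, so the y component is thrown away. Every vector of the form (0, y) therefore maps to 0, and that line is the kernel. A quick check:

```python
import numpy as np

# The transformation from the text.
M = np.array([[1.0, 0.0],
              [2.0, 0.0]])

# Every vector on the y-axis, (0, y), is sent to the zero vector.
for y in [1.0, -3.0, 7.5]:
    print(M @ np.array([0.0, y]))  # → [0. 0.] each time

# And 0 itself is always in the kernel.
print(M @ np.zeros(2))  # → [0. 0.]
```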

Image

The image of a linear transformation (red points) is the space (span) that the transformation maps onto: the set of all possible outputs.

The following is the matrix which describes this linear transformation. $$ \begin{bmatrix} 1&0\\ 2&0\\ \end{bmatrix}$$
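Since this matrix sends (x, y) to (x, 2x), every output is a multiple of (1, 2), i.e. the span of the matrix's first column. A quick check:

```python
import numpy as np

M = np.array([[1.0, 0.0],
              [2.0, 0.0]])

# M @ (x, y) = (x, 2x): every output lies on the line y = 2x,
# the span of M's first column.
for v in [np.array([1.0, 5.0]), np.array([-2.0, 0.0])]:
    out = M @ v
    print(out)  # second entry is always twice the first
```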
