Vector Properties

Linear Combination

For a set of vectors $\{v^{(1)}, v^{(2)}, \dots, v^{(n)}\}$ and a set of scalars $\{\alpha_1, \alpha_2, \dots, \alpha_n\}$, where $v^{(i)} \in \mathbb{R}^m$ and $\alpha_i \in \mathbb{R}$, a linear combination is given by:

$$\sum_i \alpha_i v^{(i)} = \alpha_1 v^{(1)} + \alpha_2 v^{(2)} + \dots + \alpha_n v^{(n)}, \qquad \alpha_i \text{ is a scalar}$$

For example,

$$v_3 = v_1 + 2v_2 \;\Rightarrow\; v_3 = \begin{bmatrix} v_1 & v_2 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

Matrix multiplications can be interpreted in terms of linear combinations of columns. For example,

v=[v1v2]A=[1324]v×A=[v1+2v23v1+4v2]v = \begin{bmatrix} v_1 & v_2 \end{bmatrix} \hspace{1cm} A = \begin{bmatrix} 1 & 3\\ 2 & 4 \end{bmatrix} \\ v \times A = \begin{bmatrix} v_1+2v_2 & 3v_1+4v_2 \end{bmatrix}v=[v1​​v2​​]A=[12​34​]v×A=[v1​+2v2​​3v1​+4v2​​]

Cost

A linear combination can be computed as a sequence of AXPY ($y := \alpha x + y$) operations:

$$w := \alpha_1 v^{(1)} + \alpha_2 v^{(2)} + \dots + \alpha_n v^{(n)}$$

We note that the linear combination can be implemented as $n$ AXPY operations. This suggests that the cost is $n$ times the cost of an AXPY operation with vectors of size $m$: $n \times 2m = 2mn$ flops and approximately $n \times 3m = 3mn$ memops.

However, one can actually do better. The vector $w$ in the above equation is updated repeatedly. If this vector stays in the L1 cache of a computer, then it need not be repeatedly loaded from memory, and the cost becomes $m$ memops (to load $w$ into cache), and then for each AXPY operation approximately $m$ memops (to read $v^{(j)}$, ignoring the cost of reading $\alpha_j$). Then, once $w$ has been completely updated, it can be written back to memory. So the total cost related to accessing memory becomes $m + n \times m + m = (n + 2)m \approx mn$ memops.
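This AXPY-based computation can be sketched as follows (a minimal pure-Python illustration of the access pattern; real BLAS implementations are far more optimized):

```python
# Computing a linear combination as n AXPY updates: the accumulator w is
# touched on every iteration (and so stays "hot" in cache), while each
# vector v^(j) is streamed through memory exactly once.

def axpy(alpha, x, y):
    """In-place y := alpha*x + y (the AXPY operation)."""
    for i in range(len(y)):
        y[i] += alpha * x[i]

def linear_combination(alphas, vectors):
    m = len(vectors[0])
    w = [0.0] * m            # load w once
    for a, v in zip(alphas, vectors):
        axpy(a, v, w)        # one AXPY per vector: ~2m flops, ~m reads of v
    return w                 # write w back once

w = linear_combination([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]])
# w == [1.0, 2.0]
```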

Linear Combination of Unit Basis Vector

Given any $x \in \mathbb{R}^n$, this vector can always be written as a linear combination of the unit basis vectors.

$$x = \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{n-1} \end{pmatrix}$$

Then,

$$x = \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{n-1} \end{pmatrix} = x_0 \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + x_1 \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} + \dots + x_{n-1} \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} = x_0 e_0 + x_1 e_1 + \dots + x_{n-1} e_{n-1} = \sum_{i=0}^{n-1} x_i e_i$$
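A small sketch of this decomposition (pure Python, with an illustrative $x$):

```python
# Rebuild x as the sum of its components times the unit basis vectors e_i.

def unit_basis(i, n):
    """Return e_i in R^n: all zeros except a 1 in position i."""
    e = [0.0] * n
    e[i] = 1.0
    return e

x = [3.0, -1.0, 4.0]            # illustrative vector
n = len(x)

recon = [0.0] * n
for i in range(n):
    e = unit_basis(i, n)
    recon = [r + x[i] * c for r, c in zip(recon, e)]   # recon += x_i * e_i

# recon reproduces x exactly
```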

Slicing & Dicing: AXPY Operation

Let $\alpha \in \mathbb{R}$ and $x, y \in \mathbb{R}^n$. We partition (slice & dice) these vectors as:

$$x = \begin{pmatrix} x_0 \\ \hline x_1 \\ \hline \vdots \\ \hline x_{N-1} \end{pmatrix}, \qquad y = \begin{pmatrix} y_0 \\ \hline y_1 \\ \hline \vdots \\ \hline y_{N-1} \end{pmatrix}$$

Then,

$$\alpha x + y = \alpha \begin{pmatrix} x_0 \\ \hline x_1 \\ \hline \vdots \\ \hline x_{N-1} \end{pmatrix} + \begin{pmatrix} y_0 \\ \hline y_1 \\ \hline \vdots \\ \hline y_{N-1} \end{pmatrix} = \begin{pmatrix} \alpha x_0 + y_0 \\ \hline \alpha x_1 + y_1 \\ \hline \vdots \\ \hline \alpha x_{N-1} + y_{N-1} \end{pmatrix}$$

Algorithm: AXPY Operation

The following details the algorithm for the AXPY operation using slicing & dicing:

$$
\begin{aligned}
&\textbf{Partition } x \rightarrow \begin{pmatrix} x_T \\ \hline x_B \end{pmatrix}, \; y \rightarrow \begin{pmatrix} y_T \\ \hline y_B \end{pmatrix} \\
&\quad \text{where } x_T \text{ and } y_T \text{ have } 0 \text{ elements} \\
&\textbf{while } m(x_T) < m(x) \textbf{ do} \\
&\quad \textbf{Repartition} \\
&\qquad \begin{pmatrix} x_T \\ \hline x_B \end{pmatrix} \rightarrow \begin{pmatrix} x_0 \\ \chi_1 \\ \hline x_2 \end{pmatrix}, \;
\begin{pmatrix} y_T \\ \hline y_B \end{pmatrix} \rightarrow \begin{pmatrix} y_0 \\ \psi_1 \\ \hline y_2 \end{pmatrix} \\
&\quad \psi_1 := \alpha \chi_1 + \psi_1 \\
&\quad \textbf{Continue with} \\
&\qquad \begin{pmatrix} x_T \\ \hline x_B \end{pmatrix} \leftarrow \begin{pmatrix} x_0 \\ \hline \chi_1 \\ x_2 \end{pmatrix}, \;
\begin{pmatrix} y_T \\ \hline y_B \end{pmatrix} \leftarrow \begin{pmatrix} y_0 \\ \hline \psi_1 \\ y_2 \end{pmatrix} \\
&\textbf{end while}
\end{aligned}
$$
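The partitioned algorithm amounts to exposing one pair of elements $(\chi_1, \psi_1)$ per iteration until the "top" parts cover the whole vectors; a minimal pure-Python sketch:

```python
# Sliced AXPY: k counts m(x_T), the elements already moved into the top
# partition. Each iteration exposes chi_1 = x[k], psi_1 = y[k], performs
# the scalar update psi_1 := alpha*chi_1 + psi_1, then grows the top part.

def axpy_sliced(alpha, x, y):
    k = 0                              # m(x_T) starts at 0 elements
    while k < len(x):                  # while m(x_T) < m(x)
        chi_1, psi_1 = x[k], y[k]      # Repartition: expose scalars
        y[k] = alpha * chi_1 + psi_1   # psi_1 := alpha*chi_1 + psi_1
        k += 1                         # Continue with: top parts grow
    return y

y = axpy_sliced(2.0, [1.0, 2.0], [10.0, 10.0])
# y == [12.0, 14.0]
```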

Summary

A linear combination of vectors scales the individual vectors and then adds them. Linear combinations are foundational to linear algebra.

Span

The span of a set of vectors is the set of all vectors obtainable as a linear combination of the original vectors. For example, for $v_1 = (1, 0)$ and $v_2 = (0, 1)$, the span is given by $\alpha_1 v_1 + \alpha_2 v_2$, which is all of $\mathbb{R}^2$.

The span of all the columns of a matrix is called the column space. For example,

[122033]⟶α1(1,2,3)+α2(2,0,3)\begin{bmatrix} 1 & 2\\ 2 & 0\\ 3 & 3 \end{bmatrix} \longrightarrow \alpha_1(1,2,3) + \alpha_2(2,0,3)​123​203​​⟶α1​(1,2,3)+α2​(2,0,3)

Further, note that the equation $Ax = b$ has a solution only if $b$ lies in the column space of $A$.
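A small sketch of this membership test for the $3 \times 2$ example above (pure Python; the `in_column_space` helper and the test values of $b$ are illustrative):

```python
# For the example matrix with columns (1, 2, 3) and (2, 0, 3), b is in the
# column space iff the coefficients forced by the first two equations also
# satisfy the third, i.e. Ax = b is consistent.

def in_column_space(b, tol=1e-9):
    # rows 1-2 give: a1 + 2*a2 = b[0]  and  2*a1 = b[1]
    a1 = b[1] / 2.0
    a2 = (b[0] - a1) / 2.0
    return abs(3 * a1 + 3 * a2 - b[2]) < tol   # check the third row

in_column_space([3.0, 2.0, 6.0])   # (1,2,3) + (2,0,3): in the column space
in_column_space([1.0, 0.0, 0.0])   # no combination matches
```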

Linear Independence

A set of vectors is linearly independent if none of these vectors can be written as a linear combination of the other vectors.

For example,

  • $v_1 = (1, 0), v_2 = (0, 1)$ are linearly independent.

  • $v_1 = (1, 0), v_2 = (0, 1), v_3 = (3, 4)$ are linearly dependent, as $v_3 = 3v_1 + 4v_2$.

Mathematically, $S = \{v_1, v_2, \dots, v_n\}$ is linearly independent if the only solution to the linear combination

$$\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n = 0$$

is $\alpha_i = 0$ for all $i$.
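One common way to test linear independence is to check whether the matrix whose rows are the vectors has full rank; a minimal pure-Python sketch (the `rank` routine below is an illustrative, unoptimized Gaussian elimination, not production code):

```python
# A set of vectors is linearly independent iff the matrix with those
# vectors as rows has rank equal to the number of vectors.

def rank(rows, tol=1e-9):
    """Rank via forward Gaussian elimination with row swaps."""
    rows = [list(r) for r in rows]     # work on a copy
    r = 0
    for c in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows))
                      if abs(rows[i][c]) > tol), None)
        if pivot is None:
            continue                    # no pivot in this column
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def independent(vectors):
    return rank(vectors) == len(vectors)

independent([[1, 0], [0, 1]])            # independent
independent([[1, 0], [0, 1], [3, 4]])    # dependent: v3 = 3*v1 + 4*v2
```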
