
Norms

A norm generalizes the notion of "length" to vectors, matrices, and tensors.

A norm is any function $f$ that satisfies the following three conditions:

  • Positive definiteness: $f(x) = 0 \implies x = 0$

  • Triangle inequality: $f(x+y) \le f(x) + f(y)$

  • Homogeneity: $f(\alpha x) = |\alpha| f(x)$

There are two reasons to use norms:

  • To estimate how "big" a vector, matrix, or tensor is.

  • To estimate how "close" one tensor is to another:

    • $||\triangle v|| = ||v_1 - v_2||$
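As a quick illustration of the second point, a minimal NumPy sketch (the vectors are made-up examples) measuring how close one vector is to another:

```python
import numpy as np

# Made-up example vectors, chosen only for illustration.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([1.0, 2.0, 5.0])

# "How close" v1 is to v2: the norm of the difference.
# np.linalg.norm defaults to the Euclidean 2-norm.
distance = np.linalg.norm(v1 - v2)
```

Here the vectors differ only in the last component, so the distance is simply $|3 - 5| = 2$.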

Euclidean Norm

The Euclidean norm is given by:

$$||v||_2 = (v_1^2 + v_2^2 + \dots + v_n^2)^{1/2}$$

It is also called the 2-norm or L2 norm. A vector of length one is said to be a unit vector.

There is a relation between the Euclidean norm and the dot product:

$$v^Tv = \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \end{pmatrix}^T \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \end{pmatrix} = \begin{pmatrix} v_0 & v_1 & \cdots & v_{n-1} \end{pmatrix} \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \end{pmatrix} = v_0^2 + v_1^2 + \dots + v_{n-1}^2$$

Thus, we can observe that:

$$||v||_2 = \sqrt{v^Tv}$$

The length of a vector also equals the square root of the dot product of the vector with itself.
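This identity is easy to verify numerically; a small NumPy sketch (the vector is an arbitrary example):

```python
import numpy as np

v = np.array([3.0, 4.0])  # arbitrary example vector

# The 2-norm computed directly by the library...
norm_direct = np.linalg.norm(v)
# ...and via the identity ||v||_2 = sqrt(v^T v).
norm_via_dot = np.sqrt(v @ v)
```

Both evaluate to 5 for this vector, since $\sqrt{3^2 + 4^2} = 5$.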

Absolute Value of A Complex Number

Consider the following complex number:

$$\chi = \chi_r + \chi_c i, \quad \text{where } i = \sqrt{-1}$$

Then, the absolute value of the complex number is given by:

$$|\chi| = \sqrt{\chi_r^2 + \chi_c^2} = \sqrt{\overline{\chi}\chi}, \quad \text{where } \overline{\chi} = \chi_r - \chi_c i$$

$\overline{\chi}$ is also called the conjugate of $\chi$. The absolute value of a complex number has the following properties:

  • $\chi \neq 0 \implies |\chi| > 0$. This means the norm is positive definite.

  • $|\alpha\chi| = |\alpha||\chi|$. This means the norm is homogeneous.

  • $|\chi + \psi| \leq |\chi| + |\psi|$. This means the norm obeys the triangle inequality.
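Python's built-in complex type makes these definitions concrete; a small sketch (the value of $\chi$ is an arbitrary example):

```python
import math

chi = 3.0 + 4.0j  # chi_r = 3, chi_c = 4 (arbitrary example)

# |chi| computed directly...
abs_direct = abs(chi)
# ...and via sqrt(conj(chi) * chi); the product (3 - 4j)(3 + 4j) = 25 is real.
abs_via_conjugate = math.sqrt((chi.conjugate() * chi).real)
```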

Length of A 2D Vector

Let $x \in \mathbb{R}^2$ and $x = \begin{pmatrix} \chi_0 \\ \chi_1 \end{pmatrix}$. The (Euclidean) length of $x$ is given by:

$$\sqrt{\chi_0^2 + \chi_1^2}$$

Slicing & Dicing: Dot Product

Consider the dot product of two vectors $x$ and $y$. The dot product can be computed by:

  • Partitioning the vectors so that corresponding sub-vectors have the same size.

  • Taking the dot product of the corresponding sub-vectors and adding the results together.

$$x^Ty = \begin{pmatrix} x_0 \\ \hline x_1 \\ \hline \vdots \\ \hline x_{N-1} \end{pmatrix}^T \begin{pmatrix} y_0 \\ \hline y_1 \\ \hline \vdots \\ \hline y_{N-1} \end{pmatrix} = x_0^Ty_0 + x_1^Ty_1 + \dots + x_{N-1}^Ty_{N-1} = \sum_{i=0}^{N-1} x_i^Ty_i$$

Algorithm: Dot Product

The following algorithm computes the dot product using slicing and dicing:

$$
\begin{aligned}
&\textbf{Partition } x \rightarrow \begin{pmatrix} x_T \\ \hline x_B \end{pmatrix}, \; y \rightarrow \begin{pmatrix} y_T \\ \hline y_B \end{pmatrix} \\
&\quad \text{where } x_T \text{ and } y_T \text{ have } 0 \text{ elements} \\
&\alpha := 0 \\
&\textbf{while } m(x_T) < m(x) \textbf{ do} \\
&\quad \textbf{Repartition} \\
&\quad \begin{pmatrix} x_T \\ \hline x_B \end{pmatrix} \rightarrow \begin{pmatrix} x_0 \\ \hline \chi_1 \\ \hline x_2 \end{pmatrix}, \;
\begin{pmatrix} y_T \\ \hline y_B \end{pmatrix} \rightarrow \begin{pmatrix} y_0 \\ \hline \psi_1 \\ \hline y_2 \end{pmatrix} \\
&\quad \text{where } \chi_1 \text{ and } \psi_1 \text{ each have } 1 \text{ row} \\
&\quad \alpha := \chi_1 \times \psi_1 + \alpha \\
&\quad \textbf{Continue with} \\
&\quad \begin{pmatrix} x_T \\ \hline x_B \end{pmatrix} \leftarrow \begin{pmatrix} x_0 \\ \chi_1 \\ \hline x_2 \end{pmatrix}, \;
\begin{pmatrix} y_T \\ \hline y_B \end{pmatrix} \leftarrow \begin{pmatrix} y_0 \\ \psi_1 \\ \hline y_2 \end{pmatrix} \\
&\textbf{endwhile}
\end{aligned}
$$
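A minimal pure-Python translation of the algorithm above, with the partition bookkeeping replaced by a simple loop (the function name `dot` is my own):

```python
def dot(x, y):
    """Dot product via slicing: peel one pair (chi_1, psi_1) off the
    bottom partitions each iteration, accumulating into alpha."""
    assert len(x) == len(y)
    alpha = 0  # alpha := 0
    # Each iteration moves one element from x_B/y_B into x_T/y_T,
    # stopping when m(x_T) = m(x), i.e. all elements are consumed.
    for chi_1, psi_1 in zip(x, y):
        alpha = chi_1 * psi_1 + alpha
    return alpha
```

For example, `dot([1, 2, 3], [4, 5, 6])` accumulates $1 \cdot 4 + 2 \cdot 5 + 3 \cdot 6 = 32$.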

1-Norm

This is given by the following equation:

$$||v||_1 = |v_1| + |v_2| + \dots + |v_n|$$
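NumPy computes this directly via the `ord` argument of `np.linalg.norm`; a quick check (the vector is an arbitrary example):

```python
import numpy as np

v = np.array([1.0, -2.0, 3.0])  # arbitrary example

one_norm = np.linalg.norm(v, 1)  # |1| + |-2| + |3|
```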

p-Norm

This is given by the following equation:

$$||v||_p = (|v_1|^p + |v_2|^p + \dots + |v_n|^p)^{1/p}, \quad \forall\, p \ge 1$$
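A small sketch of the general p-norm (the helper `p_norm` is my own; `np.linalg.norm` accepts the same `p` as its `ord` argument for cross-checking):

```python
import numpy as np

def p_norm(v, p):
    """(|v_1|^p + ... + |v_n|^p)^(1/p), assuming p >= 1."""
    return float((np.abs(v) ** p).sum() ** (1.0 / p))

v = np.array([3.0, -4.0])  # arbitrary example
```

Note that $p = 1$ and $p = 2$ recover the 1-norm and the Euclidean norm, respectively.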

∞-Norm

The ∞-norm is also called the max-norm. It is given by:

$$||v||_\infty = \max(|v_1|, |v_2|, \dots, |v_n|)$$
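In NumPy the max-norm is `np.linalg.norm` with `ord=np.inf`; a quick sketch (the vector is an arbitrary example):

```python
import numpy as np

v = np.array([1.0, -7.0, 3.0])  # arbitrary example

inf_norm = np.linalg.norm(v, np.inf)  # max(|1|, |-7|, |3|)
```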

Frobenius Norm

The Frobenius norm is calculated for matrices:

$$||A||_F = \left(\sum_{i,j} A_{i,j}^2\right)^{1/2}$$

Also, $||A||_F$ and $||A||_2$ are not the same in general:

$$||A||_F \neq ||A||_2$$

