# Norms

A norm generalizes the notion of "length" to vectors, matrices, and tensors.

A norm is any function $$f$$ that satisfies the following three conditions:

* Positive definiteness: $$f(x) = 0 \Rightarrow x = 0$$
* Triangle inequality: $$f(x+y) \le f(x) + f(y)$$
* Absolute homogeneity: $$f(\alpha x) = |\alpha|f(x)$$

There are two main reasons to use norms:

* To estimate how "big" a vector, matrix, or tensor is.
* To estimate how "close" one tensor is to another:
  * $$||\Delta v|| = ||v_1 - v_2||$$
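
As a quick sketch of the second use (using NumPy; the text itself names no library), the closeness of two vectors is the norm of their difference:

```python
import numpy as np

# Two vectors whose closeness we want to measure
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([1.0, 2.0, 5.0])

# ||Δv|| = ||v1 - v2||: the Euclidean distance between them
distance = np.linalg.norm(v1 - v2)
print(distance)  # → 2.0
```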

## Euclidean Norm

The Euclidean norm is given by the following equation:

$$
||v||_2 = (v_1^2+v_2^2 + \cdots + v_n^2)^{1/2}
$$

It is also called the 2-norm or L2 norm. A vector of length one is said to be a unit vector.
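
A minimal NumPy sketch (NumPy is an assumption, not named in the text) of the 2-norm and of scaling a vector to unit length:

```python
import numpy as np

v = np.array([3.0, 4.0])

# 2-norm: (3^2 + 4^2)^(1/2) = 5
norm = np.linalg.norm(v)
print(norm)  # → 5.0

# Dividing a vector by its norm yields a unit vector (length one)
u = v / norm
print(np.linalg.norm(u))  # ≈ 1.0
```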

There is a relationship between the Euclidean norm and the dot product.

$$
v^Tv = \begin{pmatrix}
v_0 \\
v_1 \\
\vdots \\
v_{n-1}
\end{pmatrix}^T
\begin{pmatrix}
v_0 \\
v_1 \\
\vdots \\
v_{n-1}
\end{pmatrix}
= \begin{pmatrix}
v_0 & v_1 & \cdots & v_{n-1}
\end{pmatrix}
\begin{pmatrix}
v_0 \\
v_1 \\
\vdots \\
v_{n-1}
\end{pmatrix}
= v_0^2 + v_1^2 + \cdots + v_{n-1}^2
$$

Thus, we can observe that:

$$
||v||_2 = \sqrt{v^Tv}
$$

The length of a vector also equals the square root of the dot product of the vector with itself.
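
This equivalence can be checked directly; again using NumPy as an assumed library:

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])

# v^T v: the dot product of v with itself
vtv = v @ v
print(vtv)                # → 9.0

# Its square root is the Euclidean norm
print(np.sqrt(vtv))       # → 3.0
print(np.linalg.norm(v))  # → 3.0
```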

### Absolute Value of A Complex Number

Consider the following complex number:

$$
\chi = \chi_r + \chi_c i,
\hspace{2mm} \text{where} \hspace{2mm} i = \sqrt{-1}
$$

Then, the absolute value of the complex number is given by

$$
|\chi| = \sqrt{\chi_r^2 + \chi_c^2} = \sqrt{\overline{\chi}\chi},
\hspace{2mm} \text{where} \hspace{2mm} \overline{\chi} = \chi_r - \chi_c i
$$

$$\overline{\chi}$$ is also called the conjugate of $$\chi$$. The absolute value of a complex number has the following properties:

* $$\chi \neq 0 \Rightarrow |\chi| > 0$$. This means that the absolute value is positive definite.
* $$|\alpha\chi| = |\alpha||\chi|$$. This means that it is homogeneous.
* $$|\chi + \psi| \leq |\chi| + |\psi|$$. This means that it obeys the triangle inequality.
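
These properties can be illustrated with Python's built-in `complex` type (a hypothetical illustration, not from the text):

```python
import math

# A sample complex number chi = chi_r + chi_c * i
chi = 3 + 4j

# |chi| = sqrt(chi_r^2 + chi_c^2) = sqrt(9 + 16)
print(abs(chi))  # → 5.0

# Equivalently, sqrt(conj(chi) * chi); the product is real and non-negative
print(math.sqrt((chi.conjugate() * chi).real))  # → 5.0

# Homogeneity: |alpha * chi| = |alpha| * |chi|
alpha = -2
print(abs(alpha * chi))  # → 10.0
```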

### Length of A 2D Vector

Let $$x \in \mathbb{R}^2$$ with $$x = \begin{pmatrix} \chi_0 \\ \chi_1 \end{pmatrix}$$. The (Euclidean) length of $$x$$ is given by:

$$
\sqrt{\chi_0^2 + \chi_1^2}
$$

### Slicing & Dicing: Dot Product

Consider the dot product of two vectors $$x$$ and $$y$$. It can be computed by:

* Partitioning the vectors so that corresponding sub-vectors have the same size.
* Taking the dot product of each pair of corresponding sub-vectors and adding the results together.

$$
x^Ty = \begin{pmatrix}
x_0 \\
\hline
x_1 \\
\hline
\vdots \\
\hline
x_{N-1}
\end{pmatrix}^T
\begin{pmatrix}
y_0 \\
\hline
y_1 \\
\hline
\vdots \\
\hline
y_{N-1}
\end{pmatrix}
= x^T_0y_0 + x^T_1y_1 + \cdots + x^T_{N-1}y_{N-1}
= \sum^{N-1}_{i=0} x^T_iy_i
$$
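
A small NumPy sketch of slicing and dicing (the split point chosen here is arbitrary):

```python
import numpy as np

x = np.arange(6.0)   # [0, 1, 2, 3, 4, 5]
y = np.ones(6)

# Partition both vectors at the same place (sub-vector sizes must match)
x0, x1 = x[:2], x[2:]
y0, y1 = y[:2], y[2:]

# The sum of the sub-vector dot products equals the full dot product
partitioned = x0 @ y0 + x1 @ y1
print(partitioned)   # → 15.0
print(x @ y)         # → 15.0
```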

### Algorithm: Dot Product

The following algorithm computes the dot product using slicing and dicing; here $$m(\cdot)$$ denotes the number of elements of a vector:

$$
\text{Partition} \hspace{1.5mm} x \rightarrow
\begin{pmatrix}
x_T \\
\hline
x_B
\end{pmatrix},
\hspace{2mm} y \rightarrow
\begin{pmatrix}
y_T \\
\hline
y_B
\end{pmatrix}
\\
\hspace{5mm} \text{where} \hspace{1.5mm} x_T \hspace{1.5mm} \text{and} \hspace{1.5mm} y_T \hspace{1.5mm} \text{have} \hspace{1.5mm} 0 \hspace{1.5mm} \text{elements}
\\
\alpha := 0
\\
\text{while} \hspace{1.5mm} m(x_T) < m(x) \hspace{1.5mm} \text{do}
\\
\hspace{5mm} \text{Repartition}
\\
\hspace{5mm}
\begin{pmatrix}
x_T \\
\cdots \\
x_B
\end{pmatrix}
\rightarrow
\begin{pmatrix}
x_0 \\
\cdots \\
\chi_1 \\
\hline
x_2
\end{pmatrix},
\hspace{2mm}
\begin{pmatrix}
y_T \\
\cdots \\
y_B
\end{pmatrix}
\rightarrow
\begin{pmatrix}
y_0 \\
\cdots \\
\psi_1 \\
\hline
y_2
\end{pmatrix}
\\
\hspace{5mm} \text{where} \hspace{1.5mm} \chi_1 \hspace{1.5mm} \text{has} \hspace{1.5mm} 1 \hspace{1.5mm} \text{row and} \hspace{1.5mm} \psi_1 \hspace{1.5mm} \text{has} \hspace{1.5mm} 1 \hspace{1.5mm} \text{row}
\\
\hspace{5mm} \alpha := \chi_1 \times \psi_1 + \alpha
\\
\hspace{5mm} \text{Continue with}
\\
\hspace{5mm}
\begin{pmatrix}
x_T \\
\cdots \\
x_B
\end{pmatrix}
\leftarrow
\begin{pmatrix}
x_0 \\
\hline
\chi_1 \\
\cdots \\
x_2
\end{pmatrix},
\hspace{2mm}
\begin{pmatrix}
y_T \\
\cdots \\
y_B
\end{pmatrix}
\leftarrow
\begin{pmatrix}
y_0 \\
\hline
\psi_1 \\
\cdots \\
y_2
\end{pmatrix}
\\
\text{endwhile}
$$
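
The loop above can be sketched in plain Python (an illustrative translation, not verbatim FLAME code):

```python
def dot(x, y):
    """Dot product following the partitioned algorithm: the boundary
    between the 'top' and 'bottom' parts moves down one element per
    iteration, and each step folds chi_1 * psi_1 into alpha."""
    assert len(x) == len(y)
    alpha = 0.0
    for chi_1, psi_1 in zip(x, y):
        alpha = chi_1 * psi_1 + alpha
    return alpha

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # → 32.0
```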

## 1-Norm

This is given by the following equation:

$$
||v||_1 = |v_1|+|v_2| + \cdots + |v_n|
$$
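
A NumPy sketch of the 1-norm (NumPy assumed, as before):

```python
import numpy as np

v = np.array([1.0, -2.0, 3.0])

# |1| + |-2| + |3| = 6
print(np.linalg.norm(v, ord=1))  # → 6.0
print(np.sum(np.abs(v)))         # → 6.0
```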

## p-Norm

This is given by the following equation:

$$
||v||_p = (|v_1|^p+|v_2|^p + \cdots + |v_n|^p)^{1/p},
\hspace{2mm} \forall p \ge 1
$$

## ∞-Norm

The ∞-norm is also called the max-norm. It is given by the following equation:

$$
||v||_\infty = \max(|v_1|, |v_2|, \ldots, |v_n|)
$$
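
A NumPy sketch of the ∞-norm (NumPy assumed):

```python
import numpy as np

v = np.array([1.0, -7.0, 3.0])

# max(|1|, |-7|, |3|) = 7
print(np.linalg.norm(v, ord=np.inf))  # → 7.0
print(np.max(np.abs(v)))              # → 7.0
```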

## Frobenius Norm

The Frobenius norm is defined for matrices:

$$
||A||_F = \left(\sum_{i, j} A_{i, j}^2\right)^{1/2}
$$

Note that $$||A||_F$$ and $$||A||_2$$ are in general not the same, as denoted by the below equation:

$$
||A||\_F \neq ||A||\_2
$$
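
A NumPy sketch showing the two norms disagree for a concrete matrix (NumPy assumed; here the 2-norm is the largest singular value):

```python
import numpy as np

# The 2x2 identity matrix
A = np.eye(2)

# Frobenius norm: square root of the sum of squared entries = sqrt(2)
fro = np.linalg.norm(A, ord='fro')
print(fro)  # ≈ 1.4142 (sqrt(2))

# 2-norm (largest singular value): for the identity this is 1
two = np.linalg.norm(A, ord=2)
print(two)  # ≈ 1.0
```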
