Norms in Linear Algebra

Maaz_Bin_Asad
3 min read · Jul 24, 2020

There are countless applications where you need to find the length, or magnitude, of a vector, whether in a school physics problem, a real-life scenario, or Machine Learning itself. Vector magnitudes have helped solve many problems, especially in Machine Learning, where they have advanced the development of models. I will discuss one such application in this article: regularization. Let's dive into the norms of vectors.

Introduction

A norm is nothing but the magnitude, or length, of a vector in a vector space. A vector can be represented in space as in the figure below.

The black arrow represents the vector which has some length

This vector has some length in the vector space. Suppose the length of this vector is 5 units; then we say that the norm of this vector is 5 units.

How to find the norm of a vector

In this section, I will introduce three methods to find the norm of a vector. Yes! The one you might have studied in your school textbooks is not the only one.

L1 Norm

The first is the L1 norm of a vector, which is the Manhattan distance of the vector from the origin of the vector space. The geometry that uses Manhattan distance to measure distances between points is called taxicab geometry. Let's find the norm of the vector v = [ 1 2 3 ]. This vector can also be represented as i + 2j + 3k, where i, j, k are unit vectors along the three axes in space. The mathematical representation of the L1 norm is ||v||1. The norm of this vector is given by the sum of the absolute values of each scalar, which are 1, 2 and 3 in our case. Mathematically, ||v||1 = |1| + |2| + |3|, which gives 6. Notice that this is nothing but the Manhattan distance from the origin if we write the scalars as |1−0| + |2−0| + |3−0|. The L1 norm can be added to loss functions in Machine Learning to apply L1 regularization, which helps prevent overfitting in models. The equation that adds the L1 norm to the squared loss function is shown below.

We are just adding L1 norm to squared loss function
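For concreteness, here is a minimal NumPy sketch of the L1 norm and of an L1-regularized squared loss of the form loss = Σ(y − ŷ)² + λ Σ|wⱼ|; the function name and the penalty weight `lam` are illustrative choices, not something fixed by the article.

```python
import numpy as np

v = np.array([1, 2, 3])

# L1 norm: sum of absolute values, i.e. the Manhattan distance from the origin
l1 = np.sum(np.abs(v))                  # 6
assert l1 == np.linalg.norm(v, ord=1)   # same result via NumPy's built-in norm

# Illustrative sketch of an L1-regularized squared loss (lasso-style);
# `lam` is a hypothetical penalty weight, not specified in the article.
def l1_regularized_loss(y_true, y_pred, weights, lam=0.1):
    squared_loss = np.sum((y_true - y_pred) ** 2)
    return squared_loss + lam * np.sum(np.abs(weights))
```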

L2 Norm

Now comes the L2 norm. This is one of the most commonly used methods to find the norm of a vector. Taking the same example, let our vector be [ 1 2 3 ]. The L2 norm of this vector is represented by ||v||2, which equals sqrt(1² + 2² + 3²) ≈ 3.74. Of course, the value has changed with this method, but it still represents a norm. In this case, we calculate the Euclidean distance from the origin. For intuition, I can write the equation as sqrt((1−0)² + (2−0)² + (3−0)²). This should look familiar now. The regularization technique can be applied with this norm as well.

Again, just adding L2 norm to squared loss function
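Again as a sketch, here is the L2 norm in NumPy together with an L2-regularized (ridge-style) squared loss; note that the conventional ridge penalty uses the squared L2 norm, and the names here are illustrative.

```python
import numpy as np

v = np.array([1, 2, 3])

# L2 norm: square root of the sum of squares, i.e. the Euclidean distance from the origin
l2 = np.sqrt(np.sum(v ** 2))    # ~3.7417
# np.linalg.norm(v) computes the same value (ord=2 is the default)

# Illustrative sketch of an L2-regularized squared loss (ridge-style);
# ridge regression conventionally penalizes the *squared* L2 norm.
def l2_regularized_loss(y_true, y_pred, weights, lam=0.1):
    squared_loss = np.sum((y_true - y_pred) ** 2)
    return squared_loss + lam * np.sum(weights ** 2)
```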

The value of a norm cannot be negative, right? (Unless there is some relativity theory that refutes this point.)

Vector Max Norm

The third is the vector max norm, which takes the maximum of the absolute values of the given scalars as the norm. In our previous case, it is 3. For another example, for the vector v1 = [ 1 -4 2 ], it is 4, as the quick check below shows.
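A quick NumPy check of the max norm for both vectors (both ways of computing it are equivalent):

```python
import numpy as np

v  = np.array([1, 2, 3])
v1 = np.array([1, -4, 2])

# Max (infinity) norm: the largest absolute value among the components
print(np.max(np.abs(v)))                # 3
print(np.linalg.norm(v1, ord=np.inf))   # 4.0
```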

Conclusion

If you are asked to find the norm of a vector, just ask the questioner to specify which of the three methods above to use and apply it. You will find nothing but the length of that particular vector in the vector space.

