CS6910: Fundamentals of Deep Learning

Lecture 2: McCulloch Pitts Neuron, Thresholding Logic, Perceptrons, Perceptron Learning Algorithm and Convergence, Multilayer Perceptrons (MLPs), Representation Power of MLPs

Mitesh M. Khapra

IIT Madras

AI4Bharat

Learning Objectives

At the end of this lecture, students will have a good understanding of the following topics:

 

  • McCulloch Pitts Neuron

  • Thresholding Logic

  • Perceptrons

  • Perceptron Learning Algorithm and Convergence

  • Multilayer Perceptrons (MLPs)

  • Representation Power of MLPs

Module 2.1: Biological Neurons


The most fundamental unit of a deep neural network is called an artificial neuron

Why is it called a neuron ? Where does the inspiration come from ?

The inspiration comes from biology (more specifically, from the brain)

biological neurons = neural cells = neural processing units

We will first see what a biological neuron looks like

Artificial Neuron

[Figure: an artificial neuron — inputs \(x_1, x_2, x_3\) with weights \(w_1, w_2, w_3\), an aggregation/activation \(\sigma\), and output \(y_1\)]

dendrite: receives signals from other neurons

synapse: point of connection to other neurons

soma: processes the information

axon: transmits the output of this neuron

Biological Neurons*

*Image adapted from

https://cdn.vectorstock.com/i/composite/12,25/neuron-cell-vector-81225.jpg


Let us see a very cartoonish illustration of how a neuron works

Our sense organs interact with the outside world

They relay information to the neurons

The neurons (may) get activated and produce a response (laughter in this case)

There is a massively parallel interconnected network of neurons

Of course, in reality, it is not just a single neuron which does all this

The sense organs relay information to the lowest layer of neurons

Some of these neurons may fire (in red) in response to this information and in turn relay information to other neurons they are connected to

These neurons may also fire (again, in red) and the process continues eventually resulting in a response (laughter in this case)

An average human brain has around \(10^{11}\) (100 billion) neurons!

This massively parallel network also ensures that there is division of work

Each neuron may perform a certain role or respond to a certain stimulus


A simplified illustration:

  • fires if visual is funny

  • fires if speech style is funny

  • fires if text is funny

  • fires if at least 2 of 3 inputs fired

We illustrate this with the help of visual cortex (part of the brain) which deals with processing visual information

Starting from the retina, the information is relayed to several layers (follow the arrows)

We observe that the layers V1, V2 to AIT form a hierarchy (from identifying simple visual forms to high level objects)

The neurons in the brain are arranged in a hierarchy

Sample illustration of hierarchical processing*

________________________________________

*Idea borrowed from Hugo Larochelle’s lecture slides

Layer 1: detect edges & corners

Layer 2: form feature groups (eyes, nose, mouth)

Layer 3: detect high level objects, faces, etc.

  Disclaimer

I understand very little about how the brain works!

What you saw so far is an overly simplified explanation of how the brain works!

But this explanation suffices for the purpose of this course!






 

Module 2.2: McCulloch Pitts Neuron


McCulloch (neuroscientist) and Pitts (logician) proposed a highly simplified computational model of the neuron (1943)

\(g\) aggregates the inputs and the function \(f\) takes a decision based on this aggregation

The inputs \(x_1, x_2, ..., x_n \in \lbrace0,1\rbrace\) can be excitatory or inhibitory

\(y = 0\) if any \(x_i\) is inhibitory, else

$$g(x_1,x_2,...,x_n) = g(x) = \sum_{i=1}^{n} x_i$$

\(y=f(g(x)) = 1\)     if   \(g(x) \geq \theta\)

\(\qquad\qquad\;\;= 0\)     if   \(g(x) < \theta\)

\(\theta\) is called the thresholding parameter

This is called Thresholding Logic

[Figure: a McCulloch Pitts unit — inputs \(x_1, x_2, ..., x_n\) feed the aggregator \(g\), whose output passes through \(f\) to produce \(y \in \lbrace0,1\rbrace\)]

Let us implement some boolean functions using this McCulloch Pitts (MP) neuron ...

[Figure: a McCulloch Pitts unit with inputs \(x_1, x_2, x_3\), threshold \(\theta\), and output \(y \in \lbrace0,1\rbrace\)]

  • AND function (three inputs): fires only if all inputs are 1, so \(\theta = 3\)

  • OR function (three inputs): fires if at least one input is 1, so \(\theta = 1\)

  • \(x_1\) AND !\(x_2\)*: \(\theta = 1\), with \(x_2\) as an inhibitory input

  • NOR function: \(\theta = 0\), with both inputs inhibitory

  • NOT function: \(\theta = 0\), with the single input inhibitory

*circle at the end indicates inhibitory input: if any inhibitory input is 1 the output will be 0
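These gates can be checked with a tiny simulation of an MP unit. The helper below is a minimal sketch; the function name `mp_neuron` and the `inhibitory` argument (a tuple of input indices) are our own, not from the lecture:

```python
def mp_neuron(inputs, theta, inhibitory=()):
    # An MP unit outputs 0 if any inhibitory input is 1;
    # otherwise it fires iff the sum of the inputs reaches theta.
    if any(inputs[i] for i in inhibitory):
        return 0
    return 1 if sum(inputs) >= theta else 0

AND    = lambda x1, x2, x3: mp_neuron((x1, x2, x3), theta=3)
OR     = lambda x1, x2, x3: mp_neuron((x1, x2, x3), theta=1)
ANDNOT = lambda x1, x2: mp_neuron((x1, x2), theta=1, inhibitory=(1,))  # x1 AND !x2
NOR    = lambda x1, x2: mp_neuron((x1, x2), theta=0, inhibitory=(0, 1))
NOT    = lambda x1: mp_neuron((x1,), theta=0, inhibitory=(0,))
```

Enumerating all input combinations for each gate reproduces the truth tables above.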

Can any boolean function be represented using a McCulloch Pitts unit?

Before answering this question let us first see the geometric interpretation of an MP unit...

A single MP neuron splits the input points (4 points for 2 binary inputs) into two halves

Points lying on or above the line \(\sum_{i=1}^{n}\ x_i-\theta=0\) and points lying below this line

In other words, all inputs which produce an output 0 will be on one side \((\sum_{i=1}^{n}\ x_i<\theta\)) of the line and all inputs which produce an output 1 will lie on the other side \((\sum_{i=1}^{n}\ x_i\geq \theta\)) of this line

Let us convince ourselves about this with a few more examples (if it is not already clear from the math)

  • OR function: \(\theta = 1\), \(x_1 + x_2 = \sum_{i=1}^{2} x_i \geq 1\)

  • AND function: \(\theta = 2\), \(x_1 + x_2 = \sum_{i=1}^{2} x_i \geq 2\)

  • Tautology (always ON): \(\theta = 0\)

What if we have more than 2 inputs?

Well, instead of a line we will have a plane

For the OR function, we want a plane such that the point (0,0,0) lies on one side and the remaining 7 points lie on the other side of the plane

OR function with three inputs: \(\theta = 1\), and the separating plane is \(x_1 + x_2 + x_3 = \theta = 1\)

  The story so far ...

A single McCulloch Pitts Neuron can be used to represent boolean functions which are linearly separable

Linear separability (for boolean functions): There exists a line (plane) such that all inputs which produce a 1 lie on one side of the line (plane) and all inputs which produce a 0 lie on the other side of the line (plane)
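One way to convince yourself which boolean functions a single MP unit can represent is a brute-force search over thresholds. The sketch below (the helper `mp_threshold` is our own) only covers excitatory inputs, i.e., units of the form \(y = 1\) iff \(\sum_i x_i \geq \theta\):

```python
from itertools import product

def mp_threshold(f, n):
    # Return a threshold theta implementing boolean function f of n
    # excitatory inputs as y = [sum(x) >= theta], or None if none exists.
    for theta in range(n + 2):
        if all(f(*x) == (1 if sum(x) >= theta else 0)
               for x in product((0, 1), repeat=n)):
            return theta
    return None

print(mp_threshold(lambda a, b: a & b, 2))  # 2    (AND)
print(mp_threshold(lambda a, b: a | b, 2))  # 1    (OR)
print(mp_threshold(lambda a, b: a ^ b, 2))  # None (XOR is not linearly separable)
```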









 

Module 2.3: Perceptron


  The story ahead ...

What about non-boolean (say, real) inputs?

Do we always need to hand code the threshold?

Are all inputs equal? What if we want to assign more weight (importance) to some inputs?

What about functions which are not linearly separable?










 

Frank Rosenblatt, an American psychologist, proposed the classical perceptron model (1958)

Refined and carefully analyzed by Minsky and Papert (1969) - their model is referred to as the perceptron model here

A more general computational model than McCulloch–Pitts neurons

Main differences: Introduction of numerical weights for inputs and a mechanism for learning these weights

Inputs are no longer limited to boolean values

[Figure: a perceptron — inputs \(x_1, x_2, ..., x_n\) with weights \(w_1, w_2, ..., w_n\) feeding the output \(y\)]

\(y = 1\)   if \(\displaystyle\sum_{i=1}^n w_i*x_i \geq \theta\)

\(\;\;\,= 0\)   if \(\displaystyle\sum_{i=1}^n w_i*x_i < \theta\)

Rewriting the above,

\(y = 1\)   if \(\displaystyle\sum_{i=1}^n w_i*x_i - \theta \geq 0\)

\(\;\;\,= 0\)   if \(\displaystyle\sum_{i=1}^n w_i*x_i - \theta < 0\)

A more accepted convention,

\(y = 1\)   if \(\displaystyle\sum_{i=0}^n w_i*x_i \geq 0\)

\(\;\;\,= 0\)   if \(\displaystyle\sum_{i=0}^n w_i*x_i < 0\)

where, \(x_0 = 1\) and \(w_0 = -\theta\)

[Figure: a perceptron with the bias folded in — inputs \(x_0 = 1, x_1, ..., x_n\) with weights \(w_0 = -\theta, w_1, ..., w_n\) feeding the output \(y\)]
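In code, the convention above amounts to prepending \(x_0 = 1\) to the input and folding \(-\theta\) into the weight vector as \(w_0\). A minimal sketch (the function name is ours):

```python
import numpy as np

def perceptron_output(x, w):
    # w = [w_0, w_1, ..., w_n] with w_0 = -theta; x has n components.
    x = np.concatenate(([1.0], x))     # prepend x_0 = 1
    return 1 if np.dot(w, x) >= 0 else 0

# e.g. weights implementing 2-input OR (theta = 1):
w = np.array([-1.0, 1.1, 1.1])
```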

We will now try to answer the following questions:

Why are we trying to implement boolean functions?

Why do we need weights ?

Why is \(w_0 = -\theta\) called the bias ?

x1 = isActorDamon
x2 = isGenreThriller
x3 = isDirectorNolan

Consider the task of predicting whether we would like a movie or not

Suppose, we base our decision on 3 inputs (binary, for simplicity)

Based on our past viewing experience (data), we may give a high weight to isDirectorNolan as compared to the other inputs

Specifically, even if the actor is not Matt Damon and the genre is not thriller we would still want to cross the threshold \(\theta\) by assigning a high weight to isDirectorNolan

[Figure: a perceptron with inputs \(x_0 = 1, x_1, x_2, x_3\) and weights \(w_0 = -\theta, w_1, w_2, w_3\) feeding the output \(y\)]


\(w_0\) is called the bias as it represents the prior (prejudice)

A movie buff may have a very low threshold and may watch any movie irrespective of the genre, actor, director \([\theta=0]\)

On the other hand, a selective viewer may only watch thrillers starring Matt Damon and directed by Nolan \([\theta=3]\)

The weights \((w_1, w_2, ..., w_n)\) and the bias \((w_0)\) will depend on the data (viewer history in this case)
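With unit weights on the three inputs (a simplifying assumption of ours), the two viewers differ only in the threshold \(\theta\):

```python
def watches(is_damon, is_thriller, is_nolan, theta):
    # y = 1 iff x1 + x2 + x3 - theta >= 0, with unit weights (an assumption)
    return 1 if is_damon + is_thriller + is_nolan - theta >= 0 else 0

# movie buff: theta = 0, watches anything at all
# selective viewer: theta = 3, needs Damon AND thriller AND Nolan
```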

What kind of functions can be implemented using the perceptron? Any difference from McCulloch Pitts neurons?

All inputs which produce a 1 lie on one side and all inputs which produce a 0 lie on the other side

In other words, a single perceptron can only be used to implement linearly separable functions

Then what is the difference?

We will first revisit some boolean functions and then see the perceptron learning algorithm (for learning weights)

McCulloch Pitts Neuron

(assuming no inhibitory inputs)

\(y = 1\)   if \(\displaystyle\sum_{i=0}^n x_i \geq 0\)

\(\;\;\,= 0\)   if \(\displaystyle\sum_{i=0}^n x_i < 0\)

Perceptron

\(y = 1\)   if \(\displaystyle\sum_{i=0}^n \textcolor{red}{w_i}*x_i \geq 0\)

\(\;\;\,= 0\)   if \(\displaystyle\sum_{i=0}^n \textcolor{red}{w_i}*x_i < 0\)

From the equations it should be clear that even a perceptron separates the input space into two halves

The weights (including threshold) can be learned and the inputs can be real valued

For the OR function (truth table: \((0,0) \to 0\), \((1,0) \to 1\), \((0,1) \to 1\), \((1,1) \to 1\)), the weights must satisfy:

\((x_1, x_2) = (0,0)\): \(w_0 + w_1 \cdot 0 + w_2 \cdot 0 < 0 \implies w_0 < 0\)

\((x_1, x_2) = (1,0)\): \(w_0 + w_1 \cdot 1 + w_2 \cdot 0 \geq 0 \implies w_1 \geq -w_0\)

\((x_1, x_2) = (0,1)\): \(w_0 + w_1 \cdot 0 + w_2 \cdot 1 \geq 0 \implies w_2 \geq -w_0\)

\((x_1, x_2) = (1,1)\): \(w_0 + w_1 \cdot 1 + w_2 \cdot 1 \geq 0 \implies w_1 + w_2 \geq -w_0\)

One possible solution to this set of inequalities is \(w_0 = -1\), \(w_1 = 1.1\), \(w_2 = 1.1\) (and various other solutions are possible)

Note that we can come up with a similar set of inequalities and find the value of \(\theta\) for a McCulloch Pitts neuron also (Try it!)
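The proposed solution can be checked against the OR truth table directly; a quick sanity check:

```python
from itertools import product

w0, w1, w2 = -1, 1.1, 1.1   # one solution to the inequalities above

for x1, x2 in product((0, 1), repeat=2):
    y = 1 if w0 + w1 * x1 + w2 * x2 >= 0 else 0
    assert y == (x1 | x2)    # matches OR on all four inputs
```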

Module 2.4: Errors and Error Surfaces


Let us fix the threshold (\(w_0 = -1\)) and try different values of \(w_1\), \(w_2\)

Say, \(w_1 = -1\), \(w_2 = -1\)

What is wrong with this line? It misclassifies 3 of the 4 inputs

Let's try some more values of \(w_1\), \(w_2\) and note how many errors we make (with \(w_1 = 1.5\), \(w_2 = 0\), for example, we make an error on only 1 out of the 4 inputs)

\(w_1\)    \(w_2\)    errors

\(-1\)    \(-1\)    \(3\)

\(1.5\)    \(0\)    \(1\)

\(10\)    \(-10\)    \(2\)

We are interested in those values of \(w_0\), \(w_1\), \(w_2\) which result in 0 error

Let us plot the error surface corresponding to different values of \(w_0\), \(w_1\), \(w_2\)

For ease of analysis, we will keep \(w_0\) fixed at \(-1\) and plot the error for different values of \(w_1\), \(w_2\)

For a given \(w_0\), \(w_1\), \(w_2\) we will compute \(w_0+w_1*x_1+w_2*x_2\) for all combinations of \((x_1, x_2)\) and note down how many errors we make

For the OR function, an error occurs if \((x_1, x_2) = (0,0)\) but \(w_0+w_1*x_1+w_2*x_2 \geq 0\), or if \((x_1, x_2) \neq (0,0)\) but \(w_0+w_1*x_1+w_2*x_2 < 0\)

We are interested in finding an algorithm which finds the values of \(w_1\), \(w_2\) which minimize this error
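The error count described above is easy to compute exhaustively; the helper below (our own) lets any \((w_0, w_1, w_2)\) be probed:

```python
from itertools import product

def or_errors(w0, w1, w2):
    # Number of inputs on which y = [w0 + w1*x1 + w2*x2 >= 0] disagrees with OR
    return sum(
        (1 if w0 + w1 * x1 + w2 * x2 >= 0 else 0) != (x1 | x2)
        for x1, x2 in product((0, 1), repeat=2)
    )

# with w0 fixed at -1:
print(or_errors(-1, -1, -1))    # 3
print(or_errors(-1, 1.5, 0))    # 1
print(or_errors(-1, 10, -10))   # 2
print(or_errors(-1, 1.1, 1.1))  # 0
```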

Module 2.5: Perceptron Learning Algorithm


We will now see a more principled approach for learning these weights and threshold but before that let us answer this question...

Apart from implementing boolean functions (which does not look very interesting) what can a perceptron be used for?

Our interest lies in the use of perceptron as a binary classifier. Let us see what this means...

Let us reconsider our problem of deciding whether to watch a movie or not

Suppose we are given a list of \(m\) movies and a label (class) associated with each movie indicating whether the user liked this movie or not : binary decision

Further, suppose we represent each movie with \(n\) features (some boolean, some real valued)

We will assume that the data is linearly separable and we want a perceptron to learn how to make this decision

In other words, we want the perceptron to find the equation of this separating plane (or find the values of \(w_0,w_1,w_2,...,w_n\))

x1 = isActorDamon
x2 = isGenreThriller
x3 = isDirectorNolan

...

...

x4 = imdbRating

(scaled to 0 to 1)

xn = criticsRating

(scaled to 0 to 1)

[Figure: a perceptron over the movie features — inputs \(x_0 = 1, x_1, x_2, ..., x_n\) with weights \(w_0 = -\theta, w_1, w_2, ..., w_n\) feeding the output \(y\)]

Why would this work ?

To understand why this works we will have to get into a bit of Linear Algebra and a bit of geometry...

Algorithm: Perceptron Learning Algorithm

\(P \gets\) inputs with label 1;
\(N \gets\) inputs with label 0;
Initialize \(\text w\) randomly;
while !convergence do
    Pick random \(\text x \isin P \cup N\);
    if \(\text x \isin P\) and \(\sum_{i=0}^{n} w_i*x_i < 0\) then
        \(\text w = \text w + \text x\);
    end
    if \(\text x \isin N\) and \(\sum_{i=0}^{n} w_i*x_i \geq 0\) then
        \(\text w = \text w - \text x\);
    end
end
//the algorithm converges when all the inputs are classified correctly
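The pseudocode translates directly to NumPy. The sketch below sweeps over the points instead of sampling them randomly, and adds a `max_epochs` safety cap; both are our additions:

```python
import numpy as np

def perceptron_learning(P, N, max_epochs=1000, seed=0):
    # P: inputs with label 1; N: inputs with label 0 (bias coordinate excluded).
    # Assumes the data is linearly separable; otherwise stops after max_epochs.
    rng = np.random.default_rng(seed)
    data = [(np.array([1.0, *x]), 1) for x in P] + \
           [(np.array([1.0, *x]), 0) for x in N]     # prepend x_0 = 1
    w = rng.normal(size=data[0][0].shape)            # initialize w randomly
    for _ in range(max_epochs):
        converged = True
        for x, label in data:
            if label == 1 and np.dot(w, x) < 0:
                w = w + x                            # pull w towards x
                converged = False
            elif label == 0 and np.dot(w, x) >= 0:
                w = w - x                            # push w away from x
                converged = False
        if converged:    # all inputs classified correctly
            break
    return w

# learn the 2-input OR function
w = perceptron_learning(P=[(0, 1), (1, 0), (1, 1)], N=[(0, 0)])
```

The learned \(\text w = [w_0, w_1, w_2]\) satisfies \(\text w^\text T \text x \geq 0\) exactly on the three positive inputs.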

We are interested in finding the line \(\text w^\text T \text x=0\) which divides the input space into two halves

Every point (\(\text x\)) on this line satisfies the equation \(\text w^\text T \text x=0\)

What can you tell about the angle (\(\alpha\)) between \(\text w\) and any point (\(\text x\)) which lies on this line?

The angle is \(90^\circ\) (\(\because\) \(cos \alpha \)= \({w^Tx\over \parallel w \parallel \parallel x \parallel}\) = \(0\))

Since the vector \(\text w\) is perpendicular to every point on the line, it is actually perpendicular to the line itself

Consider two vectors \(\text w\) and \(\text x\)

\(\text w\) \(=[w_0,w_1,w_2,...,w_n]\)

\(\text x\) \(=[1,x_1,x_2,...,x_n]\)

\(\text w \sdot \text x = \text w^\text T \text x = \displaystyle\sum_{i=0}^n \ w_i * x_i\)

We can thus rewrite the perceptron rule as

\(y = 1\)       \(if           \text w^\text T \text x \geq 0\)

\( = 0\)       \(if           \text w^\text T \text x < 0\)

Consider some points (vectors) which lie in the positive half space of this line (i.e., \(\text w^\text T \text x>0\))

What will be the angle (\(\alpha\)) between any such vector and \(\text w\)? Obviously, less than \(90^\circ\)

What about points (vectors) which lie in the negative half space of this line (i.e., \(\text w^\text T \text x<0\))?

What will be the angle (\(\alpha\)) between any such vector and \(\text w\)? Obviously, greater than \(90^\circ\)

Of course, this also follows from the formula (\(cos \alpha \)= \({w^Tx\over \parallel w \parallel \parallel x \parallel}\))

Keeping this picture in mind let us revisit the algorithm

[Figure: positive points \(p_1, p_2, p_3\) and negative points \(n_1, n_2, n_3\) in the \((x_1, x_2)\) plane, separated by the line \(\text w^\text T \text x = 0\), with \(\text w\) perpendicular to it]


For \(\text x \isin P\), if \(\text w^\text T \text x < 0\) then it means that the angle (\(\alpha\)) between this \(\text x\) and the current \(\text w\) is greater than \(90^\circ\) (but we want it to be less than \(90^\circ\))

What happens to the new angle (\(\alpha_{new}\)) when \(\text w_{\text {new}} = \text w + \text x\)?

\(cos(\alpha_{new}) \varpropto (\text w_{\text {new}})^\text T \text x\)     (\(\because\) \(cos \alpha \)= \({w^Tx\over \parallel w \parallel \parallel x \parallel}\))

\(\qquad\qquad\;\, \varpropto (\text w+\text x)^\text T \text x\)

\(\qquad\qquad\;\, \varpropto \text w^\text T \text x + \text x^\text T \text x\)

\(\qquad\qquad\;\, \varpropto cos \alpha + \text x^\text T \text x\)

\(\therefore cos(\alpha_{new}) > cos \alpha\)

Thus \(\alpha_{new}\) will be less than \(\alpha\), which is exactly what we want
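The effect of the \(\text w + \text x\) correction can be seen numerically. In the toy example below (vectors chosen by us), a misclassified positive point's angle to \(\text w\) shrinks after one update; note that while \(\text w^\text T \text x\) always grows by exactly \(\text x^\text T \text x\), the cosine itself also depends on \(\parallel \text w_{new} \parallel\), which the slide's proportionality argument glosses over:

```python
import numpy as np

def cos_angle(w, x):
    return np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x))

w = np.array([-1.0, 0.5])    # current weights
x = np.array([1.0, 0.2])     # a positive point with w.x < 0 (misclassified)
assert np.dot(w, x) < 0

w_new = w + x                # the perceptron correction
print(cos_angle(w, x), cos_angle(w_new, x))   # cosine increases, angle shrinks
```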


For \(\text x \isin N\), if \(\text w^\text T \text x \geq 0\) then it means that the angle (\(\alpha\)) between this \(\text x\) and the current \(\text w\) is less than \(90^\circ\) (but we want it to be greater than \(90^\circ\))

What happens to the new angle (\(\alpha_{new}\)) when \(\text w_{\text {new}} = \text w - \text x\)?

\(cos(\alpha_{new}) \varpropto (\text w_{\text {new}})^\text T \text x\)     (\(\because\) \(cos \alpha \)= \({w^Tx\over \parallel w \parallel \parallel x \parallel}\))

\(\qquad\qquad\;\, \varpropto (\text w-\text x)^\text T \text x\)

\(\qquad\qquad\;\, \varpropto \text w^\text T \text x - \text x^\text T \text x\)

\(\qquad\qquad\;\, \varpropto cos \alpha - \text x^\text T \text x\)

\(\therefore cos(\alpha_{new}) < cos \alpha\)

Thus \(\alpha_{new}\) will be greater than \(\alpha\), which is exactly what we want

We will now see this algorithm in action for a toy dataset

We initialized \(\text w\) to a random value

We observe that currently \(\text w \sdot \text x < 0\) (\(\because\) angle \(\alpha > 90^\circ\)) for all the positive points, and \(\text w \sdot \text x \geq 0\) (\(\because\) angle \(\alpha < 90^\circ\)) for all the negative points (the situation is exactly the opposite of what we actually want)

We now run the algorithm by randomly going over the points

Randomly pick a point (say, \(p_1\)), apply correction \(\text w = \text w + \text x\) (\(\because\) \(\text w \sdot \text x < 0\)) (you can check the angle visually) 



Randomly pick a point (say, \(p_2\)), apply correction \(\text w = \text w + \text x\) (\(\because\) \(\text w \sdot \text x < 0\)) (you can check the angle visually) 


Randomly pick a point (say, \(n_1\)), apply correction \(\text w = \text w - \text x\) (\(\because\) \(\text w \sdot \text x \geq 0\)) (you can check the angle visually) 


Randomly pick a point (say, \(n_3\)), no correction
needed \(\because\ \text w \sdot \text x < 0\) (you can check the angle visually) 


Randomly pick a point (say, \(n_2\)), no correction
needed \(\because\ \text w \sdot \text x < 0\) (you can check the angle visually) 


Randomly pick a point (say, \(p_3\)), apply correction \(\text w = \text w + \text x\) (\(\because\) \(\text w \sdot \text x < 0\)) (you can check the angle visually) 


Randomly pick a point (say, \(p_1\)), no correction
needed \(\because\ \text w \sdot \text x \geq 0\) (you can check the angle visually) 


Randomly pick a point (say, \(p_2\)), no correction
needed \(\because\ \text w \sdot \text x \geq 0\) (you can check the angle visually) 


Randomly pick a point (say, \(n_1\)), no correction
needed \(\because\ \text w \sdot \text x < 0\) (you can check the angle visually) 


Randomly pick a point (say, \(n_3\)), no correction
needed \(\because\ \text w \sdot \text x < 0\) (you can check the angle visually) 


Randomly pick a point (say, \(n_2\)), no correction
needed \(\because\ \text w \sdot \text x < 0\) (you can check the angle visually) 


Randomly pick a point (say, \(p_3\)), no correction
needed \(\because\ \text w \sdot \text x \geq 0\) (you can check the angle visually) 
