Solve with Instructors

Week - 1

Arun Prakash A

1. Density Estimation

\( L = -\sum_i \log P(x_i) \)
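For reference, a small Python/NumPy sketch (not part of the original material) of how this loss is evaluated; the sample values and the choice of a Gaussian density fitted by maximum likelihood are illustrative assumptions.

```python
# Minimal sketch: evaluating L = -sum_i log P(x_i) for a fitted density P.
# The sample and the Gaussian model below are illustrative assumptions.
import numpy as np

x = np.array([2.1, 1.9, 2.4, 2.0, 2.2])           # any small 1-D sample

mu, sigma = x.mean(), x.std()                      # maximum-likelihood Gaussian fit
log_p = -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

L = -np.sum(log_p)                                 # negative log-likelihood
print(f"Negative log-likelihood L = {L:.3f}")
```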

1. Naviraa goes for a walk every day and records the number of steps he covers using a pedometer. The following table shows the data recorded over a week. He wants to know how many steps he will cover the next day. Which ML model is more suitable here?

Day Steps
05.09.2021 5800
06.09.2021 5945
07.09.2021 4880
08.09.2021 6120
09.09.2021 6430
10.09.2021 4640
11.09.2021 5980
12.09.2021 ?
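The quantity to predict is a continuous step count, which points to a regression-style model. Below is a minimal Python sketch; NumPy and the simple linear trend over the day index are our choices, not a prescribed solution.

```python
# Minimal sketch (illustrative): treat the step counts as a regression target
# and fit a linear trend on the day index to forecast the next day.
import numpy as np

steps = np.array([5800, 5945, 4880, 6120, 6430, 4640, 5980], dtype=float)  # 05.09 - 11.09
days = np.arange(1, len(steps) + 1)                 # day index 1..7

slope, intercept = np.polyfit(days, steps, deg=1)   # least-squares line fit
forecast = slope * 8 + intercept                     # prediction for 12.09.2021

print(f"Forecast for day 8: {forecast:.0f} steps")
```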

2. A behaviour analyst decided to study the emotional state of his spouse at the end of each day. He decided to observe and record various events (features) that happen to her each day. The table below shows the data collected over a week. Which ML model would you suggest to him?

Day        | Gone for Shopping | Housemaid Present | Gone for Walking | State of Emotion
05.09.2021 | Yes               | Yes               | Yes              | Happy
06.09.2021 | Yes               | No                | Yes              | Neutral
07.09.2021 | Yes               | No                | No               | Anger
08.09.2021 | No                | Yes               | Yes              | Happy
09.09.2021 | No                | Yes               | No               | Neutral
10.09.2021 | Yes               | No                | No               | Happy
11.09.2021 | No                | No                | No               | Anger
12.09.2021 | Yes               | Yes               | No               | ?
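The target here (state of emotion) is a discrete class, which points to a classification model. A minimal sketch follows; it assumes scikit-learn is available and encodes Yes/No as 1/0, and the decision tree is only one illustrative choice.

```python
# Minimal sketch (assumes scikit-learn; the library and model choice are ours):
# a decision tree classifier on the Yes/No features from the table.
from sklearn.tree import DecisionTreeClassifier

# Features: [Gone for Shopping, Housemaid Present, Gone for Walking], Yes=1 / No=0
X = [
    [1, 1, 1], [1, 0, 1], [1, 0, 0], [0, 1, 1],
    [0, 1, 0], [1, 0, 0], [0, 0, 0],
]
y = ["Happy", "Neutral", "Anger", "Happy", "Neutral", "Happy", "Anger"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[1, 1, 0]]))   # query for 12.09.2021: Yes, Yes, No
```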

3. Consider the following table that contains data points and their corresponding labels.

  • Plot these data points in a Cartesian coordinate system.

S.No x1 x2 y
1 0 0 0
2 0 1 1
3 1 0 1
4 1 1 1

[Plot: the four points (0,0), (0,1), (1,0), and (1,1) in the \(x_1\)–\(x_2\) plane.]
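Since the original figure is not reproduced here, a minimal plotting sketch (assuming matplotlib, which the source does not name) that recreates the scatter of the four points:

```python
# Minimal sketch: scatter of the four data points, coloured by label y.
import matplotlib.pyplot as plt

x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
y  = [0, 1, 1, 1]

plt.scatter(x1, x2, c=y, cmap="coolwarm", s=100)
for a, b in zip(x1, x2):
    plt.annotate(f"({a},{b})", (a, b), textcoords="offset points", xytext=(6, 6))
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Data points coloured by label y")
plt.show()
```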

  • Which ML model is more suitable here?

Classification

  • Come up with a linear separator and initialize the parameter values to (0.5, 0.5, 0.5):

\(0.5x_1 + 0.5x_2 + 0.5 = 0\)

[Plot: the separator \(0.5x_1 + 0.5x_2 + 0.5 = 0\) together with the four data points in the \(x_1\)–\(x_2\) plane.]

  • Make predictions with the \( \mathrm{sign}(\cdot) \) function

  • Compute the squared error loss
  • Is the linear model good enough?
S.No x1 x2 y  y~ SE
1 0 0 0 1 1
2 0 1 1 1 0
3 1 0 1 1 0
4 1 1 1 1 0

*SE = Squared Error

Loss = \( \frac{1}{4}(1 + 0 + 0 + 0) = 0.25 \)

Compared to what?
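A minimal NumPy sketch that reproduces the predictions and the loss computed above for the initial parameters (0.5, 0.5, 0.5):

```python
# Minimal sketch: predictions with sign(0.5*x1 + 0.5*x2 + 0.5) and the
# squared error loss for the four points, as worked out in the table above.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w = np.array([0.5, 0.5])   # weights
b = 0.5                    # bias

y_hat = np.sign(X @ w + b).astype(int)      # all four points fall on the positive side
loss = np.mean((y - y_hat) ** 2)            # squared error, averaged over the 4 points

print("predictions:", y_hat)                # [1 1 1 1]
print("loss:", loss)                        # 0.25 -- only (0,0) is misclassified
```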

4. Look at the graph below and answer the following questions

Recognise the type of ML problem from the graph.

Regression. (How?)

Which of these regression lines is the best one (in the MSE sense)?

\( y = x+1\)

\( y = 0.8x+0.8\)

Can't answer just by looking at the graphs.

Compute the squared error loss for both of these functions.
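A minimal sketch of that comparison; the data points below are placeholders, because the original graph is not reproduced here, and must be replaced with the (x, y) pairs read off the graph.

```python
# Minimal sketch: comparing the two candidate lines by mean squared error.
# NOTE: the (x, y) pairs below are placeholders -- replace them with the
# points read off the original graph, which is not reproduced here.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # placeholder inputs
y = np.array([2.0, 2.5, 3.9, 4.8])   # placeholder targets

for name, f in [("y = x + 1", lambda t: t + 1),
                ("y = 0.8x + 0.8", lambda t: 0.8 * t + 0.8)]:
    mse = np.mean((y - f(x)) ** 2)
    print(f"{name}: MSE = {mse:.3f}")
```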

5. The table below shows the original labels and the predicted labels (classes) for a multiclass classification problem. Compute the squared error loss and the 0-1 loss. Which loss function seems to be a good one?

Labels/Ground Truth | Predicted | Squared Difference | 0-1
1 | 1 | 0 | 0
2 | 4 | 4 | 1
3 | 1 | 4 | 1
4 | 4 | 0 | 0
1 | 4 | 9 | 1
2 | 2 | 0 | 0
3 | 3 | 0 | 0
4 | 1 | 9 | 1
1 | 1 | 0 | 0
1 | 1 | 0 | 0
1 | 1 | 0 | 0

SE : \(\frac{1}{11} \cdot 26 \approx 2.36\)

0-1 : \(\frac{1}{11} \cdot 4 \approx 0.36\) (about 36% of the points are misclassified)

The 0-1 loss is the more meaningful one here: class labels are just identifiers, so the size of a squared difference between two label numbers says nothing about how bad a misclassification is.
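A minimal NumPy sketch that reproduces both losses for the table above:

```python
# Minimal sketch: squared error loss vs 0-1 loss on the labels/predictions above.
import numpy as np

y_true = np.array([1, 2, 3, 4, 1, 2, 3, 4, 1, 1, 1])
y_pred = np.array([1, 4, 1, 4, 4, 2, 3, 1, 1, 1, 1])

se_loss = np.mean((y_true - y_pred) ** 2)        # (4 + 4 + 9 + 9) / 11 ~= 2.36
zero_one_loss = np.mean(y_true != y_pred)        # 4 / 11 ~= 0.36

print(f"squared error loss: {se_loss:.2f}")
print(f"0-1 loss:           {zero_one_loss:.2f}")
```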

6. Consider the encoder \(\mathbf{W}\) and decoder \(\mathbf{W}^T\) functions given below.

\(\mathbf{W} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}\)

\(\mathbf{W}^T = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}\)

Compress the data point \(\mathbf{x} = [1, 2, 3, 4]^T\) to obtain \(\mathbf{u}\), and reconstruct \(\mathbf{x'}\) from \(\mathbf{u}\).

How close is the reconstruction to the original?

\(\mathbf{u} = \mathbf{W}\mathbf{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}\)

\(\mathbf{x'} = \mathbf{W}^T\mathbf{u} = \begin{bmatrix} 1 \\ 2 \\ 0 \\ 0 \end{bmatrix}\)

\( L = \left\lVert \begin{bmatrix} 1 \\ 2 \\ 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \right\rVert^2 = 25 \)
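A minimal NumPy sketch that reproduces the compression, reconstruction, and reconstruction error above:

```python
# Minimal sketch: compress x with the encoder W, reconstruct with W^T,
# and measure the squared reconstruction error.
import numpy as np

W = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # encoder (2 x 4)
x = np.array([1, 2, 3, 4], dtype=float)

u = W @ x                                    # compressed representation: [1, 2]
x_rec = W.T @ u                              # reconstruction: [1, 2, 0, 0]
loss = np.sum((x_rec - x) ** 2)              # ||x' - x||^2 = 3^2 + 4^2 = 25

print("u =", u)
print("x' =", x_rec)
print("reconstruction error =", loss)
```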
