mrdbourke / pytorch-deep-learning

Materials for the Learn PyTorch for Deep Learning: Zero to Mastery course.
https://learnpytorch.io
MIT License
11.14k stars 3.27k forks

Decision boundaries not being plotted #1051

Open g-abilio opened 3 months ago

g-abilio commented 3 months ago

Problem:

When using the plot_decision_boundary function from the helper_functions module, the decision boundaries are not plotted, resulting in the following final plot:

[Screenshot: Screen Shot 2024-08-21 at 15 40 52]

Reproduction:

To reproduce, just follow the steps on the course website.

mrdbourke commented 3 months ago

Hi @g-abilio ,

I just ran the code from https://www.learnpytorch.io/02_pytorch_classification/#2-building-a-model in Google Colab and got the following result:

[Screenshot: Screenshot 2024-08-22 at 8 51 46 PM]

Which part of the code are you running to get that issue?

Are you running this code to get the plot_decision_boundary function?

import requests
from pathlib import Path 

# Download helper functions from Learn PyTorch repo (if not already downloaded)
if Path("helper_functions.py").is_file():
  print("helper_functions.py already exists, skipping download")
else:
  print("Downloading helper_functions.py")
  request = requests.get("https://raw.githubusercontent.com/mrdbourke/pytorch-deep-learning/main/helper_functions.py")
  with open("helper_functions.py", "wb") as f:
    f.write(request.content)

from helper_functions import plot_predictions, plot_decision_boundary
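
In case it helps with debugging, here is a rough sketch of what a decision boundary plotter like plot_decision_boundary does internally (this is illustrative, not the actual helper code; it assumes a binary classifier on 2D inputs): build a grid over the feature space, predict a class for every grid point, and contour-plot the result.

```python
import numpy as np
import torch

def sketch_decision_grid(model, X, steps=101):
    # Build a grid spanning the data range (plus a small margin)
    x_min = (X[:, 0].min() - 0.1).item()
    x_max = (X[:, 0].max() + 0.1).item()
    y_min = (X[:, 1].min() - 0.1).item()
    y_max = (X[:, 1].max() + 0.1).item()
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, steps),
                         np.linspace(y_min, y_max, steps))

    # Stack the grid into a (steps*steps, 2) batch of 2D points
    grid = torch.from_numpy(
        np.column_stack((xx.ravel(), yy.ravel()))).float()

    model.eval()
    with torch.inference_mode():
        logits = model(grid)
        # Binary case: threshold the sigmoid of the raw logits
        preds = torch.round(torch.sigmoid(logits)).reshape(xx.shape)

    return xx, yy, preds  # pass to plt.contourf(xx, yy, preds, alpha=0.7)
```

If the model predicts the same class for every grid point, contourf has only one region to fill, which looks like "no boundary" in the plot.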
g-abilio commented 3 months ago

Hi, @mrdbourke! Thanks for the response.

The first decision boundary plot (in section 4) is the one giving rise to this problem. I used the plot_decision_boundary function imported from the helper_functions module as usual, having downloaded the module into a subdirectory of my project.

For more context: I obtained a static loss and a static accuracy of 0.5 (50%), for both training and testing, which is close to the expected starting behavior. However, these metrics didn't change at all; they remained completely static for 1000 epochs. I don't know whether this can affect the plot, but, to my knowledge, even 50% accuracy should produce a decision boundary: a line that splits the dots into two areas evenly.
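
A loss that stays exactly flat for 1000 epochs usually means gradients are not flowing at all. A quick sanity check (a minimal sketch; the model shape and data here are illustrative, not the course's exact model) is to run one backward pass and inspect the parameter gradients:

```python
import torch
from torch import nn

torch.manual_seed(42)

# Illustrative stand-in for a small binary classifier
model = nn.Sequential(nn.Linear(2, 5), nn.ReLU(), nn.Linear(5, 1))
loss_fn = nn.BCEWithLogitsLoss()

X = torch.randn(8, 2)
y = torch.randint(0, 2, (8, 1)).float()

loss = loss_fn(model(X), y)
loss.backward()

# If every gradient sum prints as 0.0, something between the model
# output and the loss is blocking gradients (for example a
# non-differentiable op such as torch.round)
total_grad = 0.0
for name, p in model.named_parameters():
    g = p.grad.abs().sum().item()
    total_grad += g
    print(name, g)
```

If total_grad is zero, the optimizer has nothing to apply, and the loss and accuracy will stay frozen no matter how many epochs you train.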

mrdbourke commented 3 months ago

Hey @g-abilio ,

Hmmm this is strange.

I'm not 100% sure what might be happening. If you are running code similar to the notebook/videos, the loss metrics should eventually go down (I check this code and these models regularly).

Have you managed to figure out what might be the issue?

Or did you manage to fix your decision boundary plot?

pritesh2000 commented 3 months ago

Hey @g-abilio,

Can you provide a link to your Colab notebook or any other notebook?

It is always better to double-check.

darshilmistry commented 1 month ago

hello @pritesh2000,

I have been facing the same issue as @g-abilio. Please check my notebook in my repository.

https://github.com/darshilmistry/PyTorchTutorials/blob/main/ClasssificationTutorial.ipynb

update:

I had some problems with my computer (which is quite bad, to be honest 😅), so I had to restart it, reopen the notebook, and re-run it from scratch. After I did this, I saw that the graphs we are speaking of were plotted correctly. But when I tried to re-run the notebook again, the graph broke and it was the same old problem.

I then started changing the random states, but nothing worked particularly well.

Finally, I increased the number of epochs and saw some change.

So my working theory is that increasing the number of training epochs makes the chart look as expected. I had to train for 500 epochs, after which the decision boundary almost intersects the center of the circles.
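
One way to sanity-check this theory is to log the loss at regular intervals: if it is still dropping at the final checkpoint, more epochs will likely help. A minimal sketch (the data here is a toy stand-in for the course's circles dataset, and the model shape is illustrative):

```python
import torch
from torch import nn

torch.manual_seed(42)

# Toy stand-in for circle-shaped data: label = 1 if inside the unit circle
X = torch.randn(200, 2)
y = ((X ** 2).sum(dim=1, keepdim=True) < 1).float()

model = nn.Sequential(nn.Linear(2, 10), nn.ReLU(),
                      nn.Linear(10, 10), nn.ReLU(),
                      nn.Linear(10, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

history = []
for epoch in range(500):
    loss = loss_fn(model(X), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 100 == 0:
        history.append(loss.item())
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```

If the logged loss is still decreasing between the last two checkpoints, training longer should keep sharpening the decision boundary.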

pritesh2000 commented 1 month ago


@darshilmistry I ran the provided notebook in Colab and the result was like this:

[Screenshot: resulting plot]

Are you using Jupyter or Colab?

g-abilio commented 1 month ago

Hi, @pritesh2000 and @mrdbourke!

I've actually realized that the problem with my boundary plot was that I was not passing the logits to the loss function... I was rounding them and passing that processed output instead, which explains the weird behavior I was seeing. Fixing this and passing the actual logits resolved the problem.

@darshilmistry seems to have no errors in his training, since the correct boundary plot is displayed along with a good final accuracy after introducing non-linearity. The fix in his case was probably, as he stated, increasing the number of epochs.
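
For anyone hitting the same symptom, here is a minimal sketch of the bug described above (the model and data are illustrative): torch.round has zero gradient almost everywhere, so rounding the output before the loss silently kills learning, while nn.BCEWithLogitsLoss expects the raw logits.

```python
import torch
from torch import nn

torch.manual_seed(0)

model = nn.Linear(2, 1)
loss_fn = nn.BCEWithLogitsLoss()  # expects raw logits, not probabilities
X = torch.randn(16, 2)
y = torch.randint(0, 2, (16, 1)).float()

# Buggy version: round the predictions, then compute the loss.
# torch.round's gradient is zero everywhere, so nothing can learn.
bad_loss = loss_fn(torch.round(torch.sigmoid(model(X))), y)
bad_loss.backward()
bad_grad = model.weight.grad.abs().sum().item()
print("grad with rounded preds:", bad_grad)  # 0.0

# Correct version: pass the raw logits straight to the loss
model.zero_grad()
good_loss = loss_fn(model(X), y)
good_loss.backward()
good_grad = model.weight.grad.abs().sum().item()
print("grad with raw logits:", good_grad)  # non-zero, training can proceed
```

With zero gradients, the optimizer never updates the weights, which is exactly why the loss and accuracy stayed frozen at 50% for 1000 epochs.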