mrdbourke / pytorch-deep-learning

Materials for the Learn PyTorch for Deep Learning: Zero to Mastery course.
https://learnpytorch.io
MIT License

Notebook 03: Different shape outputs between PyTorch 1.10, 1.11, 1.12 #71

Open mrdbourke opened 2 years ago

mrdbourke commented 2 years ago

Hi all,

With PyTorch 1.11 (and 1.12), the trick that Daniel uses (hidden_units*7*7) doesn't work. I believe it worked because in 1.10 the output of conv_layer_2 is [1, 10, 7, 7]; flattening that gives 10*7*7 = 490 features, i.e. a shape of [1, 490], so setting in_features=hidden_units*7*7 solves it in 1.10.

In 1.11 and 1.12, however, the output of conv_layer_2 is [10, 7, 7]; flattening leaves only 7*7 = 49 features per row, i.e. a shape of [10, 49]. Hence you cannot fix the input with hidden_units*7*7 (which gives 490); it has to be simply 7*7.

Thus the linear layer becomes:

nn.Linear(in_features=7*7, out_features=output_shape)

Using this, the shapes match and it will work on a single image.

Yet when training, you will need the hidden_units*7*7 setup, as it won't work otherwise.

Originally posted by @aronvandepol in https://github.com/mrdbourke/pytorch-deep-learning/discussions/68
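
For context, here is a minimal sketch (assuming the notebook's nn.Flatten() layer, which flattens from start_dim=1 by default) that reproduces the 490-vs-49 discrepancy. It suggests the difference comes from whether the batch dimension is present, rather than from the PyTorch version (see @jlecomte's explanation below):

import torch
from torch import nn

flatten = nn.Flatten()  # default start_dim=1: everything after dim 0 is flattened

batched = torch.rand(1, 10, 7, 7)  # [batch_size, channels, height, width]
single = torch.rand(10, 7, 7)      # [channels, height, width] -- no batch dim

print(flatten(batched).shape)  # torch.Size([1, 490]) -> in_features=hidden_units*7*7
print(flatten(single).shape)   # torch.Size([10, 49]) -> appears to need in_features=7*7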

aronvandepol commented 2 years ago

I've run into something similar in the 03 exercises. Running a single dummy tensor torch.rand(size=(1, 28, 28)).unsqueeze(dim=0) as you did in the solutions caused shape errors for me (again 49 vs 490), but it did work in the training/test loops. I'm unsure where this comes from. Could it also be me misunderstanding something about shapes? Still learning, after all.

Note: the model architecture was exactly the same; even when running your code, it raised a shape error on single tensors.
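
For reference, a quick sanity check of the shapes involved (a minimal sketch; the exact model isn't needed to see the effect):

import torch

dummy = torch.rand(size=(1, 28, 28))  # [color_channels, height, width]
print(dummy.shape)                    # torch.Size([1, 28, 28])
print(dummy.unsqueeze(dim=0).shape)   # torch.Size([1, 1, 28, 28]) -- a batch of one image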

cm-awais commented 1 year ago

Hello, I ran into the same issue, then I used unsqueeze(dim=0) as suggested by aronvandepol. Can someone explain the reason for this problem? Thanks.

jlecomte commented 1 year ago

So, I just ran into the issue and spent a bit of time figuring out what was going on. Turns out, it's relatively simple, but it only became simple after banging my head against the keyboard a hundred times LOL! The key thing to understand is that the model is designed to work only with BATCHES.

So, if you feed it a batch, i.e. a tensor of shape [<batch_size>, <color_channels>, <height>, <width>], e.g., [32, 1, 28, 28], everything just works, and the output of the model is a tensor of shape [<batch_size>, <number_of_classes>], e.g., [32, 10] in this notebook.

Now, if you want to do a prediction on a single image, i.e. a tensor of shape [<color_channels>, <height>, <width>], it's not going to work. The trick is to artificially add a dimension to our image tensor before feeding it to the model: we turn the image tensor into a batch of 1 image, i.e. a tensor of shape [1, 1, 28, 28]. We do that with:

image.unsqueeze(0)

Now, the output of the model will be a batch containing 1 prediction, i.e. a tensor of shape [1, 10]. You can then call squeeze() to remove that leading batch dimension. So, here is the code:

model_X.to(device)
model_X.eval()  # switch to evaluation mode (affects layers like dropout/batch norm)
with torch.inference_mode():  # no gradient tracking needed for inference
    prediction = model_X(image.unsqueeze(0).to(device)).squeeze()
prediction

And the output will be a tensor of shape [10], and you can use argmax() (no dim argument is needed on a 1-D tensor) to get the index of the class with the highest score.
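
For example (where prediction is the squeezed 1-D logits tensor from the snippet above):

# prediction has shape [10], so argmax() over the whole tensor gives the class index
pred_class = prediction.argmax().item()  # .item() converts the 0-d tensor to a Python int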

Hopefully, this clarifies things. I think this issue can be closed (unless I missed something obvious, of course) but I do think that the course material may need to be updated to clarify this a bit.

Kashish-1426 commented 1 year ago


Thank you for making things simple; I was about to post the same question.