pytorch / tutorials

PyTorch tutorials.
https://pytorch.org/tutorials/
BSD 3-Clause "New" or "Revised" License

[BUG] - Text Classification tutorial fails against 2.0 #2245

svekars closed this issue 1 year ago

svekars commented 1 year ago

Add Link

https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html

Describe the bug

The tutorial fails against PyTorch 2.0 with the following error:

Traceback (most recent call last):
  File "/Users/svekars/repositories/tutorials2/tutorials/beginner_source/text_sentiment_ngrams_tutorial.py", line 281, in <module>
    train(train_dataloader)
  File "/Users/svekars/repositories/tutorials2/tutorials/beginner_source/text_sentiment_ngrams_tutorial.py", line 208, in train
    torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
  File "/Users/svekars/repositories/tutorials2/tutorials/venv/lib/python3.9/site-packages/torch/nn/utils/clip_grad.py", line 55, in clip_grad_norm_
    norms.extend(torch._foreach_norm(grads, norm_type))
NotImplementedError: Could not run 'aten::_foreach_norm.Scalar' with arguments from the 'SparseCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_foreach_norm.Scalar' is only available for these backends: [CPU, MPS, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher]

Link to CI: https://app.circleci.com/pipelines/github/pytorch/tutorials/7573/workflows/c05f4734-dcf0-4543-9b34-60fcf4153636/jobs/147996?invite=true#step-104-4015

Possible fix:

self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=False)
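A minimal sketch of why this works (the model names and sizes below are illustrative, not taken from the tutorial): with `sparse=False`, `nn.EmbeddingBag` produces dense gradients, so `torch.nn.utils.clip_grad_norm_` (which calls `torch._foreach_norm` internally) no longer dispatches to the unsupported SparseCPU backend.

```python
import torch
import torch.nn as nn

# Illustrative sizes, not from the tutorial
vocab_size, embed_dim = 100, 8

# The proposed fix: sparse=False yields dense gradients
embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=False)

# A dummy batch of token indices with two bags
text = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
offsets = torch.tensor([0, 4])

out = embedding(text, offsets)
out.sum().backward()

# Gradient clipping now succeeds because the gradient is dense
total_norm = torch.nn.utils.clip_grad_norm_(embedding.parameters(), 0.1)
print(embedding.weight.grad.is_sparse)  # False
```

With `sparse=True` the same `clip_grad_norm_` call raises the `NotImplementedError` shown in the traceback, since `aten::_foreach_norm.Scalar` has no SparseCPU kernel.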

Describe your environment

torch: 2.0

cc @pytorch/team-text-core @Nayef211

Nayef211 commented 1 year ago

I think setting sparse=False should be fine as a fix.