Add Link
https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html

Describe the bug
Fails against 2.0 with the following error:

Traceback (most recent call last):
  File "/Users/svekars/repositories/tutorials2/tutorials/beginner_source/text_sentiment_ngrams_tutorial.py", line 281, in <module>
    train(train_dataloader)
  File "/Users/svekars/repositories/tutorials2/tutorials/beginner_source/text_sentiment_ngrams_tutorial.py", line 208, in train
    torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
  File "/Users/svekars/repositories/tutorials2/tutorials/venv/lib/python3.9/site-packages/torch/nn/utils/clip_grad.py", line 55, in clip_grad_norm_
    norms.extend(torch._foreach_norm(grads, norm_type))
NotImplementedError: Could not run 'aten::_foreach_norm.Scalar' with arguments from the 'SparseCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_foreach_norm.Scalar' is only available for these backends: [CPU, MPS, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher]

Link to CI: https://app.circleci.com/pipelines/github/pytorch/tutorials/7573/workflows/c05f4734-dcf0-4543-9b34-60fcf4153636/jobs/147996?invite=true#step-104-4015
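The failure can also be reproduced outside the tutorial with a few lines. This is a sketch, assuming the tutorial's EmbeddingBag is built with sparse=True, which is what produces the SparseCPU gradients seen in the traceback:

import torch
from torch import nn

# Minimal reproduction (hypothetical shapes): a sparse-gradient
# EmbeddingBag, as in the tutorial's model, followed by clipping.
emb = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4, sparse=True)
out = emb(torch.tensor([1, 2, 3]), torch.tensor([0]))
out.sum().backward()  # emb.weight.grad is a sparse CPU tensor

# On torch 2.0 this takes the foreach path and raises
# NotImplementedError for aten::_foreach_norm on the SparseCPU backend.
torch.nn.utils.clip_grad_norm_(emb.parameters(), 0.1)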
Possible fix:
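One possible workaround, sketched below (assuming the failure is specific to sparse gradients, as the SparseCPU backend in the error suggests): construct the model's EmbeddingBag with sparse=False so the gradients are dense and the foreach norm kernel has a CPU implementation to dispatch to. The class is a trimmed version of the tutorial's TextClassificationModel:

from torch import nn

class TextClassificationModel(nn.Module):
    """Trimmed tutorial model, with dense embedding gradients."""

    def __init__(self, vocab_size, embed_dim, num_class):
        super().__init__()
        # sparse=False (instead of the tutorial's sparse=True) keeps
        # embedding.weight.grad dense, so clip_grad_norm_'s foreach
        # path (torch._foreach_norm) can run on it.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=False)
        self.fc = nn.Linear(embed_dim, num_class)

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)

Since the tutorial optimizes with plain SGD, switching to dense gradients should change only the gradient representation, not the training behavior. An alternative might be to skip the fast path via clip_grad_norm_(..., foreach=False), but the slow path's per-tensor norm may also lack sparse support, so sparse=False looks like the safer change.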
Describe your environment
torch: 2.0
cc @pytorch/team-text-core @Nayef211