pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

dynamo test (test_model_output.py) failing on cpu devices because of cuda hardcoding for the device #125760

Open snadampal opened 6 months ago

snadampal commented 6 months ago

🐛 Describe the bug

CI test dynamo/test_model_output.py is failing on the aarch64 platform because the device is hardcoded to "cuda" in one of the subtests.

Reproducer: python test/dynamo/test_model_output.py -k test_HF_bert_model_output

Error log:

___________________________________________________________ TestModelOutput.test_HF_bert_model_output ___________________________________________________________
Traceback (most recent call last):
 File "/home/ubuntu/pytorch/test/dynamo/test_model_output.py", line 232, in test_HF_bert_model_output
  sequence_output = torch.rand(1, 12, 768).to("cuda")
 File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
  raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
To execute this test, run the following from the base repo dir:
   python test/dynamo/test_model_output.py -k test_HF_bert_model_output
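The usual fix for this class of failure is to select the device at runtime instead of hardcoding "cuda". A minimal sketch of that pattern (the helper name `pick_device` is hypothetical, and the boolean flag stands in for torch.cuda.is_available(), so this snippet runs without torch installed):

```python
# Hypothetical helper illustrating a device-agnostic test setup.
# In the real test, torch.cuda.is_available() would replace the
# `cuda_available` argument, e.g.:
#   device = "cuda" if torch.cuda.is_available() else "cpu"
#   sequence_output = torch.rand(1, 12, 768).to(device)
def pick_device(cuda_available: bool) -> str:
    """Return "cuda" when a CUDA device is usable, else fall back to "cpu"."""
    return "cuda" if cuda_available else "cpu"

# On a CPU-only aarch64 box the fallback is taken:
print(pick_device(False))
```

With this pattern the subtest exercises the same code path on CUDA machines while no longer asserting on CPU-only builds.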

Versions

Platform: aarch64 Linux (without CUDA)
OS: Ubuntu 22.04
Torch: mainline

cc @ezyang @msaroufim @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng

snadampal commented 6 months ago

I've raised this PR to fix it: https://github.com/pytorch/pytorch/pull/125761