___________________________________________________________ TestModelOutput.test_HF_bert_model_output ___________________________________________________________
Traceback (most recent call last):
File "/home/ubuntu/pytorch/test/dynamo/test_model_output.py", line 232, in test_HF_bert_model_output
sequence_output = torch.rand(1, 12, 768).to("cuda")
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
To execute this test, run the following from the base repo dir:
python test/dynamo/test_model_output.py -k test_HF_bert_model_output
Versions
platform: aarch64 Linux (without CUDA)
OS: Ubuntu 22.04
torch mainline
🐛 Describe the bug
The CI test `dynamo/test_model_output.py` is failing on the aarch64 platform because the device is hardcoded to `"cuda"` in one of the subtests.
Reproducer:
python test/dynamo/test_model_output.py -k test_HF_bert_model_output
Error log: see the traceback above.
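A minimal sketch of a device-agnostic fix, assuming the subtest only needs the tensor on whichever device is available rather than CUDA specifically (variable names here are illustrative, not the actual test code):

```python
import torch

# Hypothetical fix sketch: select CUDA only when this torch build actually
# supports it, falling back to CPU on platforms such as aarch64 Linux
# without CUDA.
device = "cuda" if torch.cuda.is_available() else "cpu"

# The failing line in test_HF_bert_model_output uses a hardcoded
# .to("cuda"); this variant also runs on CPU-only builds.
sequence_output = torch.rand(1, 12, 768).to(device)
```

With this change the subtest would no longer raise `AssertionError: Torch not compiled with CUDA enabled` on CPU-only builds.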
cc @ezyang @msaroufim @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng