ajayvohra2005 opened 3 months ago
@zpcore can you take a look at this one? I suspect you can repro it with the CPU as well.
@JackCaoG, yes, the issue can be reproduced with the XLA CPU as well. Meanwhile, I tried the same code with the master branch and the issue doesn't exist, so it only affects the 2.3 release.
The simplest solution is to use the latest docker build us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.10_cuda_12.1_20240605. @ajayvohra2005, can you try this docker instead?
🐛 Bug
Using `torch.repeat` leads to a runtime error.

To Reproduce
Steps to reproduce the behavior:
Docker Image:
Python script to reproduce error:
Expected behavior
The script should run without error.
Environment
us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:r2.3.0_3.10_cuda_12.1
Additional context
Using `torch.expand` is a workaround.
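Since the original repro script isn't shown in the issue, here is a minimal sketch (with hypothetical shapes) of the difference between the two calls: `Tensor.repeat` materializes copies of the data, while `Tensor.expand` returns a broadcast view without copying, which is why it can sidestep a lowering bug specific to `repeat`.

```python
import torch

x = torch.tensor([[1.0, 2.0]])  # shape (1, 2)

# repeat: tiles the tensor, allocating new memory -> shape (3, 2)
r = x.repeat(3, 1)

# expand: broadcast view over the size-1 dim, no copy -> shape (3, 2)
e = x.expand(3, 2)

assert r.shape == e.shape == (3, 2)
assert torch.equal(r, e)
```

On XLA devices the same pattern applies; only the dimension arguments differ (`repeat` takes per-dim repeat counts, `expand` takes the target shape).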