msaroufim closed this issue 1 year ago
Thanks for the feedback, I understand this is a bit confusing. We can't put our wheel on PyPI because the wheel size is too big (we might be able to fix that after the next release). The `neuron` package is owned by AWS, and they publish wheels for their hardware, AWS Trainium and AWS Inferentia.
In this PyTorch/XLA repo, we publish the wheels for TPU and GPU (both wheels also work with CPU). This is the master repo, so our releases are in sync with PyTorch's. The AWS folks usually update their wheel shortly after the most recent release.
I will close this issue for now if there are no follow-up questions. Thanks for your feedback!
I was trying to test out `torch_xla` locally on GPU for torchserve, so my first instinct was to `pip install torch_xla`, which showed me a warning that the PyPI package was deprecated and that I should instead use pip.repos.neuron.amazonaws.com. I went there and found that the most recent version was 1.13, which seemed weird, so I finally ended up here, where I found 2.0.