Prebuilt wheels for PyTorch packages with custom ops
I've created a repository that can build PyTorch wheels with custom ops through the GitHub Actions pipeline and publish them using GitHub Releases. Check it out at https://github.com/MiroPsota/torch_packages_builder.
Since there are various ways to use it, please refer to the repository README for more information.
If you prefer your own build or don't want to trust a third-party repository, feel free to fork it and build any package/version/commit ID you need yourself.
No support for pip cache: pip relies on HTTP caching, and GitHub generates on-the-fly redirects for release links, so the two probably don't play nicely together. If you need caching, I recommend hosting the wheels yourself.
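fairseq specific quick info:

Install using pip:

As a rough sketch, the command follows this pattern (the index URL and exact version scheme shown here are illustrative; the repository README has the authoritative command):

```sh
# Illustrative template; check the README for the exact index URL and version scheme.
pip install fairseq==<fairseq_version>+<commit_id>pt<pytorch_version><compute_platform> \
    --extra-index-url https://miropsota.github.io/torch_packages_builder
```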
Where `<compute_platform>` is, as in PyTorch, one of `cpu`, `cu<CUDA_short_version>` (e.g. `cu121`, `cu118`, `cu102`), or `rocm<ROCM_version>` (not supported right now).
For example, for the newest fairseq commit as of writing (`d9a6270`) with PyTorch 2.3.1 and CUDA 12.1:
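For illustration, the resulting command looks roughly like this (the fairseq base version string and index URL below are indicative; the README and the release page list the exact versions available):

```sh
# Example install for commit d9a6270, PyTorch 2.3.1, CUDA 12.1.
# The base version (0.12.3.1) and index URL are assumptions; copy the exact line from the README.
pip install fairseq==0.12.3.1+d9a6270pt2.3.1cu121 \
    --extra-index-url https://miropsota.github.io/torch_packages_builder
```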
I haven't built other combinations as of writing; I will probably build new ones occasionally as new PyTorch releases and fairseq versions/commits appear.
These wheels are built with PyTorch versions 1.11.0 to 2.3.1 and their respective compute platforms and supported operating systems. Note the exceptions: `cu102` on Windows (no VS 2017 on the GitHub `windows-2019` runner) and the ROCm platform. The build is done on the `ubuntu-20.04` runner, so older Linux distributions might not work because of their older libc.
Although the wheels have been successfully built, I do not guarantee they work correctly for all combinations (let me know if not).
If you've installed PyTorch with pip, there's no need to have CUDA installed on your system, as the PyTorch wheels for pip bundle CUDA.
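If you want to double-check which CUDA build your pip-installed PyTorch uses, a quick sanity check (plain PyTorch, nothing specific to these wheels):

```sh
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```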