idiap / fast-transformers

Pytorch library for fast transformer implementations

Rename pyptoject.toml to pyproject.toml #78

Closed LoicGrobol closed 3 years ago

LoicGrobol commented 3 years ago

Sorry, I messed up the name in the previous PR; it should have been pyproject.toml!

LoicGrobol commented 3 years ago

Side note on wheels: I found that this project, https://github.com/OpenNMT/CTranslate2, manages to build wheels with CUDA support on GitHub Actions. If I need some distraction this week, I'll try to port their wheel-building process to fast-transformers :-)

angeloskath commented 3 years ago

No worries, the mess is on me for being too quick to merge. I will check it out and probably merge later today.

Btw, our current GitHub workflow actually compiles the CUDA code; I just didn't make a wheel, since it is not very clear to me how we could handle the different CUDA versions, PyTorch versions, etc. But I would more than welcome your PRs :-).

Cheers, Angelos

angeloskath commented 3 years ago

Hi again,

So I spent quite a bit of time looking for a solution to this; however, I think pyproject.toml is not going to be useful for fast_transformers. Basically, when this config file is present, pip builds the project in a separate, isolated environment. Although this is generally a good thing, it won't work in our case, because the user needs to be able to control the PyTorch and CUDA versions they have installed.
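To illustrate why (a rough sketch, not our actual setup.py; the names are made up): a setup.py for a PyTorch CUDA extension has to import torch's build helpers at build time, so whichever torch is present in the build environment determines the CUDA toolkit the extension is compiled against. With build isolation, that torch is a fresh copy resolved by pip, not the one the user installed.

```python
# Illustrative only: a minimal setup.py for a PyTorch CUDA extension.
# It imports torch at build time, so the torch available in the build
# environment (under isolation, not the user's installed torch)
# decides which CUDA toolkit the code is compiled against.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="example_extension",                # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name="example_extension._cuda",  # hypothetical module name
            sources=["example_extension/kernel.cu"],
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)
```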

As an example, building from scratch in an environment with the older CUDA toolkit 10.1 currently fails completely (if the file were named correctly), because the default PyTorch now uses CUDA toolkit 10.2. This is also why the automated tests failed for this PR (the CUDA toolkit available in Ubuntu is 10.1).
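For instance, here is a quick way to see the mismatch (just a sketch; the printed values are examples):

```python
# Check which CUDA toolkit the installed PyTorch was built against.
# If this does not match the CUDA toolkit installed on the machine
# (10.1 on the CI runner), compiling the extensions fails as described.
import torch

print(torch.__version__)   # e.g. 1.8.0
print(torch.version.cuda)  # e.g. 10.2
```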

At the moment, I think the best way forward is to actually revert the previous commit. Am I missing something? Sorry for not taking the time to look into this yesterday.

Looking forward to your answer.

Cheers, Angelos

LoicGrobol commented 3 years ago

Hi,

I don't think having torch as a build dependency should be an issue, as long as it is not pinned to a specific version: if a user wants to use another version, they still have the option of installing that version manually before installing fast-transformers. I have just tested this locally and it seems to work, but I have only tested with pip, while the GitHub action uses conda, so maybe that's why it fails? I don't know conda very well :-)

As for distributing wheels targeting different PyTorch/CUDA versions, the best option currently is apparently to host a dedicated wheel repository, which is of course not practical except for very large projects. But maybe providing wheels that target the stable release of PyTorch and its default CUDA version would be manageable? Presumably that's what most users would want (I know I do :-))

Cheers,

L

angeloskath commented 3 years ago

The main problem for the tests is the CUDA version. PyTorch 1.8 uses CUDA 10.2, while Ubuntu 20.04 has 10.1 in its package repositories. This means that the matching PyTorch build has to be used in order to compile the extensions.

Thanks for all the info regarding Python package distribution; I should definitely read up on it, as I have a boatload to learn. I know first-hand how slow building this project is, and I should do something about it.

If you also agree, however, I will close this PR.

Thanks again, Angelos