SYSTRAN / faster-whisper

Faster Whisper transcription with CTranslate2

Update requirements.txt: latest release from ctranslate2 from 10-22-2024 breaks faster-whisper #1082

Closed · kristopher-smith closed this issue 4 weeks ago

kristopher-smith commented 1 month ago

The latest ctranslate2 release, published 10-22-2024, breaks faster-whisper. See the release history here: https://pypi.org/project/ctranslate2/4.4.0/#history

I have edited the requirements.txt file to remedy this, limiting the accepted range to 4.0 through 4.4.0 (the latest ctranslate2 version that works with faster-whisper) instead of 4.0 up to, but excluding, 5.
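In requirement-specifier terms, the change described above would look something like this (my rendering of the description; the exact line in the PR may differ):

```
ctranslate2>=4.0,<=4.4.0    # previously: ctranslate2>=4.0,<5
```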

MahmoudAshraf97 commented 4 weeks ago

Thank you for your contribution. faster-whisper can use ctranslate2==4.5.0, so there is no reason to bound the version. The problem is caused by a mismatched cuDNN version pulled in by PyTorch: either use ct2<4.5 along with torch<2.4, or ct2==4.5 along with torch>=2.4.
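As pip commands, the two working pairings described above would look roughly like this (a sketch based on the comment, not an official install recipe):

```bash
# Pairing A: older ctranslate2 with older torch
pip install "ctranslate2<4.5" "torch<2.4"

# Pairing B: ctranslate2 4.5 with newer torch
pip install "ctranslate2==4.5.0" "torch>=2.4"
```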

MahmoudAshraf97 commented 4 weeks ago

I might've been too quick to close this PR; the problem is more complicated than I expected. v4.5.0 works just fine with a pip-installed cuDNN, but if you have a torch version whose CUDA binaries are precompiled, such as torch==2.5.0+cu121 or any version that ends with +cu12, this error comes up. The only solution at the moment is to downgrade to v4.4.0, which is strange because v4.4.0 was compiled using cuDNN 8.9.
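To check which CUDA and cuDNN builds your environment is actually using, the standard torch and ctranslate2 version attributes are enough:

```bash
# torch reports the CUDA and cuDNN versions it was built against
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.backends.cudnn.version())"
# and ctranslate2 exposes its package version
python -c "import ctranslate2; print(ctranslate2.__version__)"
```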

BBC-Esq commented 4 weeks ago

Here is the compatibility issue...resolving dependency issues sucks gonads...but here's my attempt:

Torch Versions and Supported CUDA

| Torch Version | Supported CUDA Versions |
|---------------|-------------------------|
| 2.5.0         | 11.8, 12.1, 12.4        |
| 2.4.1         | 11.8, 12.1, 12.4        |
| 2.4.0         | 11.8, 12.1, 12.4        |
| 2.3.1         | 11.8, 12.1              |

cuDNN Versions and Supported CUDA

| cuDNN Version | Supported CUDA Versions |
|---------------|-------------------------|
| 8.9.7         | 11 through 12.2         |
| 9.0.0         | 11 through 12.3         |
| 9.1.0         | 11 through 12.4         |
| 9.2.0         | 11 through 12.5         |
| 9.2.1         | 11 through 12.5         |
| 9.3.0         | 11 through 12.6         |
| 9.4.0         | 11 through 12.6         |
| 9.5.0         | 11 through 12.6         |

Based on the foregoing:

1) Ctranslate2 4.5.0 or higher requires cuDNN 9+.
2) cuDNN 9.0.0 supports CUDA only up through 12.3.
3) However, torch does NOT have a wheel that supports CUDA 12.3, only 12.1 and 12.4.

Please check here for all permutations of supported CUDA versions: https://download.pytorch.org/whl/
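If you need torch built against a specific CUDA version, pip can be pointed at the matching wheel index; for example, for the CUDA 12.4 wheels (the cu124 tag follows the index layout linked above):

```bash
pip install torch --index-url https://download.pytorch.org/whl/cu124
```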

Workaround

Unless/until faster-whisper fine-tunes the install procedure, I've outlined a workaround here:

https://github.com/SYSTRAN/faster-whisper/issues/1080#issuecomment-2429688038

However, it requires you to pip install the libraries using the --no-deps flag, which tells pip not to install any of the dependencies a library requires, whether that library is faster-whisper or torch. You then have to go back and install the correct versions of those dependencies yourself.

In other words (see the sketch after this list):

1) `pip install faster-whisper --no-deps`
2) Go back and install the dependencies that faster-whisper requires, at the specific versions that are compatible with one another.
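A concrete sketch of that two-step flow (the version pins are one example pairing from this thread; faster-whisper has other dependencies beyond these two):

```bash
# Step 1: install faster-whisper itself, skipping its declared dependencies
pip install faster-whisper --no-deps

# Step 2: install mutually compatible dependency versions yourself
pip install "ctranslate2<4.5" "torch<2.4"
```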

BTW, if anyone wants to know how f'd up it can sometimes get, here's my table for xformers. So try creating a program that takes all the compatibility nuances into account, like also using triton, flash attention, etc.:

xformers Versions and Required Torch Versions

| xformers Version | Required Torch Version | Notes |
|------------------|------------------------|-------|
| 0.0.26.post1     | 2.3.0                  |       |
| 0.0.27           | 2.3.0                  | Release notes mention, confusingly, that "some operations" "might" require Torch 2.4 |
| 0.0.27.post1     | 2.4.0                  |       |
| 0.0.27.post2     | 2.4.0                  |       |
| 0.0.28.post1     | 2.4.1                  | The non-post1 release was not successfully uploaded to PyPI |
| 0.0.28.post2     | 2.5.0                  |       |

BBC-Esq commented 4 weeks ago

To follow up...

The same holds true for ctranslate2 AND ANY OTHER LIBRARIES YOU USE IN YOUR PROGRAM.

For example, if ctranslate2 installs a torch version that's incompatible with other dependencies, you'd need to use the --no-deps flag when using pip and then install its dependencies individually afterwards. It gets quite annoying...
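One way to keep such pins in one place is a pip constraints file, a standard pip feature; the pins below are illustrative, not a vetted set:

```bash
# Write an illustrative constraints file (example pins from this thread)
printf 'ctranslate2<4.5\ntorch<2.4\n' > constraints.txt

# Constraints apply to everything pip resolves, including transitive deps
pip install faster-whisper -c constraints.txt
```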

MahmoudAshraf97 commented 4 weeks ago

As this is not going to be solvable, workarounds are in #1086. Thanks @BBC-Esq and @kristopher-smith!

kristopher-smith commented 4 weeks ago

Thanks for looking into this so promptly everybody.

May I suggest limiting the ranges in requirements.txt to only current or previous releases of the other libraries?

Having ctranslate2 allowed up to version 5 means that whenever there is an update to that library, faster-whisper will automatically pull in the latest release.

The range could be extended after the latest versions of the dependencies have been tested properly.

MahmoudAshraf97 commented 4 weeks ago

The reason we can't do that is that the problem is not from faster-whisper; it's an incompatibility between ctranslate2 and the user's specific CUDA installation. If we limit it to 4.4.0, all users on torch 2.4.0 or higher will not be able to install a working setup.

kristopher-smith commented 4 weeks ago

> The reason we can't do that is that the problem is not from faster-whisper; it's an incompatibility between ctranslate2 and the user's specific CUDA installation. If we limit it to 4.4.0, all users on torch 2.4.0 or higher will not be able to install a working setup.

Good point. Maybe this could be handled in the setup.py file?

I am happy to take a crack at this with a pull request that handles your library compatibility matrix during install.
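For what it's worth, setup.py can only vary dependencies through PEP 508 environment markers, which condition on things like the Python version or platform, not on which torch build is installed, so a sketch like the following can't express the torch/ctranslate2 matrix directly (hypothetical snippet, not faster-whisper's actual setup.py):

```python
# Hypothetical sketch, not faster-whisper's actual setup.py.
from setuptools import setup

setup(
    name="faster-whisper",
    install_requires=[
        # PEP 508 markers can condition on the interpreter or platform...
        'ctranslate2>=4.0,<5 ; python_version >= "3.8"',
        # ...but there is no marker for "torch>=2.4 is installed",
        # which is what the compatibility matrix above would require.
    ],
)
```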