Closed dgoryeo closed 8 months ago
I'm experiencing the same issue: it just gets stuck 'forever' on the whisperx.load_model("large-v2", device, compute_type=compute_type) line.
My installation environment:
!pip install torch==2.0.0 torchaudio==2.0.1
!pip install git+https://github.com/m-bain/whisperx.git
+1 - facing the same issue
Quick fix:
from huggingface_hub.utils import _runtime
_runtime._is_google_colab = False
Don't know if something else will break if you are doing something Colab-dependent, but you can always turn the flag back on after the model is downloaded.
Related: https://github.com/huggingface/huggingface_hub/issues/1952
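To avoid leaving the flag permanently disabled, the workaround above can be wrapped in a small context manager that restores the original value afterwards. This is just a sketch of that pattern; the `colab_flag_disabled` helper is hypothetical, and it assumes huggingface_hub's private `_runtime` module keeps the `_is_google_colab` attribute:

```python
from contextlib import contextmanager

@contextmanager
def colab_flag_disabled(runtime):
    """Temporarily set runtime._is_google_colab to False, restoring it on exit."""
    original = runtime._is_google_colab
    runtime._is_google_colab = False
    try:
        yield
    finally:
        runtime._is_google_colab = original

# Usage sketch (assumes whisperx and huggingface_hub are installed):
# from huggingface_hub.utils import _runtime
# with colab_flag_disabled(_runtime):
#     model = whisperx.load_model("large-v2", device, compute_type=compute_type)
```

Since `_runtime` is a private module, this attribute could change in a future huggingface_hub release.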
@iUnknownAdorn
Quick fix:
from huggingface_hub.utils import _runtime
_runtime._is_google_colab = False
It worked, thank you very much👼👼👼
@iUnknownAdorn , thanks so much for the fix. It worked like a charm and no other issues so far.
It appears that whisperX has stopped working on Google Colab. The code does not get past load_model(). Here is my code:
My installation environment is:
!pip install --no-cache-dir torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 torchtext torchdata --index-url https://download.pytorch.org/whl/cu118
Colab's execution indicator shows that it is stuck on the line: model = whisperx.load_model("large-v2", device, compute_type=compute_type)
I think this happened after a recent upgrade of Google Colab: Upgrade to Colab