jhj0517 / Whisper-WebUI

A Web UI for easy subtitle generation using the Whisper model.
Apache License 2.0

Error transcribing file on line parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device #112

Open liaodong opened 8 months ago

liaodong commented 8 months ago

load models models/Whisper/faster-whisper
Error transcribing file on line
parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/torch/cuda/memory.py:303: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
  warnings.warn(
Traceback (most recent call last):
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/queueing.py", line 501, in call_prediction
    output = await route_utils.call_process_api(
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/route_utils.py", line 253, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/blocks.py", line 1704, in process_api
    data = await anyio.to_thread.run_sync(
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/blocks.py", line 1460, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "/home/ai/other/miniconda3/envs/whisper-webui/lib/python3.10/site-packages/gradio/blocks.py", line 1434, in validate_outputs
    raise ValueError(
ValueError: An event handler (transcribe_file) didn't receive enough output values (needed: 2, received: 1).

jhj0517 commented 8 months ago

Hi, it seems to be a CUDA error. Can you try

nvcc --version

in the cmd and show the result? If you're using a CUDA version below 12.0, faster-whisper is not compatible.
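The version check above can be automated. Below is a minimal sketch that parses the release number out of `nvcc --version` output and compares it against the 12.0 minimum mentioned in this comment; the sample output string is taken from the reply later in this thread, and the parsing pattern is an assumption about the general `nvcc` output format.

```python
import re

# Sample `nvcc --version` output (from the reply below; substitute your own).
NVCC_OUTPUT = """\
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Cuda compilation tools, release 12.2, V12.2.128
"""

def cuda_release(nvcc_output: str) -> tuple[int, int]:
    """Extract the (major, minor) CUDA release from `nvcc --version` output."""
    match = re.search(r"release (\d+)\.(\d+)", nvcc_output)
    if match is None:
        raise ValueError("could not find a CUDA release number in nvcc output")
    return int(match.group(1)), int(match.group(2))

release = cuda_release(NVCC_OUTPUT)
print(release)             # (12, 2)
print(release >= (12, 0))  # True: meets the 12.0 minimum stated above
```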

liaodong commented 8 months ago

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jul_11_02:20:44_PDT_2023
Cuda compilation tools, release 12.2, V12.2.128
Build cuda_12.2.r12.2/compiler.33053471_0

It looks like they match.

jhj0517 commented 8 months ago

According to here, your GPU architecture is incompatible with CUDA 12. Can you try reinstalling torch

# After activating the venv!
pip install --force-reinstall torch --extra-index-url https://download.pytorch.org/whl/cu121

as the post says?
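Since `nvcc` already reports CUDA 12.2, `cudaErrorNoKernelImageForDevice` most likely means the installed wheels simply contain no compiled kernels for this GPU's compute capability. A minimal sketch of that check follows; the minimum capability of sm_50 is an assumption about typical cu121 wheel builds, not something confirmed in this thread, and `kernels_available` is a hypothetical helper name.

```python
def kernels_available(capability: tuple[int, int],
                      min_capability: tuple[int, int] = (5, 0)) -> bool:
    """Return True if the device's compute capability meets the assumed
    minimum that the installed CUDA wheels were compiled for."""
    return capability >= min_capability

# On the affected machine, the capability can be read with:
#   import torch
#   capability = torch.cuda.get_device_capability(0)
print(kernels_available((8, 6)))  # True: a recent architecture
print(kernels_available((3, 7)))  # False: too old for the assumed minimum
```

If the capability falls below the wheel's minimum, no amount of reinstalling the same wheel will help; an older torch/CUDA combination built for that architecture would be needed.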

You have to activate the venv before reinstalling; you can activate the venv with

liaodong commented 8 months ago

I tried 1.12.1, and no matter which version, it's the same error. Maybe it is some other reason?