SYSTRAN / faster-whisper

Faster Whisper transcription with CTranslate2
MIT License

OOM on 8GB GPU #564

Open BoneGoat opened 10 months ago

BoneGoat commented 10 months ago

Your README states that running large-v2 on GPU with int8 precision requires at most 3091 MB of VRAM. I'm running with those settings on an 8GB GPU and getting OOM.

Is there something going on with restoring timestamps?

I'm running on Ubuntu, so no WSL.

I have reduced best_of to 3, as suggested in a previous issue, and it seems to run better. I still think it's strange behaviour, though.

```
  File "/home/segmenter/transcribe_pipeline.py", line 33, in transcribe_pipeline
    transcription_result = list(transcription_result)
  File "/home/segmenter/.local/lib/python3.8/site-packages/pyAudioAnalysis/../faster_whisper/transcribe.py", line 922, in restore_speech_timestamps
    for segment in segments:
  File "/home/segmenter/.local/lib/python3.8/site-packages/pyAudioAnalysis/../faster_whisper/transcribe.py", line 433, in generate_segments
    ) = self.generate_with_fallback(encoder_output, prompt, tokenizer, options)
  File "/home/segmenter/.local/lib/python3.8/site-packages/pyAudioAnalysis/../faster_whisper/transcribe.py", line 641, in generate_with_fallback
    result = self.model.generate(
RuntimeError: CUDA failed with error out of memory
```
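
For reference, the call is roughly shaped like this (a minimal sketch, not my actual pipeline: the audio file name is a placeholder, and vad_filter=True is only inferred from the restore_speech_timestamps frame above):

```python
from faster_whisper import WhisperModel

# large-v2 with int8 quantization, which the README lists at ~3091MB of VRAM
model = WhisperModel("large-v2", device="cuda", compute_type="int8")

# best_of lowered from the default of 5 to 3; vad_filter assumed on,
# since restore_speech_timestamps appears in the traceback
segments, info = model.transcribe("audio.wav", best_of=3, vad_filter=True)

# segments is a lazy generator; the OOM is raised while consuming it
transcription_result = list(segments)
```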

jerome83136 commented 6 months ago

Hi, I also get an Out Of Memory error running the medium-int8 model on a 6GB GPU.

Here is how I create the container:

```
docker run -d --gpus all --runtime=nvidia --name=faster-whisper --privileged=true \
  -e WHISPER_BEAM=10 -e WHISPER_LANG=fr -e WHISPER_MODEL=medium-int8 \
  -e NVIDIA_DRIVER_CAPABILITIES=all -e NVIDIA_VISIBLE_DEVICES=all \
  -p 10300:10300/tcp \
  -v /mnt/docker/data/faster-whisper/:/config:rw \
  ghcr.io/linuxserver/lspipepr-faster-whisper:gpu-version-1.0.1
```

Here is the error:

INFO:__main__:Ready
[ls.io-init] done.
INFO:wyoming_faster_whisper.handler: Allume la cuisine.
INFO:wyoming_faster_whisper.handler: Éteins la cuisine !
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-14' coro=<AsyncEventHandler.run() done, defined at /lsiopy/lib/python3.10/site-packages/wyoming/server.py:28> exception=RuntimeError('CUDA failed with error out of memory')>
Traceback (most recent call last):
  File "/lsiopy/lib/python3.10/site-packages/wyoming/server.py", line 35, in run
    if not (await self.handle_event(event)):
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/handler.py", line 75, in handle_event
    text = " ".join(segment.text for segment in segments)
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/handler.py", line 75, in <genexpr>
    text = " ".join(segment.text for segment in segments)
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 162, in generate_segments
    for start, end, tokens in tokenized_segments:
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 186, in generate_tokenized_segments
    result, temperature = self.generate_with_fallback(segment, prompt, options)
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 279, in generate_with_fallback
    result = self.model.generate(
RuntimeError: CUDA failed with error out of memory

It works at the beginning, but after ~1 hour of inactivity I get the OOM error (and strangely, nvidia-smi reports the process using only 1.3 GB). I'm running it on a Linux host (Debian 12).
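
For what it's worth, this is a small sketch of how I sample GPU memory from the host while waiting for the error (pynvml is an extra dependency installed just for this; the device index and 60-second interval are arbitrary):

```python
import time
import pynvml

# Query the first GPU via NVML, the same interface nvidia-smi uses
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)  # values are in bytes
        print(f"used={info.used / 1024**2:.0f} MiB "
              f"free={info.free / 1024**2:.0f} MiB "
              f"total={info.total / 1024**2:.0f} MiB")
        time.sleep(60)
finally:
    pynvml.nvmlShutdown()
```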

Thank you for your help.
Best regards