ahmetoner / whisper-asr-webservice

OpenAI Whisper ASR Webservice API
https://ahmetoner.github.io/whisper-asr-webservice
MIT License
1.86k stars 332 forks

Non-base models crash with GPU #134

Closed Amateur-God closed 9 months ago

Amateur-God commented 9 months ago

If I try to run the GPU version with any model other than base, the Docker container crashes after several minutes with exit code 3.

I'm running this on a Dell R620 with an Nvidia Quadro P1000.

However, if I run the base model, everything works fine.

I've tried this with both openai_whisper and faster_whisper as the engine.

ayancey commented 9 months ago

Your card has 4 GB of RAM so you should be able to run the small model as well. Can you send some logs?
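For reference, here is a sketch of how the small model could be selected on the GPU image. The image tag and the `ASR_MODEL`/`ASR_ENGINE` environment variables follow the project's README; verify them against your installed version before relying on them:

```shell
# Run the GPU image with the small model instead of base.
# Image name, port, and env vars are assumptions based on the
# project's README; adjust to match your setup.
docker run -d --gpus all \
  -p 9000:9000 \
  -e ASR_MODEL=small \
  -e ASR_ENGINE=faster_whisper \
  --name whisper-asr \
  onerahmet/openai-whisper-asr-webservice:latest-gpu
```

Naming the container with `--name` makes it easier to pull logs from it later.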

Amateur-God commented 9 months ago

> Your card has 4 GB of RAM so you should be able to run the small model as well. Can you send some logs?

I'm not too familiar with Docker; I'm used to running stuff on bare-metal VMs. Where would I find the logs for the Docker container?

ayancey commented 9 months ago

> > Your card has 4 GB of RAM so you should be able to run the small model as well. Can you send some logs?
>
> I'm not too familiar with Docker; I'm used to running stuff on bare-metal VMs. Where would I find the logs for the Docker container?

Run `docker logs <container name>`
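To capture the crash itself, the logs can either be followed live or dumped after the container exits. The container name `whisper-asr` below is a placeholder; substitute whatever `docker ps -a` shows for your container:

```shell
# Follow logs live so the crash output is visible as it happens
docker logs -f whisper-asr

# Or, after the container has exited, dump the most recent lines
docker logs --tail 200 whisper-asr

# The recorded exit code can also be read back from the container state
docker inspect --format '{{.State.ExitCode}}' whisper-asr
```

An out-of-memory kill on the GPU typically shows up near the end of those logs, which is why the tail is usually enough.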

Amateur-God commented 9 months ago

> Run `docker logs <container name>`

Oh, I deleted the container when it kept crashing. When the current batch of subs has finished processing I shall create a new one and give it a shot.

Also, just a quick question: if I added another GPU to the server, would it utilise the RAM of both cards, allowing me to use larger models, or would I still only be able to use the small model?

ayancey commented 9 months ago

> Oh, I deleted the container when it kept crashing. When the current batch of subs has finished processing I shall create a new one and give it a shot.
>
> Also, just a quick question: if I added another GPU to the server, would it utilise the RAM of both cards, allowing me to use larger models, or would I still only be able to use the small model?

No, that isn't possible, sorry.