Closed Amateur-God closed 9 months ago
Your card has 4 GB of RAM so you should be able to run the small model as well. Can you send some logs?
I'm not too familiar with Docker, I'm used to running stuff on bare-metal VMs. Where would I find the logs for the Docker container?
Run docker logs <container name>
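For reference, the logs can be pulled even from a stopped or crashed container; a minimal sketch (the container name below is a placeholder, substitute your own):

```shell
# List all containers, including stopped/crashed ones, to find the name
docker ps -a

# Print the last 100 log lines from the container (placeholder name)
docker logs --tail 100 my-container
```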
oh I deleted the container when it kept crashing. When the current batch of subs has finished processing I'll create a new one and give it a shot.
also just a quick question: if I added another GPU to the server, would it utilise the RAM of both cards, allowing me to use larger models, or would I still only be able to use the small model?
No, that isn't possible, sorry.
If I try to run the GPU version with any model other than base, the Docker container crashes after several minutes with exit code 3.
I'm running this on a Dell R620 with an Nvidia Quadro P1000.
However, if I run the base model, everything works fine.
I've tried this with both openai_whisper and faster_whisper as the engine.
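A crash after several minutes with larger models may indicate the GPU running out of VRAM. One way to check (assuming the NVIDIA driver tools are installed on the host) is to watch memory usage while the container is transcribing:

```shell
# Refresh nvidia-smi every second; the Quadro P1000 has 4 GB of VRAM,
# so watch whether memory usage climbs toward that limit before the crash
watch -n 1 nvidia-smi
```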