Closed: blundercode closed this issue 7 months ago
Can you double check that the latest Docker image was used? If so, maybe a mistake was made when building and publishing the newest image. Scratch that, maybe an issue with the poetry lock?
You can always build the image yourself to test. I'll be doing this in a couple hours when I am home.
It does appear to be pulling the latest. I ran docker pull onerahmet/openai-whisper-asr-webservice and it says it's on latest. Also, shouldn't the
onerahmet/openai-whisper-asr-webservice:latest-gpu
part of the docker run command always reference the latest Docker Hub build?
Good idea I will give it a shot building myself now.
Yep, just sanity checking. Sorry, this is wrong. Like Ahmet said, you need to pull the latest image every time you want to restart the container. Use docker pull or docker compose pull, depending on which you're using.
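To spell that out, a minimal sketch of the pull-then-restart workflow (using the latest-gpu tag and the run flags from this thread; adjust for a CPU-only setup):

```shell
# Pull the newest image from Docker Hub first; `docker run` on its own
# will reuse whatever stale local copy of the :latest-gpu tag you have.
docker pull onerahmet/openai-whisper-asr-webservice:latest-gpu
docker run -it --gpus all -p 9000:9000 \
  -e ASR_MODEL=large -e ASR_ENGINE=openai_whisper \
  onerahmet/openai-whisper-asr-webservice:latest-gpu

# Or, if you manage the service with Docker Compose:
docker compose pull
docker compose up -d
```

The key point is that a tag like :latest or :latest-gpu is resolved against your local image cache at run time, not against Docker Hub, so an explicit pull is needed to pick up a republished tag.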
Building the image locally from the GitHub repo worked for me!
Maybe it's something wrong with the Docker Hub image?
Excited to test out V3 stuff!
@blundercode please ensure that you pull the image before running the command; otherwise, it will run the cached image that you pulled previously.
docker pull onerahmet/openai-whisper-asr-webservice:latest-gpu
@ahmetoner
Alright, that fixed it. I was running docker pull onerahmet/openai-whisper-asr-webservice
I guess I needed the :latest-gpu
tag on the end of it, good to know. My bad, sorry for the false alarm!
I was just using the default docker pull onerahmet/openai-whisper-asr-webservice
command from Docker Hub. Might be worth updating the docs to idiot-proof it so people like me don't bother you haha.
Thanks for the updates and quick responses from both of you!
When running:
docker run -it --gpus all -p 9000:9000 -e ASR_MODEL=large-v3 -e ASR_ENGINE=openai_whisper onerahmet/openai-whisper-asr-webservice:latest-gpu
It does not accept large-v3 as a parameter, and outputs this error:
When using just
large
it also appears to only be downloading v2 still, not v3. I would also like to test the faster-whisper v3 integration, but when I run the large-v3 option with faster_whisper it appears to only download large-v2.pt.
Thoughts? Am I running something incorrectly?
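One hedged way to see which weights the container actually fetched is to list the model cache inside the running container. The path below is the upstream openai-whisper default (~/.cache/whisper) and is an assumption about this image; the container name is a placeholder:

```shell
# Hypothetical check: list the downloaded model files inside the container.
# /root/.cache/whisper is openai-whisper's default cache dir and may differ
# in this image; replace <container> with your container's name or ID.
docker exec <container> ls -lh /root/.cache/whisper
```

If large-v2.pt shows up there instead of large-v3.pt, the image is still resolving "large" to the v2 weights.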