Closed: ZeChArtiahSaher closed this issue 7 months ago
Thanks for the report, I'll look into it.
I may have found the issue, and will experiment with some fixes.
Are you using the master branch or portable build?
Cloned master yesterday. Windows, virtualenv w/ python 3.10.11. Maybe I should be using 3.10.9? :D
Yeah no, same thing on 3.10.9.
Are you using any sort of program or software for the microphone?
You'll need to make sure the script is designated to a microphone with the `--set_microphone #` command.
Example: `python transcribe_audio.py --ram 12gb --set_microphone 8 --non_english --translate` will set it to NVIDIA Broadcast. Best results come from the lower IDs of the device you want.
I'm unable to replicate the issue, even by forcing it via code; the given error suggests that your microphone index is out of range for the program to pick up.
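To illustrate that "out of range" failure mode: a minimal sketch of device-index validation. The helper below is hypothetical, not the script's actual code, and assumes the device list comes from the host audio API (e.g. `speech_recognition`'s `Microphone.list_microphone_names()`):

```python
def pick_microphone(device_index, device_names):
    """Return the input device name at device_index, or raise if out of range."""
    if not 0 <= device_index < len(device_names):
        raise ValueError(
            f"Microphone index {device_index} is out of range; "
            f"only {len(device_names)} input devices were found."
        )
    return device_names[device_index]

# Hypothetical device list; real names/indices come from the host audio API.
devices = ["Microsoft Sound Mapper - Input", "NVIDIA Broadcast", "SSL 2+ Loopback"]
print(pick_microphone(1, devices))  # NVIDIA Broadcast
# pick_microphone(8, devices) would raise ValueError: only 3 devices exist.
```

Passing `--set_microphone 8` on a machine with fewer than nine input devices would fail exactly this way.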
I've got a loopback off an SSL 2+ audio interface. I suspect it's not the problem because, again, audio streaming works just fine with small/medium Whisper models.
In your model's folder, do you have `large-v2.pt`?
Large-v3 actually, so I'm supposed to be on v2?
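For anyone else checking which checkpoint they have: Whisper downloads models as `.pt` files (by default under `~/.cache/whisper`, though this script may use its own models folder). A quick sketch to list the large checkpoints present, where the function name and default path are assumptions:

```python
from pathlib import Path

def list_large_checkpoints(model_dir):
    """Return names of large-* Whisper checkpoint files found in model_dir."""
    p = Path(model_dir)
    if not p.is_dir():
        return []
    return sorted(f.name for f in p.glob("large*.pt"))

# Default Whisper cache location; swap in the script's models folder if different.
print(list_large_checkpoints(Path.home() / ".cache" / "whisper"))
```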
Could you download the latest dev build and see if that fixes the issue?
Yeah, that fixes it, and it's also so much faster; idk if that's the model or just the branch.
Thanks for testing.
I get something along the lines of:
Args used:
However, it works with the 4 GB model. RTX 3090, so it's not a VRAM issue.