OpenWaygate opened this issue 1 year ago
+1, but I'd prefer whisper.cpp, which also works on CPU
That would be better, since I don't have a GPU
Any update on this yet? Do we have native or LangChain support for this already?
+1 over here, I'd love to be able to run whisper, especially on GPU!
Don't think I know enough about all this to create my own Modelfile from a PyTorch model.
If possible I would like a smaller model as well. I don't know much about this subject, but shouldn't a smaller model be enough to detect keywords such as "Hi Alfred"? And then pipe the rest to something like Whisper.
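The idea above, a small always-on model that only listens for a wake phrase and hands everything after it to a full Whisper model, can be sketched in a few lines. This is just an illustration of the gating logic; `detect_wake_word` and `route_audio` are hypothetical names, and the transcription calls themselves are left as placeholders:

```python
import string


def detect_wake_word(transcript: str, phrase: str = "hi alfred") -> bool:
    """Return True if a (small-model) transcript starts with the wake phrase.

    Normalizes case and strips punctuation so "Hi, Alfred!" still matches.
    """
    cleaned = transcript.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(cleaned.split()).startswith(phrase)


def route_audio(transcript: str) -> str:
    """Decide whether to wake the heavy model based on the cheap transcript."""
    if detect_wake_word(transcript):
        # Placeholder: here you would pass the buffered audio to a full
        # Whisper model for an accurate transcription.
        return "pipe to full whisper"
    # Otherwise keep listening with the small model only.
    return "ignore"
```

The point of the split is that the tiny model runs continuously and cheaply, and the expensive model only spins up after a match.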
faster-whisper would be even more interesting IMO. Since it is, uhm, faster.
As both Ollama and whisper.cpp are somehow related to llama.cpp, maybe whisper.cpp could be a good starting point. This is only me guessing, maybe @wookayin has some input as someone who seems to be into both Ollama and llama.cpp?
Hey everyone, we're trying to get an initial feel for what Whisper support would look like in Ollama. It's a super rough POC, but feel free to take a look at the demo and leave any high-level feedback! #6241
Hi, it would be great if Ollama could run openai/whisper; then we could chain voice and text. Is there any roadmap for it?
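Chaining voice and text is already possible today without native support, by transcribing with Whisper first and posting the transcript to Ollama's documented `POST /api/generate` endpoint. A minimal sketch, assuming Ollama is running locally with a model pulled (the `llama3` name and the transcription step are assumptions, not part of any Ollama Whisper API):

```python
import json

# Ollama's documented generate endpoint (default local address).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_request(transcript: str, model: str = "llama3") -> str:
    """Wrap a Whisper transcript into a JSON body for Ollama's /api/generate."""
    return json.dumps({"model": model, "prompt": transcript, "stream": False})


# Usage sketch (not run here): obtain `transcript` from e.g. openai-whisper,
#   import whisper
#   transcript = whisper.load_model("base").transcribe("question.wav")["text"]
# then POST the body and read the model's reply:
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL,
#       data=build_generate_request(transcript).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   reply = json.loads(urllib.request.urlopen(req).read())["response"]
```

Native support would mostly save the separate Whisper install, the glue code above stays the same shape either way.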