Mozer / talk-llama-fast

Port of OpenAI's Whisper model in C/C++ with xtts and wav2lip
MIT License

text as input #4

Open Wuzzooy opened 3 months ago

Wuzzooy commented 3 months ago

Hello, thank you very much for your work. This isn't really an issue, but would it be possible to add an option to type text as user input in talk-llama? Another thing: llama.cpp has an argument to set how much of the LLM is loaded on each GPU in a multi-GPU setup (`--tensor-split` or `-ts`). Would it be possible to add this argument to talk-llama as well?
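For context, this is roughly how the flag being requested works in upstream llama.cpp's CLI (not yet in talk-llama); the model path and proportions here are illustrative examples, not values from this thread:

```shell
# llama.cpp example: -ngl offloads layers to GPU, and --tensor-split
# distributes the model across GPUs in the given proportions
# (here ~75% on GPU 0 and ~25% on GPU 1).
./main -m model.gguf -ngl 99 --tensor-split 3,1 -p "Hello"
```

The request is to expose the same `--tensor-split` / `-ts` option in the talk-llama binary.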

Mozer commented 3 months ago

"Text as input" is on my todo list. tensor-split: maybe sometime in the future. Or you can examine the code yourself and open a PR with your suggestions; that would help speed things up.