NyakoFox closed this issue 1 year ago
This may be related: https://github.com/coqui-ai/TTS/issues/2455#issuecomment-1565839898 — try reinstalling torch using the index URL https://download.pytorch.org/whl/cpu
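A sketch of the suggested reinstall, assuming a pip-based environment (adjust for venv/conda as needed):

```shell
# Remove the existing (possibly CUDA-built) torch, then reinstall
# from the CPU-only wheel index mentioned above.
pip uninstall -y torch
pip install torch --index-url https://download.pytorch.org/whl/cpu
```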
Works perfectly, thanks!
This is unrelated, but I don't think I should open a new issue for another question: what are the RAM requirements? The script gets OOM-killed even if the only enabled module is Caption.
Not exactly sure these days, but up to 4-6 GB, depending on the chosen modules. Running full unquantized models on the CPU unfortunately has high RAM requirements.
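If you want to check how much headroom you actually have before enabling modules, a minimal sketch (Linux-only; the path and the 4 GB threshold here are illustrative assumptions, not project requirements):

```python
def available_ram_gb(meminfo_path="/proc/meminfo"):
    """Return available RAM in GiB by reading /proc/meminfo (Linux only)."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                kb = int(line.split()[1])  # value is reported in kB
                return kb / (1024 ** 2)
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

if __name__ == "__main__":
    avail = available_ram_gb()
    # 4 GB is an illustrative lower bound for running unquantized models
    print(f"Available RAM: {avail:.1f} GiB", "(likely too low)" if avail < 4 else "")
```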
If you're just interested in text classification (e.g. emotion sprites), I'm currently working on running the quantized model natively in JavaScript. Summary didn't show itself to be well-performing, and Caption would not be available to be ported, since BLIP models can't currently be run in JS.
I was hoping that Caption alone would be small enough to fit within my available RAM (it seems to be just 3.8 GB available, which is lower than I thought), but apparently it's not. Thanks anyway!
When trying to run SillyTavern-Extras, I get "Illegal instruction" with no further output. After adding some debug prints, it seems to happen on the line
from transformers import AutoTokenizer, AutoProcessor, pipeline