Yeah, I had the .cuda models working before, but then they pushed an update to the webui and now it's broken. So annoying.
Did you set this (when on wsl)?
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
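If that's set, a quick sanity check (just a generic check, nothing specific to this repo) is to confirm the variable took effect and that PyTorch can actually see the GPU from inside the textgen env:

echo $LD_LIBRARY_PATH
python -c "import torch; print(torch.cuda.is_available()); print(torch.version.cuda)"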
No, I don't use WSL.
I reinstalled via the one-click installer and that resolved the problem, but I'll keep this issue open for now, since a reinstall doesn't necessarily mean the underlying issue is fully resolved.
Sometimes you gotta reinstall the requirements when things get updated. See the commands below.
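For reference, a typical way to do that (assuming the conda env is named textgen, as in the report below; --force-reinstall makes pip rebuild packages even when the installed versions already look satisfied):

conda activate textgen
cd text-generation-webui
pip install -r requirements.txt --upgrade --force-reinstall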
Describe the bug
I followed the guide at this link: https://github.com/oobabooga/text-generation-webui/wiki/llama.cpp-models. Here is the command I started with.
pip install -r requirements.txt -U
After running this command, the webui was able to recognize GGML models just fine, but not regular .pt / .safetensors models.
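One way to rule out a corrupted file is to check that the weights load outside the webui entirely (a rough sketch; the path below is only a guess based on the model name in this report, adjust it to your actual file):

python -c "from safetensors.torch import load_file; sd = load_file('models/Alpaca-native-7b-4bit/model.safetensors'); print(len(sd), 'tensors loaded')"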
Is there an existing issue for this?
Reproduction
These are the commands I ran when I added the llama.cpp module.
conda activate textgen
cd C:\Users\KHJ\text-generation-webui
pip install -r requirements.txt -U
Here are the commands I used to run the model as usual after adding the llama.cpp module.
conda activate textgen
cd C:\Users\KHJ\text-generation-webui
python server.py --model Alpaca-native-7b-4bit --wbits 4 --groupsize 128 --extensions api google_translate whisper_stt silero_tts elevenlabs_tts --no-stream --chat
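Since --wbits 4 goes through the GPTQ CUDA kernels, an import check can tell whether that extension is still intact after the update (a sketch under the assumption that the GPTQ-for-LLaMa setup used here builds a module named quant_cuda; the name may differ on your install):

conda activate textgen
python -c "import quant_cuda; print('GPTQ CUDA kernel OK')"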
Screenshot
No response
Logs
System Info