Closed — rombodawg closed this issue 7 months ago
Sorry if the issue isn't originating with you. I honestly don't know who to open the issue with at this point. I've already opened about 4 other issues, with llama.cpp, oobabooga, and LM Studio.
Using llama.cpp's convert.py to quantize, and oobabooga and LM Studio for inference.
I'm uploading the model files for the merges if anyone wants to do some debugging. They should be up in the next 10 hours or so; sorry, slow internet.
Follow the multi-thread discussion, and check out my model for debugging.
Thread links:
- https://github.com/lmstudio-ai/configs/issues/21
- https://github.com/ggerganov/llama.cpp/issues/5706
- https://github.com/arcee-ai/mergekit/issues/181
- https://github.com/oobabooga/text-generation-webui/issues/5562
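For anyone trying to reproduce the merge step, merges like these are typically driven by a mergekit YAML config. A minimal slerp sketch — the model names, layer range, and interpolation parameter here are illustrative assumptions, not the exact config used for the broken models:

```yaml
# Hypothetical mergekit config; adjust models/layer_range to the actual merge.
slices:
  - sources:
      - model: google/gemma-7b
        layer_range: [0, 28]
      - model: google/gemma-7b-it
        layer_range: [0, 28]
merge_method: slerp
base_model: google/gemma-7b
parameters:
  t: 0.5
dtype: bfloat16
```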
I just want to point out that this issue has been resolved.
llama-cpp-python (text-generation-webui dev branch) and LM Studio have both added support for Gemma models. However, when merging Gemma models and then converting to GGUF, the resulting model does not load in either UI.
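Since the GGUF file is the artifact both UIs choke on, a quick sanity check on its header can help narrow down whether the conversion itself produced a corrupt file. A minimal sketch, relying only on the fact that GGUF files begin with the 4-byte ASCII magic `GGUF` followed by a little-endian uint32 format version (the function name is mine):

```python
import struct

def check_gguf_header(path):
    """Report whether a file starts with a valid GGUF header.

    GGUF files open with the 4-byte ASCII magic b'GGUF' followed by a
    little-endian uint32 format version. Returns (is_gguf, version).
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8:
        return (False, None)
    magic, version = struct.unpack("<4sI", header)
    if magic != b"GGUF":
        return (False, None)
    return (True, version)
```

If the magic or version looks wrong, the problem is likely in the convert step rather than in the loaders themselves.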
LM Studio error:
text-generation-webui dev branch (llama-cpp-python) error: