Closed — Mithadon closed this issue 2 months ago
I had the same issue a few days ago, but then I saw: https://github.com/city96/ComfyUI-GGUF/issues/90#issuecomment-2323011648
It fixed my problem, so try that.
Thank you @RandomGitUser321 , that worked!
This seems like a common issue so I've added it to the instructions.
I've been trying to quantize a fine-tuned FLUX fp16 `.safetensors` file to Q8_0. I'm hopelessly stuck at the patching step, without which the cmake build and quantization fail. What am I doing wrong? I've tried with the most up-to-date llama.cpp, and also with commit 2fb9267. Thank you
```
P:\llama.cpp>git checkout tags/b3600
HEAD is now at 2fb92678 Fix incorrect use of ctx_split for bias tensors (#9063)

P:\llama.cpp>git apply ..\lcpp.patch
error: corrupt patch at line 27
```