city96 / ComfyUI-GGUF

GGUF Quantization support for native ComfyUI models
Apache License 2.0

lcpp.patch "corrupt patch at line 27" #99

Closed Mithadon closed 2 months ago

Mithadon commented 2 months ago

I've been trying to quantize a finetuned FLUX fp16 .safetensors file to Q8_0. I'm hopelessly stuck at the patching part, without which the cmake build and quantization fail. What am I doing wrong? I've tried with the most up-to-date llama.cpp, and also with commit 2fb9267. Thank you

P:\llama.cpp>git checkout tags/b3600
HEAD is now at 2fb92678 Fix incorrect use of ctx_split for bias tensors (#9063)

P:\llama.cpp>git apply ..\lcpp.patch
error: corrupt patch at line 27

RandomGitUser321 commented 2 months ago

I had the same issue a few days ago, but then I saw: https://github.com/city96/ComfyUI-GGUF/issues/90#issuecomment-2323011648

It fixed my problem, so try that.

Mithadon commented 2 months ago

Thank you @RandomGitUser321 , that worked!

city96 commented 2 months ago

This seems like a common issue so I've added it to the instructions.