Open cbresciano opened 2 months ago
These are entirely different data formats internally; to merge them, there would need to be something specialized that dequantizes the weights before merging and then requantizes the result at the end.
(FP16 and FP8 are both unquantized raw data, so they can happily merge as normal.)
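For illustration, here is a minimal sketch of that dequantize-merge-requantize round trip for bnb-NF4 weights, assuming bitsandbytes (>= 0.41) is installed and a CUDA device is available. The `merge_nf4` helper is hypothetical and is not part of ComfyUI or any existing node; it only shows what a specialized merge path would have to do.

```python
# Hypothetical sketch: merging two NF4-quantized weight tensors by
# round-tripping through full precision. Not ComfyUI code.
import torch
import bitsandbytes.functional as bnbf

def merge_nf4(q_a, state_a, q_b, state_b, ratio=0.5):
    # Dequantize both tensors back to full precision.
    a = bnbf.dequantize_4bit(q_a, quant_state=state_a)
    b = bnbf.dequantize_4bit(q_b, quant_state=state_b)
    # Plain linear interpolation, as a merge on FP16/FP8 weights does.
    merged = a * (1.0 - ratio) + b * ratio
    # Requantize the merged result back to NF4.
    return bnbf.quantize_4bit(merged, quant_type="nf4")

# Usage: quantize two random weights, then merge them 50/50.
w_a = torch.randn(1024, 1024, device="cuda")
w_b = torch.randn(1024, 1024, device="cuda")
qa, sa = bnbf.quantize_4bit(w_a, quant_type="nf4")
qb, sb = bnbf.quantize_4bit(w_b, quant_type="nf4")
q_merged, s_merged = merge_nf4(qa, sa, qb, sb, ratio=0.5)
```

Note that the round trip is lossy: requantization reintroduces NF4's block-wise approximation error, which may be one reason the merge nodes only operate on unquantized tensors. GGUF would need an analogous path using its own (different) block-quantization formats.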
Your question
I can merge Flux1 models in ComfyUI: FP16 and Kijai's FP8, using LoadCheckpoint and LoadDiffusionModel respectively. However, with the GGUF and bnb-NF4 models I cannot, even when I use the appropriate loaders. Is this a problem with the loaders or with the ModelMergeFlux1 node? Do we need to wait for a new version of the loaders? With the 4-bit models the merge fails, and CheckpointSave complains with a "no data" error.
Logs
No response
Other
No response