kijai / ComfyUI-CogVideoXWrapper


Can the GGUF model be converted to 1.58-bit quantization? #165

Open charmandercha opened 1 day ago

charmandercha commented 1 day ago

I really don't know much about the AI world and its limitations, but if this model could be converted to 1.58 bits, maybe that would make it more accessible?

jepjoo commented 23 hours ago

It can't be converted. The 1.58-bit BitNet models need to be trained from scratch.

For now, there are only some early test LLMs made in that format, nothing substantial. Maybe in the future it will be more relevant, maybe not, who knows.
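To illustrate why the format is called "1.58-bit": each weight takes one of three values, {-1, 0, +1}, which carries log2(3) ≈ 1.58 bits of information. Below is a minimal, illustrative sketch of BitNet-b1.58-style absmean ternary quantization (this is not the actual BitNet code, and the function name is made up). Applying this post hoc to a pretrained model like CogVideoX is exactly what destroys quality, which is why such models are trained from scratch with the quantization in the loop.

```python
import math

import numpy as np

def absmean_ternary(w: np.ndarray):
    """Quantize a weight tensor to {-1, 0, +1} using an absmean scale.

    Illustrative only: real BitNet b1.58 applies this during training
    (quantization-aware), not as a post-hoc conversion.
    """
    gamma = np.abs(w).mean() + 1e-8           # per-tensor scale factor
    q = np.clip(np.round(w / gamma), -1, 1)   # round, then clip to ternary
    return q, gamma

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, gamma = absmean_ternary(w)

print(sorted(set(q.flatten().tolist())))  # values drawn from {-1.0, 0.0, 1.0}
print(math.log2(3))                       # ≈ 1.58 bits of entropy per weight
```

The dequantized approximation is `q * gamma`; for a model trained in float weights, that approximation error is far too large, hence "needs to be trained from scratch."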

charmandercha commented 21 hours ago

> It can't be converted. The 1.58-bit BitNet models need to be trained from scratch.
>
> For now, there are only some early test LLMs made in that format, nothing substantial. Maybe in the future it will be more relevant, maybe not, who knows.

But it already exists: there is 1.58-bit fine-tuning, and a Llama model with that quantization is already available.