kijai / ComfyUI-CogVideoXWrapper

Can the GGUF model be converted to 1.58-bit quantization? #165

Closed. charmandercha closed this 3 days ago.

charmandercha commented 1 month ago

I really do not know much about the AI world and its limitations, but if this model can be converted to 1.58-bit, maybe it would make it more accessible?

jepjoo commented 1 month ago

Can't be converted. The 1.58-bit Bitnet models need to be trained from scratch.

For now, there are only some early test LLMs made in that format, nothing substantial. Maybe in the future it will be more relevant, maybe not, who knows.
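
For context, "1.58-bit" refers to ternary weights in {-1, 0, +1}, which cost log2(3) ≈ 1.58 bits each. The sketch below (hypothetical, not part of this repo) shows a naive post-hoc absmean ternary rounding in the spirit of BitNet b1.58; the per-weight error such rounding introduces is the reason these models are trained with the ternary constraint from the start rather than converted afterwards.

```python
import math
import torch

# log2(3) ~ 1.58: each ternary weight takes one of three values {-1, 0, +1}
print(f"ternary weights need ~{math.log2(3):.2f} bits each")

def absmean_ternary_quantize(w: torch.Tensor):
    """Naive post-hoc ternary rounding with an absmean scale (hypothetical helper).

    Without quantization-aware training, applying this to a pretrained
    full-precision model usually degrades quality badly.
    """
    scale = w.abs().mean().clamp(min=1e-8)
    w_q = (w / scale).round().clamp(-1, 1)  # values in {-1, 0, +1}
    return w_q, scale

# Quantize a random weight matrix and measure the reconstruction error
w = torch.randn(256, 256)
w_q, scale = absmean_ternary_quantize(w)
rel_err = (w - w_q * scale).norm() / w.norm()
print(f"relative reconstruction error: {rel_err:.2%}")
```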

charmandercha commented 1 month ago

> Can't be converted. The 1.58-bit Bitnet models need to be trained from scratch.
>
> For now, there are only some early test LLMs made in that format, nothing substantial. Maybe in the future it will be more relevant, maybe not, who knows.

But there already is 1.58-bit fine-tuning, and there is already a Llama model with that quantization.