Hello, amazing work with Guernika, I love using it.
I have a MacBook Air M1 with 8GB of RAM, and to run Guernika with XL models I need to compress them. Unfortunately, the conversion fails. After further research, it seems that Guernika Model Converter needs to be updated — see this fix: Fix quantize-nbits flag.
I am currently using version 7.4.1 (1).
Could you provide an update soon?
Thanks!