[Closed] leo-gan closed this issue 1 month ago.
Is there any news on implementing the [-1, 0, 1] (ternary) quantization described in the paper "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits"?
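For reference, the paper's BitNet b1.58 describes an "absmean" scheme: weights are scaled by their mean absolute value, rounded, and clipped to {-1, 0, +1}. A minimal NumPy sketch of that scheme (the function names and the per-tensor scaling granularity here are my own illustrative choices, not from any existing implementation):

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight tensor to {-1, 0, +1} using the absmean
    scheme from the BitNet b1.58 paper: divide by the mean absolute
    value, round to the nearest integer, then clip to [-1, 1]."""
    gamma = np.abs(w).mean()                      # per-tensor scale
    w_q = np.clip(np.round(w / (gamma + eps)), -1, 1)
    return w_q.astype(np.int8), gamma             # ternary weights + scale

def dequantize(w_q: np.ndarray, gamma: float) -> np.ndarray:
    """Recover an approximate real-valued tensor from the ternary one."""
    return w_q.astype(np.float32) * gamma

# Example: quantize a small random matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_q, gamma = absmean_ternary_quantize(w)
print(sorted(set(w_q.flatten().tolist())))  # only values from {-1, 0, 1}
```

In practice the gain comes from never materializing the dequantized weights: with ternary values, matrix multiplication reduces to additions and subtractions, which is what an integration would need to exploit.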
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.