bennmann opened this issue 1 month ago
Please add llama.cpp quantization functionality (convert to Q5_K_L, Q2_XS, etc.) to the Convert precision section.
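For context, a minimal sketch of the workflow such a feature would wrap, using llama.cpp's existing tooling (`convert_hf_to_gguf.py` and the `llama-quantize` binary). The paths, model directory, and the `Q5_K_M` quant type chosen here are assumptions for illustration, not the tool's actual implementation:

```python
# Sketch only: shell out to upstream llama.cpp to produce a quantized GGUF.
# Assumes a local clone of llama.cpp with llama-quantize already built.
import subprocess
from pathlib import Path

LLAMA_CPP_DIR = Path("llama.cpp")       # assumed local clone of llama.cpp
HF_MODEL_DIR = Path("my-hf-model")      # hypothetical Hugging Face model directory
F16_GGUF = Path("model-f16.gguf")
QUANT_TYPE = "Q5_K_M"                   # one of llama.cpp's built-in quant types
QUANT_GGUF = Path(f"model-{QUANT_TYPE}.gguf")

# Step 1: convert the HF checkpoint to an unquantized (f16) GGUF file.
subprocess.run(
    ["python", str(LLAMA_CPP_DIR / "convert_hf_to_gguf.py"),
     str(HF_MODEL_DIR), "--outfile", str(F16_GGUF), "--outtype", "f16"],
    check=True,
)

# Step 2: quantize the f16 GGUF down to the requested precision.
subprocess.run(
    [str(LLAMA_CPP_DIR / "llama-quantize"), str(F16_GGUF), str(QUANT_GGUF), QUANT_TYPE],
    check=True,
)
```

Note that names like Q5_K_L are community conventions layered on top of llama.cpp's built-in types, so exposing them would need extra handling beyond the plain `llama-quantize` type argument.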