Closed: Sing-Li closed this issue 2 months ago
Thanks for your issue! @Sing-Li I believe we already support InternLM2.5, since it has the same model structure as InternLM2.
Thanks @tlopex. Indeed, it looks like the model is fully supported here: https://github.com/mlc-ai/mlc-llm/pull/2630
However, I was not able to find any ready-to-run converted model weights on Hugging Face ( https://huggingface.co/mlc-ai ). Do you know of another source for them?
@Sing-Li Thanks for pointing that out. I'll upload the model weights to the Hugging Face repo later.
Hi @Sing-Li, ready-to-run converted InternLM2.5 models in 1.8B, 7B, and 20B sizes have been uploaded to the mlc-ai Hugging Face repo; sorry for the delay. Thanks to @CharlieFRuan and @MasterJH5574 for their help.
⚙️ Request New Models
Additional context
They seem to have their own quantized-model assembly and deployment pipeline already, but supporting the model under the MLC LLM pipeline would add significant value for everyone.