Open olumolu opened 4 weeks ago
WE REALLY REALLY NEED THAT MODEL!!!!!!
yes
Fully agree.
yess !!
Fully agree +1. REALLY NEED.
Please give support for 1B model for edge deployment
Hi @ParthaPRay, I don't think we will be able to run that model on edge. I tried running it on an L4 GPU with no luck, though after quantization I think I might be able to run it on the L4.
From the documentation: MolmoE-1B is a multimodal Mixture-of-Experts LLM with 1.5B active and 7.2B total parameters.
100% +1
Dear @Streamweaver, I think it is doable. I am currently running quantized builds of various small LLMs on edge devices and have had success. If you can provide support for MolmoE-1B with Q4_K_M or Q4_0 quantization, that would suffice.
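Since Q4_0 keeps coming up in this thread: for anyone curious what that quantization level actually does to the weights, here is a minimal sketch of Q4_0-style block quantization. This is a simplified illustration of the scheme llama.cpp uses (one float scale per 32-weight block, 4 bits per weight), not its actual code:

```python
import math

def quantize_q4_0(block):
    """Map a block of floats to 4-bit codes plus one float scale,
    in the spirit of llama.cpp's Q4_0 (simplified illustration)."""
    # The scale maps the largest-magnitude weight onto -8, the int4 extreme.
    amax = max(block, key=abs)
    d = amax / -8.0 if amax else 1.0
    # Each weight becomes a code in [0, 15] (4 bits), offset by +8.
    q = [min(15, max(0, round(x / d) + 8)) for x in block]
    return d, q

def dequantize_q4_0(d, q):
    # Reconstruct approximate floats from the 4-bit codes.
    return [(code - 8) * d for code in q]

weights = [math.sin(i) for i in range(32)]   # one 32-value block, as Q4_0 uses
d, q = quantize_q4_0(weights)
recon = dequantize_q4_0(d, q)
err = max(abs(a - b) for a, b in zip(weights, recon))
print(f"scale={d:.4f}, max abs error={err:.4f}")
```

The storage cost works out to 32 x 4 bits plus one scale per block, roughly 4.5 bits per weight, which is why a 7B-class model can fit in a few GB at this level.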
10000% yes
Please.
yes
PLEASE PLEASE PLEASE
What is the status?
Thank you ollama Team for your great work :)
Is this task dependent on resolution of https://github.com/ggerganov/llama.cpp/issues/9645? Any estimate on how long it might take to support Molmo-72B-0924 via ollama?
Help with Molmo 1B
Please include Molmo 1B in ollama; it would help with edge deployment.
+1 for molmo 7b D
and https://huggingface.co/allenai/Molmo-7B-D-0924 please.
What is the status of support for this model? Why does ollama not support properly open-source models instead of the so-called open-source Meta or Microsoft models? @dhiltgen
Since Molmo has not made GGUF weights available, it is difficult to include it in ollama, or any other AI client for that matter.
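For reference, if and when someone does convert Molmo to GGUF, importing it into ollama would be the standard Modelfile route (the .gguf file name here is hypothetical):

```
FROM ./molmoe-1b-q4_0.gguf
```

followed by `ollama create molmoe-1b -f Modelfile`. Until a conversion exists, there is nothing to point `FROM` at.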
Why does ollama not support properly open-source models instead of the so-called open-source Meta or Microsoft models?
You're talking to volunteers spending their free time doing something nice for you, for free, instead of spending time with their friends, or family, or making money, or doing something fun.
Maybe calm down with the attitude? How about you do it?
https://huggingface.co/allenai/Molmo-7B-D-0924 https://huggingface.co/allenai/Molmo-72B-0924 These models are really good, have real potential, and are fully open source; please give support for them.
Thanks.