-
Is loading Llama 3.2 model variants already possible with the current implementation? It would be amazing to be able to use the smaller Llama 3.2 variants on mobile :) Thanks!
-
What am I doing wrong?
FETCH DATA from: /Users/pmm17200/Documents/pinokio/api/comfy.git/app/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
HTTP Request: GET http://127.0.0.1:114…
-
I tried looking for a discussion forum but couldn't find one. Would you mind doing a codebase walkthrough or a video using Llama 3.2, please?
Thank you.
-
Would it be possible to convert/use Llama 3.2?
-
Bring up Llama 3.2 model family on Wormhole, T3K and TG
-
I would like to run Llama-3.2 11B Vision in KoboldCPP. Ollama recently added support for it, so I guess it should be possible to bring it here too :)
Support for GGUF conversion is also needed!
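For context, a minimal conversion sketch for the text-only Llama 3.2 checkpoints (1B/3B) using llama.cpp's convert_hf_to_gguf.py; the model path, output file, and dtype below are illustrative, and the 11B Vision model would additionally need vision/multimodal support, which is the part being requested here:
```
# Sketch: convert a text-only Llama 3.2 checkpoint to GGUF with llama.cpp's
# conversion script; the model path, output file, and dtype are examples only.
python convert_hf_to_gguf.py /path/to/Llama-3.2-3B-Instruct \
    --outfile llama-3.2-3b-instruct-f16.gguf \
    --outtype f16
```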
-
When I use meta-llama/Llama-3.2-1B, I get the error below. Can it be fixed?
```
RuntimeError: Error(s) in loading state_dict for Transformer:
    Missing key(s) in state_dict: "tok_embeddings.weight", "layers.0.attention.wq…
```
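The missing keys ("tok_embeddings.weight", "layers.0.attention.wq…") follow Meta's reference checkpoint layout, so this error usually means the loader and the downloaded weights use different formats (the Hugging Face export names the same tensors model.embed_tokens.weight, model.layers.N.self_attn.q_proj.weight, and so on). As a sanity check, here is a minimal sketch that loads the 1B model through the transformers API, which resolves the key mapping itself; it assumes the transformers package is installed and access to the gated repo has been granted:
```
# Minimal sketch: load meta-llama/Llama-3.2-1B via Hugging Face transformers,
# which handles the checkpoint key names internally.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello from Llama 3.2!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```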
-
Hello there and thanks for sharing the code!
Will this solution work when using the latest Llama 3.2?
-
Hi Team,
I have attempted Knowledge Distillation using Torchtune for the 8B and 1B Instruct models. However, I still need to apply KD to the Vision Instruct model. I followed the same steps and cre…
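For reference, a sketch of the CLI flow being described; the knowledge_distillation_single_device recipe exists in torchtune, but the exact config name below is an assumption and should be confirmed with tune ls:
```
# List the available recipes and configs to locate the Llama 3.2 KD config.
tune ls

# Sketch: single-device knowledge distillation. The config name is illustrative;
# substitute whichever Llama 3.2 teacher/student KD config `tune ls` reports.
tune run knowledge_distillation_single_device \
    --config llama3_2/8B_to_1B_KD_lora_single_device
```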
-
### 📚 The doc issue
I use this command to convert the model (Llama-3.2-1B):
```
python -m examples.models.llama.export_llama --checkpoint "${MODEL_DIR}/consolidated.00.pth" -p "${MODEL_DIR}/params.json" -…
```