-
This is fantastic to see happening. For experimentation and cost management, it would be awesome to enable open-source models (MLX-friendly).
-
### Have you read and agreed to the Datawhale Open Source Project Guide?
- [X] I have read and agree to the [Datawhale Open Source Project Guide](https://github.com/datawhalechina/DOPMC/blob/main/GUIDE.md)
### Have you read and agreed to the Datawhale Open Source Project Code of Conduct?
- [X] I have read and agree to the [Datawhale Open Source Project Code of Conduct](h…
-
-
Is loading Llama 3.2 model variants already possible with the current implementation? It would be amazing to use the smaller Llama 3.2 variants on mobile :) Thanks!
-
Allow backtracking in an instance by a given number of tokens
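As a rough illustration of the requested behavior, here is a minimal pure-Python sketch. The `TokenBuffer` class and its methods are hypothetical names for illustration only, not the project's actual API.

```python
# Hypothetical sketch: backtrack a generation buffer by a given number
# of tokens. Names here are illustrative, not the project's real API.
class TokenBuffer:
    def __init__(self, tokens=None):
        self.tokens = list(tokens or [])

    def append(self, token):
        self.tokens.append(token)

    def backtrack(self, n):
        """Discard the last n tokens and return them in order."""
        if n < 0:
            raise ValueError("n must be non-negative")
        n = min(n, len(self.tokens))  # clamp to buffer length
        cut = len(self.tokens) - n
        removed = self.tokens[cut:]
        del self.tokens[cut:]
        return removed

buf = TokenBuffer([101, 42, 7, 9])
dropped = buf.backtrack(2)
print(buf.tokens, dropped)  # [101, 42] [7, 9]
```

Returning the removed tokens lets the caller re-feed them later if the backtrack is speculative (e.g. constrained decoding that rejects a partial continuation).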
-
**Describe the bug**
I’ve been trying to parse text and PDF files through the LlamaParse API multiple times, but I keep encountering the same error.
The issue has persisted for over two hours with…
-
## Issue encountered
Currently, inference of open models on my Mac device is quite slow, since vLLM does not support MPS.
## Solution/Feature
llama.cpp does support MPS and would significantly spe…
-
The Llama checkpoints (13B, 7B) on Hugging Face seemingly cannot be loaded directly when training MiniLLM, since they do not account for model parallelism. Is there any way to convert the weigh…
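For intuition, converting a single-file checkpoint for model parallelism typically means sharding each weight matrix across ranks. Below is a minimal pure-Python sketch of column-wise splitting; the function name and list-of-rows layout are illustrative assumptions, not MiniLLM's actual conversion script.

```python
# Illustrative sketch: shard a 2-D weight matrix column-wise across
# model-parallel ranks. Not MiniLLM's real converter; real code would
# operate on torch tensors loaded from the checkpoint.
def split_columns(matrix, num_ranks):
    """Split a matrix (list of rows) into num_ranks column shards."""
    cols = len(matrix[0])
    assert cols % num_ranks == 0, "columns must divide evenly across ranks"
    shard = cols // num_ranks
    return [
        [row[r * shard:(r + 1) * shard] for row in matrix]
        for r in range(num_ranks)
    ]

w = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
shards = split_columns(w, 2)
print(shards[0])  # [[1, 2], [5, 6]]
print(shards[1])  # [[3, 4], [7, 8]]
```

Row-wise splits work the same way on the other axis; which axis each layer uses depends on how the parallel implementation partitions its matmuls.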
-
### 📚 The doc issue
I use this command to transform the model (Llama-3.2-1B):
```
python -m examples.models.llama.export_llama --checkpoint "${MODEL_DIR}/consolidated.00.pth" -p "${MODEL_DIR}/params.json" -…
-
We should add Llama models: they are freely available (with some rate limits) on platforms like **Hugging Face** or **GroqCloud**, so we could integrate them into **_TaxyAI_**.
**Note:** _If Maintai…