-
hey,
thanks for providing the torchtune framework.
I have an issue with a timeout when saving a checkpoint for a Llama 3.1 70B LoRA run on multiple GPUs.
I am fine-tuning on an AWS EC2 instance with 8xV100 GPUs…
-
Can the Ollama URL be configured to point to a remote box?
Or try using an SSH tunnel to make the remote Ollama appear to be local.
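A minimal sketch of the SSH-tunnel approach, assuming Ollama's default port 11434 and a hypothetical `user@remote-box` host (substitute your own):

```shell
# Forward local port 11434 to the Ollama server on the remote machine,
# so clients that expect a local Ollama at http://localhost:11434
# work unchanged. -N means "no remote command, just forward ports".
ssh -N -L 11434:localhost:11434 user@remote-box

# Alternatively, the ollama CLI honors the OLLAMA_HOST environment
# variable, so you can point it at the remote box directly (this
# requires the remote server to listen on an external interface,
# e.g. started with OLLAMA_HOST=0.0.0.0 on the remote side):
export OLLAMA_HOST=remote-box:11434
ollama list
```

Whether a given client tool exposes a base-URL setting varies; the tunnel has the advantage of working with any client that assumes a local Ollama.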
-
## 🐛 Bug
Llama-3-8B-Instruct-q4f16_1-MLC does not run
## To Reproduce
Steps to reproduce the behavior:
1. conda create --name mlc-prebuilt python=3.11
2. conda activate mlc-prebuilt
3…
-
### 🚀 The feature, motivation and pitch
Is the deepseek-v2 AWQ version supported now? When I run it, I get the following error:
```
[rank0]: File "/usr/local/lib/python3.9/dist-packages/vllm/mo…
-
The package complains about "torch" not being installed when it is most definitely installed.
(.env) chris@localhost:~$ pip install flash_attn
Collecting flash_attn
Using cached fla…
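This "torch not installed" error during `pip install flash_attn` is commonly caused by pip's build isolation: the package's build step imports `torch`, but the isolated build environment cannot see the copy installed in the active venv. A hedged workaround (assuming `torch` really is installed in the environment you are building in) is to disable build isolation:

```shell
# flash-attn's build needs torch at build time; pip's default build
# isolation hides the environment's torch, producing the misleading
# "torch not installed" complaint. Ensure build tooling is present,
# then build against the environment's own packages:
pip install --upgrade pip setuptools wheel
pip install flash_attn --no-build-isolation
```

Note that with `--no-build-isolation` pip no longer installs build dependencies for you, which is why the tooling is upgraded explicitly first.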
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I'm executing following line of code:
```
new_index.storage_context.persist(pers…
-
When I run python llava_llama_v2_visual_attack.py --n_iters 5000 --constrained --save_dir results_llava_llama_v2_constrained_16 --eps 16 --alpha 1, I run into the following problems.
model = /mnt/local/LL…
-
I'd like to run live llava completely locally on the Jetson, including the web browser.
However, if I turn off wifi before starting live llava, the video won't play in the browser.
If I turn off wifi after…
-
### Validations
- [ ] I believe this is a way to improve. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [ ] I'm not able to find an [open issue](https://githu…
-
I don't understand how to set chat_llm to Ollama when there is no provision for setting utility_llm and/or embedding_llm to their local (Ollama) counterparts. Yes, I assume that prompting will be a challenge…