-
no_gt retrieval metrics need a large amount of LLM processing.
So, use a local LLM to compute them.
+ ragas context precision needs too many LLM calls, so try using tonic validate instead.
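A minimal sketch of the local-LLM route (the `ChatOllama` judge, model name, and sample rows are illustrative assumptions, and exact wrapper names vary across ragas versions):

```python
from datasets import Dataset
from langchain_community.chat_models import ChatOllama
from ragas import evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import context_precision

# Tiny illustrative dataset in the column layout ragas expects.
data = Dataset.from_dict({
    "question": ["What does context precision measure?"],
    "contexts": [["Context precision scores how well retrieved chunks are ranked."]],
    "ground_truth": ["It measures ranking quality of the retrieved contexts."],
})

# Route the many judge calls to a local Ollama model instead of a hosted API.
local_judge = LangchainLLMWrapper(ChatOllama(model="llama3"))
result = evaluate(data, metrics=[context_precision], llm=local_judge)
print(result)
```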
-
### Validations
- [ ] I believe this is a way to improve. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [ ] I'm not able to find an [open issue](https://githu…
-
## 🐛 Bug
Llama-3-8B-Instruct-q4f16_1-MLC does not run.
## To Reproduce
Steps to reproduce the behavior:
1. conda create --name mlc-prebuilt python=3.11
2. conda activate mlc-prebuilt
3…
-
Hey,
thanks for providing the torchtune framework.
I have an issue with a timeout when saving a checkpoint for Llama 3.1 70B LoRA on multiple GPUs.
I am tuning on an AWS EC2 instance with 8x V100 GPUs…
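Not a torchtune-specific fix, just a hedged sketch: if the hang is the NCCL collective timing out while rank 0 gathers and writes the 70B state dict, one common workaround is to raise the process-group timeout when initializing distributed training:

```python
from datetime import timedelta
import torch.distributed as dist

# Allow slow checkpoint serialization to finish before NCCL aborts the job.
dist.init_process_group(backend="nccl", timeout=timedelta(hours=2))
```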
-
**Title:** Automatically label medical data from diagnosis reports
**Project Lead:** Frank Langbein, frank@langbein.org
**Description:** We wish to automatically label medical diagnosis data (MRI,…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I'm executing the following line of code:
```
new_index.storage_context.persist(pers…
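# (the call above is truncated in the question)
# A hedged sketch of the usual persist/reload round trip; "./storage" is an
# illustrative directory, not taken from the question:
from llama_index.core import StorageContext, load_index_from_storage

new_index.storage_context.persist(persist_dir="./storage")
storage_context = StorageContext.from_defaults(persist_dir="./storage")
new_index = load_index_from_storage(storage_context)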
-
### 🚀 The feature, motivation and pitch
Is the deepseek-v2 AWQ version supported now? When I run it, I get the following error:
```
[rank0]: File "/usr/local/lib/python3.9/dist-packages/vllm/mo…
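# (traceback truncated above)
# A hedged sketch of how an AWQ checkpoint is normally loaded in vLLM; the
# model path is illustrative, not taken from this report:
from vllm import LLM

llm = LLM(model="path/to/DeepSeek-V2-AWQ", quantization="awq",
          trust_remote_code=True)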
-
### System Info
I am experimenting with TRT LLM and `flan-t5` models. My simple goal is to build engines with different configurations and tensor parallelism, then review performance. Have a DGX syst…
-
Can the Ollama URL be configured to point to a remote box?
Or try using an SSH tunnel to make the remote Ollama appear local.
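A hedged sketch of both options (host address and model name are placeholders), assuming the `ollama` Python client:

```python
import ollama

# Option 1: point the client at the remote box directly
# (equivalently, set the OLLAMA_HOST environment variable).
client = ollama.Client(host="http://remote-box:11434")
print(client.chat(model="llama3",
                  messages=[{"role": "user", "content": "hello"}]))

# Option 2: forward the default port over SSH so the remote server
# answers on localhost:11434:
#   ssh -L 11434:localhost:11434 user@remote-box
```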
-
When I run `python llava_llama_v2_visual_attack.py --n_iters 5000 --constrained --save_dir results_llava_llama_v2_constrained_16 --eps 16 --alpha 1`, I run into the following problems.
model = /mnt/local/LL…