-
When I try to evaluate the quantized AWQ models using the video evaluation script, I get a FileNotFoundError.
```
FileNotFoundError: No such file or directory: "/hfhub/hub/models--Efficient-La…
```
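A minimal sketch of how the cache could be checked and pre-filled, assuming the missing file is the model snapshot itself; the repo id below is a placeholder (the real one is truncated in the traceback), and the cache root is taken from the error path:
```
from huggingface_hub import scan_cache_dir, snapshot_download

CACHE_DIR = "/hfhub/hub"            # cache root taken from the error message
REPO_ID = "ORG/AWQ-MODEL-NAME"      # placeholder; the real repo id is cut off above

# List the repos already present in the cache the eval script reads from.
cached = {repo.repo_id for repo in scan_cache_dir(CACHE_DIR).repos}

if REPO_ID not in cached:
    # Pre-fill the cache so the eval script finds the files it expects.
    snapshot_download(repo_id=REPO_ID, cache_dir=CACHE_DIR)
```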
-
```
from ipex_llm import optimize_model
from transformers import LlavaForConditionalGeneration
model = LlavaForConditionalGeneration.from_pretrained('llava-hf/llava-1.5-7b-hf', device_map="cpu")
m…
```
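For reference, the usual pattern looks like the sketch below; this is an assumption about where the truncated snippet is headed, based on the IPEX-LLM docs, not the reporter's actual code:
```
from ipex_llm import optimize_model
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = 'llava-hf/llava-1.5-7b-hf'
model = LlavaForConditionalGeneration.from_pretrained(model_id, device_map="cpu")
# Wrap the loaded model with IPEX-LLM's low-bit optimizations for CPU inference.
model = optimize_model(model)

processor = AutoProcessor.from_pretrained(model_id)
```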
-
This blog post describes the unique challenges of evaluating LLM-based apps:
https://medium.com/@ptannor/deepchecks-new-major-release-evaluation-for-llm-based-apps-82786e1ea109
Does there exist a tu…
-
### Describe the bug
I want to evaluate a particular trace in which both my answer and the citations appear in the output, so I gave it the following prompt:
![image](https://github.com/user-attachme…
-
### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.15.0-120-generic-x86_64-with-glibc2.35
- Python version: 3.12.4
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4…
-
When I run:
```
RAYON_NUM_THREADS=6 CUDA_VISIBLE_DEVICES=0,1,2,3 python3 gen_model_answer_rest.py --model-path /models/LLAMA-2-series/llama-2-70b-chat --model-id llama-2-70b-chat --datastore-path ..…
```
-
All steps are based on these docs:
https://ryzenai.docs.amd.com/en/latest/inst.html
https://ryzenai.docs.amd.com/en/latest/llm_flow.html
https://github.com/amd/RyzenAI-SW/blob/main/example/transfor…
-
# Overview
We evaluate the v3 172B exp2 base model.
# Details
Pre-training model issue: https://github.com/llm-jp/experiments/issues/9
We evaluate each checkpoint using the HF-converted models.
Results will be posted, as they become available, to the [Ablation Study spreadsheet](https://…
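For illustration, a minimal sketch of the per-checkpoint loop; the checkpoint root and the evaluate() stub are placeholders, not the actual llm-jp evaluation harness:
```
# Minimal sketch of evaluating every HF-converted checkpoint in turn.
from pathlib import Path
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT_ROOT = Path("/path/to/hf-converted-checkpoints")  # placeholder path


def evaluate(model, tokenizer) -> dict:
    """Stand-in for the real evaluation suite; returns a trivial metric."""
    return {"num_parameters": model.num_parameters()}


for ckpt_dir in sorted(CHECKPOINT_ROOT.iterdir()):
    tokenizer = AutoTokenizer.from_pretrained(ckpt_dir)
    model = AutoModelForCausalLM.from_pretrained(ckpt_dir, torch_dtype="auto")
    results = evaluate(model, tokenizer)
    print(ckpt_dir.name, results)  # scores are then copied to the spreadsheet
```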
-
Hello,
I'm new to LLM serving and multi-modal LLMs. I'm looking for an example for the LongVILA model similar to the one below for the VILA1.5 models:
```
python -W ignore llava/eval/run_vila.py --mod…
```
-
Command: (xtuner-env) root@autodl-container-d293479255-f53de588:~/autodl-tmp/data# xtuner train sh/internlm2_5_chat_7b_qlora_oasst1_e3_copy.py --deepspeed deepspeed_zero2
Error message: 10/18 16:45:32 - mmengine - W…