-
## Overview
Create an auto-run script for the [automatic evaluation scripts](https://github.com/llm-jp/scripts/tree/main/evaluation/installers/llm-jp-eval-v1.3.1)
- In connection with this, the [convert script](https://github.com/llm-jp/scripts/tree/main/pretrain/scrip…
-
### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://docs.all-hands.dev/modules/usage/troubleshooting
- [X] I have checked the existing iss…
-
# Overview
Create scripts for llm-jp-eval v1.4.
# Details
Create install and run scripts for llm-jp-eval v1.4.
Choice of execution method:
* Offline evaluation (vllm)
* 10x speedup
* Requires a conversion script
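The offline (vLLM) path could look roughly like the sketch below. The model path, prompt format, and sampling settings are placeholders, not the actual llm-jp-eval implementation; the point is that vLLM needs Hugging Face-format weights, which is why a conversion script is required first.

```python
# Sketch: offline batch generation with vLLM for evaluation (assumed settings).

def build_prompts(examples):
    """Join instruction and input into one prompt string per example.

    This "instruction\ninput" format is a placeholder; llm-jp-eval uses
    per-task prompt templates.
    """
    return [f"{ex['instruction']}\n{ex['input']}" for ex in examples]


def main():
    # Requires a vLLM install; invoke main() only in that environment.
    from vllm import LLM, SamplingParams

    # Path is hypothetical: point at a checkpoint already converted to HF format.
    llm = LLM(model="path/to/converted-hf-checkpoint")
    params = SamplingParams(temperature=0.0, max_tokens=256)

    examples = [{"instruction": "Answer briefly.", "input": "What is 2+2?"}]
    for out in llm.generate(build_prompts(examples), params):
        print(out.outputs[0].text)
```

Batching all prompts into a single `llm.generate` call is where the speedup over per-example generation comes from.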
-
When using llm-foundry for model evaluation, multi-GPU mode does not work.
The source code is here: https://github.com/mlfoundations/open_lm/blob/main/eval/eval_openlm_ckpt.py
-
All steps are based on these docs:
https://ryzenai.docs.amd.com/en/latest/inst.html
https://ryzenai.docs.amd.com/en/latest/llm_flow.html
https://github.com/amd/RyzenAI-SW/blob/main/example/transfor…
-
When I initialize draftretriever.Reader, I get this error:
python3 gen_model_answer_rest.py
loading the datastore ...
Traceback (most recent call last):
File "/mnt/gefei/REST/llm_judge/gen_mo…
-
[x] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
I am following [this RAGAs documentation](https://docs.ragas…
-
In choose_chunk_size.ipynb, the following code:
```python
# create vector index
llm = OpenAI(model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=chunk_siz…
```
-
My training is done in stages:
1. Freeze the vision side (vision + resampler) and fine-tune only the LLM
2. Freeze the LLM and fine-tune the vision side
3. Fine-tune everything
But the loss curve now looks bad. What could be the problem?
The dataset is math-related: the input is a problem statement plus an image, and the output is the problem's key points.
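For reference, the staged freezing described above can be sketched as below. The parameter-name prefixes `vision.`, `resampler.`, and `llm.` are assumptions about the model's module layout, not names taken from the actual code.

```python
# Sketch: staged freezing for a vision-language model (prefix names assumed).

def set_trainable(model, train_prefixes):
    """Enable gradients only for parameters whose names start with one of
    train_prefixes; freeze everything else."""
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in train_prefixes)

# Stage 1: freeze vision + resampler, fine-tune only the LLM
#   set_trainable(model, ("llm.",))
# Stage 2: freeze the LLM, fine-tune the vision side
#   set_trainable(model, ("vision.", "resampler."))
# Stage 3: full fine-tune (the empty prefix matches every parameter)
#   set_trainable(model, ("",))
```

One thing worth checking with a staged schedule like this is that the optimizer is rebuilt (or its parameter groups updated) at each stage boundary, so frozen parameters are actually excluded from updates.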
Here are the settings from the bash file:
```
per_device_train_batch_size=1
per_device_ev…
-
[ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Your Question**
faithfulness_score is always NaN
**Code Examples**…