-
PyTorch version too old for fused optimizer
```
llm-full-mp-gpus.0 [stderr] [rank0]: Traceback (most recent call last):
llm-full-mp-gpus.0 [stderr] [rank0]: File "/homes/delaunap/milabench/benc…
```
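If the failure is indeed the optimizer rejecting `fused=True` on an older PyTorch build, a minimal workaround sketch is to gate the flag on the installed version; the helper name and the version threshold below are assumptions, not taken from the benchmark code:

```python
import torch

def make_optimizer(params, lr=1e-4):
    # Hypothetical guard: the fused AdamW path needs CUDA tensors and a
    # reasonably recent PyTorch (roughly 2.0+); otherwise fall back to the
    # default (non-fused) implementation.
    major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
    use_fused = torch.cuda.is_available() and (major, minor) >= (2, 0)
    return torch.optim.AdamW(params, lr=lr, fused=use_fused)
```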
-
Segmentation fault when using the dev container to train the llm finetune recipe:
```
nemo.collections.llm.api.finetune/0 [NeMo I 2024-08-28 07:01:29 strategies:244] Fixing mis-match between ddp-conf…
```
-
# URL
- https://arxiv.org/abs/2402.17193
# Affiliations
- Biao Zhang, N/A
- Zhongtao Liu, N/A
- Colin Cherry, N/A
- Orhan Firat, N/A
# Abstract
- While large language models (LLMs) often ado…
-
**Command:** `tune run lora_finetune_single_device --config llama3_1/8B_lora_single_device`
**Output:**
```
INFO:torchtune.utils._logging:Running LoRAFinetuneRecipeSingleDevice with resolved config:…
```
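For reference, torchtune accepts key=value overrides for fields of the resolved config directly on the `tune run` command line; a hypothetical example (the override values below are placeholders, not from this report):

```
tune run lora_finetune_single_device --config llama3_1/8B_lora_single_device \
    batch_size=2 dtype=bf16
```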
-
```
from langchain_openai import ChatOpenAI
import pandas as pd
glm4_base_client = ChatOpenAI(model="glm-4v-9b",
    api_key="your_api_key",
    base…
```
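For context, a complete version of this snippet might look like the sketch below, assuming an OpenAI-compatible endpoint serving glm-4v-9b; the `base_url` value is a placeholder, not the reporter's actual setup:

```python
from langchain_openai import ChatOpenAI

# Placeholder endpoint and key for an OpenAI-compatible server hosting
# glm-4v-9b; substitute the real values for your deployment.
glm4_base_client = ChatOpenAI(
    model="glm-4v-9b",
    api_key="your_api_key",
    base_url="http://localhost:8000/v1",
)

response = glm4_base_client.invoke("Hello, what can you do?")
print(response.content)
```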
-
Hi, I tried finetuning both llama 3.1-8b-instruct and llama 3-8b-instruct following the notebook you provided [here](https://colab.research.google.com/drive/1XamvWYinY6FOSX9GLvnqSjjsNflxdhNc?usp=shari…
-
I have a question about finetuning. My dataset contains images and text that the model is not familiar with. While finetuning, should I keep the parameters tune_llm and tune_vision set to true or false? By …
-
Issue: there are many examples, and it is getting harder to know what they are about just from the example name.
My recommendation would be to:
1) group examples into bucket folders
2) add a left column…
-
### Summary
# Motivation
WasmEdge is a lightweight inference runtime for AI and LLM applications. Build specialized and finetuned models for the WasmEdge community. The model should be supported by Wa…
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue y…