-
When I use the Phi-3.5-mini-instruct model, I found that the memory required for processing inputs is much larger than for other models / than what is theoretically needed.
Here is the code and the CUDA error…
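As a rough sanity check on "theoretically needed" memory, the KV-cache footprint can be estimated from the model configuration. A minimal sketch (the numbers below are illustrative placeholders, not necessarily Phi-3.5-mini's actual config values — check the model's `config.json`):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len,
                   batch_size=1, bytes_per_elem=2):
    """Estimate KV-cache memory: one K and one V tensor per layer,
    each of shape (batch, kv_heads, seq_len, head_dim)."""
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Illustrative config values only (fp16, so 2 bytes per element).
gib = kv_cache_bytes(num_layers=32, num_kv_heads=32,
                     head_dim=96, seq_len=4096) / 2**30
print(f"{gib:.2f} GiB")  # → 1.50 GiB
```

If observed memory use is many times this estimate plus the weights, something beyond the KV cache (e.g. activation buffers or a framework default) is likely responsible.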
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.9.0
- Platform: Linux-5.4.143-2-velinux1-amd64-x86_64-with-glibc2.35
- Pyth…
-
### What is the issue?
After setting the iGPU allocation to 16GB (out of 32GB), some models crash when loaded, while others manage.
```
ollama run llama3.2
Error: llama runner process has terminated: c…
```
-
I want to fine-tune LLaMa on data I got from a fandom wiki ([for example this page](https://minecraft.fandom.com/wiki/Iron_Ingot)) and was wondering how to design the json file with its "prompt", "inp…
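For reference, instruction-tuning datasets are often laid out as a JSON array of Alpaca-style records; a minimal sketch building one record from wiki content (the field names and text here are the common convention and a made-up example, not necessarily what your training script expects):

```python
import json

# Hypothetical example record in the common Alpaca-style layout;
# rename the fields to match your fine-tuning framework's schema.
record = {
    "instruction": "Describe how this item is obtained in Minecraft.",
    "input": "Iron Ingot",
    "output": "Iron ingots are obtained by smelting iron ore in a furnace...",
}

# The dataset file is a JSON array of such records.
with open("wiki_dataset.json", "w") as f:
    json.dump([record], f, indent=2)
```

Each wiki page can yield several records by pairing different instructions (crafting, usage, trivia) with the same page as context.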
-
The llama.cpp integration within the playbook does not work. In any case, I manually created the GGUF file, but when I try to serve the model using the llama.cpp server I get the following error…
-
### What behavior of the library made you think about the improvement?
Generation cannot be interleaved with function calls and return values.
### How would you like it to behave?
We should a…
-
Error message:
Using downloaded and verified file: /data/MiniGPT4Qwen/lavis/../cache/dataset/llava_instruct/llava_instruction_156k.json
2024-07-02 14:04:21,365 [INFO] Building datasets...
Using downloaded a…
-
Command being run:
output_dir='lora/Meta-Llama-3-8B-Instruct-PACK'
mkdir -p ${output_dir}
CUDA_VISIBLE_DEVICES="0,1,2,3" torchrun --nproc_per_node=4 --master_port=1287 src/finetune.py \
--do_train --do_eva…
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
Hey Daniel!
I have been using Unsloth to help fine-tune a Llama-3 instruct model on a dataset of mine.
Now that I want to run the model on many prompts (>1M), I am worried about the inference spee…
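For prompt sets of this size, throughput usually comes from batched generation rather than a one-prompt-at-a-time loop. A minimal, framework-agnostic sketch of chunking prompts into batches (the batch size and the generate call are placeholders, not Unsloth's API):

```python
from typing import Iterator

def batched(items: list[str], batch_size: int) -> Iterator[list[str]]:
    """Yield fixed-size chunks of prompts; the last chunk may be smaller."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def run_inference(prompts: list[str], batch_size: int = 64) -> list[str]:
    outputs: list[str] = []
    for batch in batched(prompts, batch_size):
        # Placeholder: replace with your engine's batched generate call
        # (e.g. a vLLM or transformers pipeline invocation).
        outputs.extend(f"<completion for: {p}>" for p in batch)
    return outputs
```

A dedicated serving engine with continuous batching will usually be faster still for >1M prompts, but even plain chunking like this avoids paying per-call overhead a million times.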