-
My environment matches requirements.txt exactly, except that I am using bitsandbytes-0.41.3.
Running `bash fintune_lora_llama3_8B_chat.sh` on a single 80 GB H100 fails with CUDA out of memory.
The complete log is below; what is causing this?
[2024-10-22 12:37:30,959] [INFO] [real_accelerator.py:1…
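The log is cut off above, but for reference: a common way to avoid OOM when LoRA-finetuning an 8B model on a single GPU is 4-bit quantization plus gradient checkpointing. A minimal sketch, assuming the script is built on Hugging Face transformers/peft/bitsandbytes (the model name and LoRA hyperparameters are illustrative choices, not taken from the repo's script):

```python
# Hedged sketch: a common low-memory LoRA setup, not the repo's actual script.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize base weights to 4-bit (QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",   # assumed base model
    quantization_config=bnb_config,
    device_map={"": 0},
)
model = prepare_model_for_kbit_training(model)
model.gradient_checkpointing_enable()         # trade compute for activation memory

lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
```

If the script already does all of this, the usual next levers are a smaller per-device batch size and a shorter sequence cutoff length.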
-
I'm running the Llama-3-Instruct-8B-SPPO-Iter3 model locally and am very impressed by the improved quality from the original model. I can't help but wonder what the results would be if this finetunin…
-
## Question 1
Hi, after running the script `llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_e1_gpu1_finetune.py`, I converted the saved model from the `.pth` format to the `xtuner` format; the resulting file structure is as follows:
Why does this structure differ from the released open-source model files?
**xtuner/llava-llama-3-8b-v1_1…
-
How do I finetune Falcon-7B-Instruct on inputs or outputs with a 4096-token context length?
How much VRAM will I need? (A rough sketch of one approach follows.)
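As a rough starting point (assumptions throughout: Hugging Face transformers + bitsandbytes with QLoRA-style training, which is not confirmed by the original post), you can load the model in 4-bit and tokenize at `max_length=4096`. With gradient checkpointing, a 7B QLoRA run at 4k context commonly fits in roughly 12-24 GB of VRAM, though the exact number depends on batch size and optimizer state:

```python
# Hedged sketch: preparing Falcon-7B-Instruct for 4096-token finetuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tiiuae/falcon-7b-instruct"
tok = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
    ),
    device_map={"": 0},
)
model.gradient_checkpointing_enable()

# Tokenize training examples at the longer context. Falcon was trained at
# 2048 tokens, so quality beyond that is not guaranteed without finetuning
# specifically on long sequences (assumption: you accept that risk).
batch = tok(["<long training example>"], truncation=True,
            max_length=4096, return_tensors="pt")
```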
-
Hi!
I'm trying to finetune Mistral 7B v0.2 on my GTX 1080 Max-Q.
I'm getting this error (using LLaMA Factory):
```
==((====))==  Unsloth: Fast Mistral patching release 2024.4
   \\   /|    GPU: NVID…
```
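The error text is truncated, but one plausible culprit (an assumption, not confirmed by the log) is the GPU itself: Unsloth documents a minimum CUDA compute capability of 7.0, and the GTX 1080 Max-Q is Pascal, which reports 6.1. A quick check:

```python
# Print the GPU's CUDA compute capability; Unsloth documents a minimum of
# 7.0, while a GTX 1080 Max-Q reports (6, 1).
import torch

major, minor = torch.cuda.get_device_capability(0)
print(torch.cuda.get_device_name(0), f"compute capability {major}.{minor}")
assert (major, minor) >= (7, 0), "GPU too old for Unsloth's fast kernels"
```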
-
# URL
- https://arxiv.org/abs/2310.05914
# Affiliations
- Neel Jain, N/A
- Ping-yeh Chiang, N/A
- Yuxin Wen, N/A
- John Kirchenbauer, N/A
- Hong-Min Chu, N/A
- Gowthami Somepalli, N/A
- …
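For context: arXiv:2310.05914 is the NEFTune paper, whose core idea is adding scaled uniform noise to token embeddings during instruction finetuning. A minimal sketch of that noise rule (my own illustration, not the authors' code; the function name is hypothetical):

```python
# Hedged illustration of NEFTune's noise injection (arXiv:2310.05914):
# during training, add uniform noise scaled by alpha / sqrt(L * d) to the
# token embeddings; alpha is a hyperparameter (the paper tries 5, 10, 15).
import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    # embeddings: (batch, seq_len L, hidden_dim d)
    L, d = embeddings.shape[1], embeddings.shape[2]
    scale = alpha / (L * d) ** 0.5
    noise = torch.empty_like(embeddings).uniform_(-1, 1)
    return embeddings + scale * noise
```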
-
### System Info
python==3.10.15
cuda==11.8-8.8.1
torch==2.4.0
Code: latest version
GPU: 8 × A100 40G
### Who can help?
@ziyuwan @Gebro13 @
### Information
- [X] The official example scri…
-
How do I change the context length of MPT-7B-Instruct during finetuning? (I keep getting an error that the length is limited to 2048 tokens.)
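MPT uses ALiBi positional biases, so its context window can be raised at load time through the config's `max_seq_len` field, following MosaicML's documented pattern (exact behavior depends on your transformers version):

```python
# Raise MPT-7B-Instruct's context window at load time. MPT uses ALiBi, so
# max_seq_len can be increased beyond the trained 2048, though extrapolation
# quality that far past training length is not guaranteed.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("mosaicml/mpt-7b-instruct",
                                    trust_remote_code=True)
config.max_seq_len = 4096  # default is 2048

model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b-instruct", config=config, trust_remote_code=True
)
```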
-
### Python Version
```shell
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
```
### Pip Freeze
```shell
absl-py==2.1.0
annotated-types==0.7.0
anyio==4.0.0
argon2-cffi==23.1.…
-
# Description
I wrote an inference script like this:
```python
import torch
from PIL import Image
import sys
sys.path.append('./')
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAG…