-
I followed the tutorial to install repeatedly, but I keep hitting this problem:
```text
(xtuner0.1.9) root@intern-studio:~/xtuner019/xtuner# xtuner
Traceback (most recent call last):
  File "/root/.local/bin/xtuner", line 33, in
    sys.exit(load_entry_point('xtuner'…
```
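A traceback like this at the `load_entry_point` line usually means the `xtuner` console script on your PATH points at an install that is broken or lives in a different environment than the active one. As a rough diagnostic (not part of xtuner itself — the helper name is mine), the standard-library `importlib.metadata` can list which console scripts a package actually registers in the current environment:

```python
from importlib.metadata import PackageNotFoundError, distribution

def console_scripts(package: str):
    """Return the console-script names a package registers, or None if
    the package is not installed in the current environment."""
    try:
        dist = distribution(package)
    except PackageNotFoundError:
        return None
    return sorted(ep.name for ep in dist.entry_points
                  if ep.group == "console_scripts")

# If console_scripts("xtuner") returns None, the shell is resolving the
# `xtuner` script from a different environment than the one you installed into.
```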
-
### Describe the issue
Issue/Error:
Loading 1.5 models works fine, but loading 1.6 models yields the error below. Note that the 1.6 models do load (despite the error) and inference works. However, tr…
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is this problem answered in the FAQ?
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.4.0+cu121 …
```
-
### System Info
ml.g5.12xlarge instance from AWS, with pyTorch 2.3.1, 4x A10G, CUDA 12.1
Modified dataset since I already pre-tokenized everything to avoid using time on GPU instances to reduce …
-
### System Info
python3.10 cuda2.2 24 GB VRAM
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
-
Hi, when I was doing the stage-1 training, I ran into some problems. It seems the problem is caused by CUDA_DEVICES, but I can't find the device configuration in train.py. Can you help me out?
Here …
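If `train.py` does not expose a device option, the usual way to control which GPUs CUDA sees is the `CUDA_VISIBLE_DEVICES` environment variable. A minimal sketch — the GPU indices here are just an example, and this is not taken from the project's own code:

```python
import os

# Make only GPUs 0 and 1 visible to this process. This must be set
# before the first CUDA call (i.e. before importing/initializing torch),
# otherwise it silently has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# import torch   # torch.cuda.device_count() would now report 2
# ... then launch the stage-1 training as usual
```

Setting the variable in the shell (`CUDA_VISIBLE_DEVICES=0,1 python train.py`) achieves the same thing without touching the script.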
-
I used the Docker image environment provided in the documentation. When I tried to start it with vLLM, I got the error: FlashAttention only supports Ampere GPUs or newer.
My graphics card is a 2080 Ti.
Is …
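For context: FlashAttention requires NVIDIA compute capability 8.0 (Ampere) or newer, and the 2080 Ti is a Turing card with compute capability 7.5, which is why the backend refuses to load. A quick self-check — the helper name is mine; at runtime you would pass in `torch.cuda.get_device_capability()`:

```python
def supports_flash_attention(capability):
    """FlashAttention needs compute capability 8.0 (Ampere) or newer."""
    return tuple(capability) >= (8, 0)

print(supports_flash_attention((7, 5)))  # 2080 Ti (Turing) -> False
print(supports_flash_attention((8, 6)))  # A10G / RTX 30xx  -> True
```

On unsupported cards, vLLM generally needs to be pointed at a non-FlashAttention attention backend instead.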
-
Thank you for sharing the dataset and open-source model. Ovis employs VE + Head + Tokenize (essentially a softmax) and then obtains features with the same hidden dimension for the LLM.
I remain …
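To make the "essentially a softmax" step concrete: the head produces logits over a visual vocabulary, the softmax turns them into probabilities, and a probability-weighted mix of embedding rows yields a feature with the LLM's hidden dimension. A toy sketch of that idea — all names and shapes are illustrative, not Ovis's actual code:

```python
import math

def softmax(logits):
    # numerically stable softmax over a flat list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_visual_embedding(head_logits, embedding_table):
    """Probability-weighted mix of visual-embedding rows: the result has
    the LLM hidden dimension, like a soft token lookup."""
    probs = softmax(head_logits)
    hidden = len(embedding_table[0])
    return [sum(p * row[d] for p, row in zip(probs, embedding_table))
            for d in range(hidden)]

# Uniform logits over a 2-entry visual vocabulary -> average of the rows.
print(soft_visual_embedding([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))  # [0.5, 0.5]
```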
-
### Your current environment
```text
Failed to import from vllm._C with ImportError("/usr/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /tmp/.conda/envs/vllm_env/lib/python3.10/…
```
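This ImportError means the prebuilt `vllm._C` extension was compiled against glibc >= 2.32 while the host ships an older glibc; the usual fixes are a newer base image/OS or a wheel built for the host. To see what the host actually provides, a small stdlib check (my own helper; it returns None when it cannot tell, e.g. on musl-based systems):

```python
import platform

def glibc_at_least(required="2.32"):
    """True/False if the host glibc satisfies `required`; None if the
    libc is not glibc or the version cannot be determined."""
    libc, version = platform.libc_ver()
    if libc != "glibc" or not version:
        return None
    found = tuple(int(p) for p in version.split("."))
    needed = tuple(int(p) for p in required.split("."))
    return found >= needed

print(glibc_at_least("2.32"))
```

The shell equivalent is `ldd --version`, which prints the glibc version on the first line.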