-
This is my modified fine-tuning config script:
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (Checkpoi…
-
**Describe the bug**
After fine-tuning, the model's output degenerates into endless repetition of the phrase 相当于一天半的时间 ("equivalent to a day and a half"), e.g.: 相当于一天半的时间,相当于一天半的时间,相当于一天半的时间,…
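Degenerate repetition like this is often mitigated at decoding time with a repetition penalty. As an illustrative aside (not part of the reporter's config), a minimal sketch of the CTRL-style penalty on raw logits:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """CTRL-style repetition penalty.

    For every token id that has already been generated, divide its
    logit by `penalty` if positive, or multiply it by `penalty` if
    negative, so the token is less likely to be sampled again.
    """
    out = list(logits)
    for tid in set(generated_ids):
        out[tid] = out[tid] / penalty if out[tid] > 0 else out[tid] * penalty
    return out
```

With Hugging Face `transformers`, the equivalent knob is `model.generate(..., repetition_penalty=1.2)`; whether the penalty masks the symptom or the fine-tuning data itself causes the looping would still need checking.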
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
`Traceback (most recent call last):
File "/home/orbbec/VLM/qwen/vllm_test.py", line 11, in <module>
llm = LLM(model="/home/orbbec/VLM/qwen/model/qwen1.5/Qwen1.5-7B-Chat",
File "/usr/local/lib/pytho…
-
### Installation Method | 安装方法与平台
OneKeyInstall (one-key install script, Windows)
### Version | 版本
Latest | 最新版
### OS | 操作系统
Windows
### Describe the bug | 简述
Traceback (most recent call last):
File ".\requ…
-
### System Info
Hello, I am running the qwen-1.5-0.5B-Chat model. According to https://modelscope.cn/models/qwen/Qwen1.5-0.5B-Chat/summary, in the Quickstart section,
```python3
from modelscope im…
-
Model used: qwen1.5 7b
Command run: `python3 -m mlx_lm.lora --model models/Qwen1.5-7B-Chat --data data/ --train --iters 1000 --batch-size 8 --lora-layers 12`
Question: did this run out of memory, causing the Qwen1.5-7B-Chat-Adapters not to be written to mod…
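If this is an out-of-memory failure, the usual workaround is to shrink the batch size and the number of LoRA layers. A hedged sketch reusing only the flags already present in the command above (the model and data paths are the reporter's own; the specific values are illustrative):

```shell
# Reduce memory pressure: smaller batch, fewer LoRA layers.
# Same flags as the original command, only the values change.
python3 -m mlx_lm.lora --model models/Qwen1.5-7B-Chat --data data/ \
    --train --iters 1000 --batch-size 1 --lora-layers 4
```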
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
### Describe the bug
Using lmdeploy lite auto_a…
-
### System Info
GPU Name: NVIDIA A800
TensorRT-LLM: 0.10.0
Nvidia Driver: 535.129.03
OS: Ubuntu 22.04
triton-inference-server backend: tensorrtllm_backend
### Who can help?
_No response_
### I…
-
# Qwen1.5-MoE Support
With the increasing attention on mixture-of-experts (MoE) models, especially following the advancements heralded by Mixtral, I propose considering the integration of the Qwen1.5…