-
Thanks!
-
The package complains about "torch" not being installed when it is most definitely installed.
(.env) chris@localhost:~$ pip install flash_attn
Collecting flash_attn
Using cached fla…
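A likely cause (an assumption, since the log is truncated): flash-attn's setup.py imports torch at build time, and pip's default build isolation builds the package in a clean environment where torch is absent. A minimal sketch of the usual workaround:

```python
import importlib.util
import sys

# Confirm torch is importable from the interpreter pip will use.
torch_found = importlib.util.find_spec("torch") is not None

# flash-attn's setup.py imports torch while building; under pip's
# default build isolation that import runs in an isolated environment
# without torch, producing a "torch not installed" error even though
# torch is present in the virtualenv. Disabling build isolation lets
# the build see the existing torch install (command shown, not run):
install_cmd = [sys.executable, "-m", "pip", "install",
               "flash-attn", "--no-build-isolation"]
print(" ".join(install_cmd[2:]))
```

If torch really is importable from the same interpreter, rerunning the install with `--no-build-isolation` usually resolves this.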
-
Running the following code raises an error; what could be the cause?
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "/mmu_cd_ssd/zhangce07/MLLM/Qwen/Qwen-VL-Chat/"
qwen_model = AutoModelForCausalLM.from_pretrained(model_p…
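Without the actual traceback it is hard to say, but a common failure when loading Qwen-VL-Chat is omitting `trust_remote_code=True`, since the repo ships custom model code. A minimal loading sketch (the path argument is illustrative):

```python
def load_qwen_vl(model_path: str):
    """Load Qwen-VL-Chat with trust_remote_code=True, which the
    model's custom code requires (omitting it is a common error)."""
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained(
        model_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_path, device_map="auto", trust_remote_code=True).eval()
    return tokenizer, model
```

Posting the full error message would make it possible to give a more specific answer.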
-
File: https://github.com/QwenLM/Qwen-Agent/blob/main/examples/qwen2vl_function_calling.py
Qwen-Agent was installed locally via `git clone` on 2024-09-09.
The 7b-int4 model is served with vLLM using the image from https://hub.docker.com/r/qwenllm/qwenvl
# …
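For context, qwen2vl_function_calling.py talks to the vLLM server through an OpenAI-compatible chat endpoint; a sketch of the kind of request body involved (the model and tool names below are illustrative assumptions, not taken from the script):

```python
import json

# Hypothetical function-calling payload for an OpenAI-compatible
# /v1/chat/completions endpoint served by vLLM. Model and tool names
# are placeholders for illustration only.
payload = {
    "model": "Qwen2-VL-7B-Instruct-GPTQ-Int4",
    "messages": [{"role": "user", "content": "What time is it?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_time",  # hypothetical tool
            "description": "Return the current time.",
            "parameters": {"type": "object", "properties": {}},
        },
    }],
}

# The body must serialize cleanly to JSON before being POSTed.
body = json.dumps(payload)
```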
-
### System Info
CUDA 12.2, torch 2.1
### Who can help?
@byshiue
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in th…
-
### Your current environment
The output of `python collect_env.py`
```text
Your output of `python collect_env.py` here
```
### Model Input Dumps
_No response_
### 🐛 Describe the bug
…
-
When running `app_gui()` in examples/assistant_rag.py, clicking the submit button produces: SyntaxError: Unexpected token 'I', "Internal S"... is not valid JSON
I have already run pip install -U "qwen-agent[rag,code_interpreter,python_executor,gui]"; qwen-a…
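That "Unexpected token 'I'" message is the frontend failing to parse a non-JSON response: if the backend replies with the plain text "Internal Server Error" (an HTTP 500), JSON parsing fails on its first character. Reproduced in Python for illustration:

```python
import json

# If the server replies with plain text instead of JSON (e.g. an
# HTTP 500 page), the client-side JSON parse fails on the first
# character, 'I' -- exactly the reported "Unexpected token 'I'".
reply = "Internal Server Error"
try:
    json.loads(reply)
except json.JSONDecodeError as exc:
    print(exc.pos)  # fails at position 0
```

So the JSON error is a symptom; the real bug is whatever makes the backend return a 500, which the server-side logs should show.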
-
I tried to run this with Gemma 2 27B-it and found that it doesn't quite work. I verified that everything works with qwen/qwen-1_8b-chat.
I get this error message:
```Assertion error: All scores…
-
Great job!
[Qwen](https://huggingface.co/Qwen/Qwen-14B) is an open-source model widely used by the community. Does this support training that model?
-
1. Why does GOT-OCR2.0 use only ViTDet-80M with Qwen-0.5B? Would a higher parameter count increase the model's accuracy? Are there any ways to demonstrate and quantify this?
2. Are there any wa…
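One way to make the parameter-count question concrete is a back-of-envelope estimate for a dense decoder, using the standard ~12·d²-per-layer approximation. A rough sketch (the config values below are Qwen-0.5B-like assumptions for illustration, not figures from the GOT-OCR2.0 paper):

```python
def approx_decoder_params(n_layers: int, d_model: int, vocab: int) -> int:
    """Rough dense-transformer parameter count: ~4*d^2 for attention
    plus ~8*d^2 for the MLP per layer, plus the token embeddings."""
    per_layer = 12 * d_model * d_model
    embeddings = vocab * d_model
    return n_layers * per_layer + embeddings

# Qwen-0.5B-like configuration (assumed values for illustration):
total = approx_decoder_params(n_layers=24, d_model=1024, vocab=151936)
print(f"{total / 1e9:.2f}B")  # roughly half a billion parameters
```

Estimates like this show that scaling width or depth grows parameters quadratically or linearly, but whether that translates into OCR accuracy gains would need an ablation, not a formula.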