-
Using the Qwen-1.5 model.
Command:
> python run_pipeline.py
--prompt "Does this movie review contain a spoiler? answer Yes or No"
--task_description " Your task is to check if a movie review contains sp…
-
Building this encounters the following:
```
[...]
#########################################
Compiling LLM runtime for Linux...
#########################################
-- The C compiler i…
-
I am using my own Qwen token; I added it, but it still cannot be used. Which setting did I get wrong?
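For reference, a minimal sketch of how a key is typically registered, assuming the token in question is a DashScope API key used through the `dashscope` SDK (the actual setup in the report may differ, and the key placeholder below is not real):
```python
import os

import dashscope

# Assumption: the "Qwen token" is a DashScope API key. It can be exported
# as DASHSCope_API_KEY... corrected: DASHSCOPE_API_KEY, or assigned directly.
dashscope.api_key = os.getenv("DASHSCOPE_API_KEY", "sk-your-key-here")

# Quick check that the key is accepted by a Qwen model call.
response = dashscope.Generation.call(model="qwen-turbo", prompt="Hello")
print(response.status_code, response.message)
```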
-
### What happened + What you expected to happen
import ray
ray.init("ray://10.157.148.2:6379")
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00…
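One thing worth checking, stated as an assumption about the setup: the `ray://` scheme connects to the Ray Client server, which listens on port 10001 by default, while 6379 is the GCS port, so a client connection string usually looks like this:
```python
import ray

# Assumption: a Ray Client server is running on the head node. ray:// talks
# to that server (port 10001 by default), not to the GCS port 6379 used in
# the original call; the IP is the one from the report.
ray.init("ray://10.157.148.2:10001")
print(ray.cluster_resources())
```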
-
I have set both the input max length and the output max length to 128. Generation is very slow, taking about 40 minutes to produce one sentence. I am using the Qwen-2.5 7B model. Is …
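As a point of comparison, a minimal sketch of a plain generation call with the output length capped, assuming the model is loaded through Hugging Face transformers (model id and prompt are illustrative, not taken from the report):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model id; the report mentions a Qwen-2.5 7B model.
model_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
# max_new_tokens bounds only the generated tokens; it is independent of the
# 128-token input limit mentioned above.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```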
-
# device information
```
jtop 4.2.12 - (c) 2024, Raffaello Bonghi [raffaello@rnext.it]
Website: https://rnext.it/jetson_stats
Platform
Serial Number: [s|XX CLICK TO READ XXX]
Machine: aarch64
Hardware
Sy…
-
Calling response=bot.run(history_messages, lang='zh') for streaming output does not really stream; it behaves more like an iterator that re-emits the concatenated response so far (a delta-printing sketch follows the output below):
[
[
{
"role": "assistant",
"content": "我是"
}
],
[
{
…
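A minimal sketch of recovering per-chunk deltas from that behaviour, assuming each item yielded by `bot.run()` carries the cumulative assistant content so far, as the output above suggests; `bot` and `history_messages` are the objects from the report:
```python
# Print only the newly generated suffix on each iteration instead of the
# whole accumulated response.
previous = ""
for messages in bot.run(history_messages, lang="zh"):
    current = messages[-1]["content"]
    print(current[len(previous):], end="", flush=True)
    previous = current
print()
```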
-
OpenAI API error:
``` API Error: Status Code 400, {"object":"error","message":"Only allowed now, your model Qwen2-7B","code":40301} ```
Docker deployment:
``` docker run -d \
--network host \
-v /data/d…
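For what it's worth, a minimal sketch of a matching client call, assuming the container exposes an OpenAI-compatible server that only serves the model registered as "Qwen2-7B", which is what the 400 response suggests; the base URL and API key are placeholders:
```python
from openai import OpenAI

# Placeholders: adjust base_url/api_key to the deployed endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    # The model field should match the name the server actually serves
    # ("Qwen2-7B" here); a mismatched name is a common cause of errors
    # like the one quoted above.
    model="Qwen2-7B",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```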
-
If I run this code:
```python
import torch
from transformers import Qwen2VLForConditionalGeneration

# Load Qwen2-VL with FlashAttention 2 enabled
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
…
-
Qwen-VL ([ArXiv](https://arxiv.org/abs/2308.12966), [GitHub](https://github.com/QwenLM/Qwen-VL), [HuggingFace](https://huggingface.co/Qwen/Qwen-VL)) shows very promising results on various tasks, would…