THUDM / LongCite

LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA
Apache License 2.0

Is streaming output not supported? #6

Open CaoChensy opened 2 months ago

CaoChensy commented 2 months ago

Is streaming output not supported?

polanwang404 commented 2 months ago

Setting `use_vllm = True  # set True to use vllm for inference` in demo.py is enough; remember to install vllm.
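
For reference, the flag being described is a one-line change in demo.py; a minimal sketch (the exact surrounding code may differ), after installing vllm with `pip install vllm`:

```python
# demo.py: choose the inference backend
use_vllm = True  # set True to use vllm for inference
```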

CaoChensy commented 2 months ago

Thanks!

leoterry-ulrica commented 2 months ago

> Setting `use_vllm = True  # set True to use vllm for inference` in demo.py is enough; remember to install vllm.

It doesn't work.

Neo-Zhangjiajie commented 2 months ago

Since we need to post-process the LLM output to match citation numbers such as [6-8] with the context sentences, it currently does not support streaming output.
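
For context, here is a minimal sketch of what such a post-processing pass might look like, assuming the model emits citation markers of the form [6-8] that index into a list of numbered context sentences; the function name and marker format are illustrative assumptions, not LongCite's actual code:

```python
import re

# Markers like "[6]" or "[6-8]" referencing numbered context sentences
# (an assumed format; LongCite's real output schema may differ).
CITE_PATTERN = re.compile(r"\[(\d+)(?:-(\d+))?\]")

def resolve_citations(answer: str, sentences: list[str]) -> list[dict]:
    """Map each citation marker in the finished answer to its sentences."""
    resolved = []
    for match in CITE_PATTERN.finditer(answer):
        start = int(match.group(1))
        end = int(match.group(2) or match.group(1))  # "[6]" -> range 6..6
        resolved.append({
            "marker": match.group(0),
            "sentences": sentences[start : end + 1],
        })
    return resolved
```

Because a marker such as [6-8] can arrive split across streamed tokens ("[6", "-8]"), this pass needs the complete model output before it can run, which is why the demo collects the full response instead of streaming it.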

leoterry-ulrica commented 1 month ago

> Since we need to post-process the LLM output to match citation numbers such as [6-8] with the context sentences, it currently does not support streaming output.

@zRzRzRzRzRzRzR said it's not supported.

polanwang404 commented 1 month ago

> > Setting `use_vllm = True  # set True to use vllm for inference` in demo.py is enough; remember to install vllm.
>
> It doesn't work.

But after I set `use_vllm = True`, it did run and achieved a result similar to the demo video. [screenshot]

leoterry-ulrica commented 1 month ago

> > > Setting `use_vllm = True  # set True to use vllm for inference` in demo.py is enough; remember to install vllm.
> >
> > It doesn't work.
>
> But after I set `use_vllm = True`, it did run and achieved a result similar to the demo video. [screenshot]

That's unrelated to streaming output.