2noise / ChatTTS

A generative speech model for daily dialogue.
https://2noise.com
GNU Affero General Public License v3.0

How to use vLLM? #780

Open · yikchunnnn opened this issue 1 month ago

yikchunnnn commented 1 month ago

The README only mentions 'pip install vllm...'

May I know how I should proceed to enable vLLM?

Do I need to do anything after installing vLLM, or will ChatTTS automatically use vLLM when it detects that it is available?

Thank you in advance.

fumiama commented 1 month ago

vLLM support is still under test and development, and ONLY BASIC INFERENCE is available. It will not be enabled by default; if you are a developer, you can easily find how to enable it by looking at the parameters of Chat.load.
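A minimal sketch of what that could look like, assuming the flag is exposed as a `use_vllm` keyword on `Chat.load` (the exact parameter name is an assumption here; check the signature of `Chat.load` in your installed version):

```python
# Sketch: load ChatTTS with the vLLM backend enabled.
# `use_vllm` is an assumed parameter name; verify it against Chat.load's signature.
import ChatTTS

chat = ChatTTS.Chat()
chat.load(compile=False, use_vllm=True)

texts = ["Hello, this is a quick vLLM-backed test."]
wavs = chat.infer(texts)
```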

the-nine-nation commented 3 weeks ago

Using vLLM leads to an error:

ImportError: cannot import name 'LogicalTokenBlock' from 'vllm.block' (/root/miniconda3/envs/py39/lib/python3.9/site-packages/vllm/block.py)

How can I fix it?
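Not a definitive fix, but this kind of ImportError usually points to a vLLM version mismatch: `LogicalTokenBlock` was removed from `vllm.block` in newer releases, so the installed vLLM is likely newer than the one the integration was written against. A quick way to see what is installed before pinning a compatible release (the required version is not stated in this thread):

```python
# Sketch: print the installed vLLM version to compare against the
# version the ChatTTS vLLM integration expects (not stated here).
from importlib.metadata import version, PackageNotFoundError

try:
    print("vllm", version("vllm"))
except PackageNotFoundError:
    print("vllm is not installed")
```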

superstring commented 3 weeks ago

> vLLM support is still under test and development, and ONLY BASIC INFERENCE is available. It will not be enabled by default; if you are a developer, you can easily find how to enable it by looking at the parameters of Chat.load.

Does this mean that zero-shot inference is not supported yet?