Traceback (most recent call last):
  File "/workspace/ChatTTS/examples/cmd/stream.py", line 189, in <module>
    chat.load(compile=False, use_flash_attn=True, use_vllm=True)
  File "/workspace/ChatTTS/ChatTTS/core.py", line 134, in load
    return self._load(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/ChatTTS/ChatTTS/core.py", line 291, in _load
    gpt.prepare(compile=compile and "cuda" in str(device))
  File "/workspace/ChatTTS/ChatTTS/model/gpt.py", line 189, in prepare
    self.gpt = self.gpt.to(dtype=torch.float16)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'GPT' object has no attribute 'gpt'
def prepare(self, compile=False):
    if self.use_flash_attn and is_flash_attn_2_available():
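        # AttributeError raised here when use_vllm=True: self.gpt was never assigned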
        self.gpt = self.gpt.to(dtype=torch.float16)
    if compile and not self.is_te_llama and not self.is_vllm:
        try:
            self.compile(backend="inductor", dynamic=True)
            self.gpt.compile(backend="inductor", dynamic=True)
        except RuntimeError as e:
            self.logger.warning(f"compile failed: {e}. fallback to normal mode.")
The prepare() snippet above is from ChatTTS/model/gpt.py.
When use_vllm is enabled, self.gpt is never defined, so the AttributeError above is raised as soon as prepare() executes.
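
One possible fix, as a minimal sketch rather than a tested patch: since the vLLM path never assigns self.gpt, prepare() could return early when self.is_vllm is set (a flag the existing compile condition already consults). With the early return, the `not self.is_vllm` clause in the compile check becomes redundant and is dropped here:

def prepare(self, compile=False):
    # Sketch of a guard: under vLLM the engine manages the model weights and
    # self.gpt is never assigned, so skip the fp16 cast and compile steps.
    if self.is_vllm:
        return
    if self.use_flash_attn and is_flash_attn_2_available():
        self.gpt = self.gpt.to(dtype=torch.float16)
    if compile and not self.is_te_llama:
        try:
            self.compile(backend="inductor", dynamic=True)
            self.gpt.compile(backend="inductor", dynamic=True)
        except RuntimeError as e:
            self.logger.warning(f"compile failed: {e}. fallback to normal mode.")

Alternatively, the call site in core.py (_load) could skip calling gpt.prepare() entirely when use_vllm is set.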