python app/hydit_app.py
flash_attn import failed: No module named 'flash_attn'
2024-06-14 12:21:58.616 | INFO | hydit.inference:__init__:160 - Got text-to-image model root path: ckpts/t2i
2024-06-14 12:21:58.628 | INFO | hydit.inference:__init__:169 - Loading CLIP Text Encoder...
2024-06-14 12:22:01.049 | INFO | hydit.inference:__init__:172 - Loading CLIP Text Encoder finished
2024-06-14 12:22:01.049 | INFO | hydit.inference:__init__:175 - Loading CLIP Tokenizer...
2024-06-14 12:22:01.085 | INFO | hydit.inference:__init__:178 - Loading CLIP Tokenizer finished
2024-06-14 12:22:01.085 | INFO | hydit.inference:__init__:181 - Loading T5 Text Encoder and T5 Tokenizer...
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
/home/swoole/anaconda3/envs/HunyuanDiT/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py:550: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.
warnings.warn(
You are using a model of type mt5 to instantiate a model of type t5. This is not supported for all configurations of models and can yield errors.
2024-06-14 12:22:15.378 | INFO | hydit.inference:__init__:185 - Loading t5_text_encoder and t5_tokenizer finished
2024-06-14 12:22:15.378 | INFO | hydit.inference:__init__:188 - Loading VAE...
2024-06-14 12:22:15.473 | INFO | hydit.inference:__init__:191 - Loading VAE finished
2024-06-14 12:22:15.473 | INFO | hydit.inference:__init__:195 - Building HunYuan-DiT model...
Traceback (most recent call last):
  File "app/hydit_app.py", line 26, in <module>
    args, gen, enhancer = inferencer()
  File "/home/swoole/workspace/HunyuanDiT/./sample_t2i.py", line 17, in inferencer
    gen = End2End(args, models_root_path)
  File "/home/swoole/workspace/HunyuanDiT/./hydit/inference.py", line 205, in __init__
    model_path = model_dir / f"pytorch_model_{self.args.load_key}.pt"
NameError: name 'model_dir' is not defined
Reproduction steps
What command or script did you run?
python app/hydit_app.py
Did you make any modifications to the code or the config? Do you understand what you modified?
No modifications were made.
What dataset did you use?
None.
Environment
Please run python utils/collect_env.py to collect the necessary environment information and paste it here.
python utils/collect_env.py
sys.platform: linux
Python: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 4090
CUDA_HOME: None
GCC: gcc (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
PyTorch: 1.13.1
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2022.1-Product Build 20220311 for Intel(R) 64 architecture applications
TorchVision: 0.14.1+cu117
Bug fix
If you have already identified the cause, you can share that information here. If you are willing to open a PR with a fix, please also leave a note here; we would greatly appreciate it!
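Possible cause (not verified against the current code): line 205 of hydit/inference.py references model_dir before it is ever assigned. Based only on the model root path reported in the log (ckpts/t2i), below is a minimal sketch of the path that line 205 presumably expects; the "model" subdirectory name and the "ema" load key are assumptions for illustration, not taken from the repository.

from pathlib import Path

# Hypothetical sketch only: reconstructs the checkpoint path that line 205 appears
# to expect. The "model" subdirectory and the "ema" load key are assumptions.
root = Path("ckpts/t2i")                                 # model root path from the log above
load_key = "ema"                                         # assumed default of --load-key
model_dir = root / "model"                               # assumed checkpoint subdirectory
model_path = model_dir / f"pytorch_model_{load_key}.pt"
print(model_path)                                        # -> ckpts/t2i/model/pytorch_model_ema.pt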