tyxsspa / AnyText

Official implementation of the paper "AnyText: Multilingual Visual Text Generation And Editing"
Apache License 2.0

Installed following the official instructions, but got errors #11

Open chaorenai opened 9 months ago

chaorenai commented 9 months ago

2023-12-29 15:19:56,915 - modelscope - WARNING - ('PIPELINES', 'my-anytext-task', 'my-custom-pipeline') not found in ast index file
2023-12-29 15:19:56,915 - modelscope - INFO - initiate model from C:\Users\sunny\.cache\modelscope\hub\damo\cv_anytext_text_generation_editing
2023-12-29 15:19:56,915 - modelscope - INFO - initiate model from location C:\Users\sunny\.cache\modelscope\hub\damo\cv_anytext_text_generation_editing.
2023-12-29 15:19:56,916 - modelscope - INFO - initialize model from C:\Users\sunny\.cache\modelscope\hub\damo\cv_anytext_text_generation_editing
2023-12-29 15:19:56,919 - modelscope - WARNING - ('MODELS', 'my-anytext-task', 'my-custom-model') not found in ast index file
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)
    Python 3.10.11 (you have 3.10.6)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.

I installed following the official instructions and got the errors above.


tyxsspa commented 9 months ago

> PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)

Hi, please check whether the PyTorch you installed is the CPU build.
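A quick way to check from inside the same environment (just a sketch, not part of the repo):

```python
import torch

# A CUDA build prints something like "2.0.1+cu118"; "+cpu" means the CPU-only wheel.
print(torch.__version__)

# False on a CPU-only build (or if no CUDA device/driver is visible).
print(torch.cuda.is_available())
```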

i-am-vt commented 9 months ago

I have the same error as you, and it seems the officially provided setup does not work as expected. I had to manually run `conda install pytorch==2.0.1 torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia` and set `KMP_DUPLICATE_LIB_OK=TRUE` to make it work. Then I hit a second issue, a charset problem; if you run into it too, add `encoding='UTF-8'` to all the `open()` calls.
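For reference, a minimal sketch of that encoding fix, assuming the failure comes from the Windows default codepage (gbk/cp936) being used when reading UTF-8 files; the file name below is just a placeholder, not an actual file in the repo:

```python
# Pass the encoding explicitly instead of relying on the locale default.
with open("some_utf8_file.txt", "r", encoding="UTF-8") as f:  # placeholder path
    text = f.read()
```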

StarLuckLee commented 9 months ago

> I have the same error as you, and it seems the officially provided setup does not work as expected. I had to manually run `conda install pytorch==2.0.1 torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia` and set `KMP_DUPLICATE_LIB_OK=TRUE` to make it work. Then I hit a second issue, a charset problem; if you run into it too, add `encoding='UTF-8'` to all the `open()` calls.

I ran into this problem too. How exactly do I do the "set `KMP_DUPLICATE_LIB_OK=TRUE`" step?

i-am-vt commented 9 months ago


> I ran into this problem too. How exactly do I do the "set `KMP_DUPLICATE_LIB_OK=TRUE`" step?

Set it as an environment variable: for example, use `set KMP_DUPLICATE_LIB_OK=TRUE` on Windows or `export KMP_DUPLICATE_LIB_OK=TRUE` on Unix-like systems.
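If you would rather not touch the shell configuration, here is a minimal sketch of the same workaround from Python (keep in mind this is the "unsafe, unsupported" escape hatch the OMP message describes):

```python
import os

# Must be set before importing torch or anything else that loads the OpenMP
# runtime, otherwise the duplicate-runtime check has already triggered.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

import torch  # import heavy libraries only after the variable is set
```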

StarLuckLee commented 9 months ago


> Set it as an environment variable: for example, use `set KMP_DUPLICATE_LIB_OK=TRUE` on Windows or `export KMP_DUPLICATE_LIB_OK=TRUE` on Unix-like systems.

Thank you so much for your reply and guidance; it's like you pulled me out of a bottomless pit of darkness. Thanks!