Open libhot opened 10 months ago
Hi, the code and documentation are still being updated and tested; please wait. This error occurs because you need to provide the font file yourself. We recommend Arial Unicode MS; place it here:
mv your/path/to/arialuni.ttf AnyText/font/Arial_Unicode.ttf
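A quick way to sanity-check the file after placing it (stdlib only; the path is illustrative and assumes you run this from the directory containing the AnyText checkout):

```python
import os

def looks_like_ttf(path: str) -> bool:
    """Cheap sanity check: the file exists and starts with a TrueType/OpenType magic number."""
    if not os.path.isfile(path):
        return False
    with open(path, "rb") as f:
        magic = f.read(4)
    # 0x00010000 = TrueType, b"OTTO" = CFF-based OpenType, b"true" = legacy Apple TrueType
    return magic in (b"\x00\x01\x00\x00", b"OTTO", b"true")

print(looks_like_ttf("AnyText/font/Arial_Unicode.ttf"))
```

This only rules out a missing or obviously non-font file (e.g. an HTML error page saved as .ttf); a file that passes can still fail in FreeType if it is truncated.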
It runs successfully now! I thought all font files were supported, but I found no font-selection option in the WebUI.
Even after adding the font, the same errors appeared.
Hi, please make sure the path and file name of your .ttf are exactly: AnyText/font/Arial_Unicode.ttf
Download link: https://ultralytics.com/assets/Arial.ttf — then rename it to Arial_Unicode.ttf.
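One caveat: Arial.ttf from that link covers far fewer glyphs than Arial Unicode MS, so non-Latin text may not render. A sketch of the placement step, assuming you run it from the directory containing the AnyText checkout (paths are illustrative):

```shell
# Create the expected font directory and move the downloaded file into place.
mkdir -p AnyText/font
if [ -f Arial.ttf ]; then
    mv Arial.ttf AnyText/font/Arial_Unicode.ttf
fi
```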
After renaming the .ttf, it still doesn't work. It shows:
(anytext) G:\Anytext\AnyText>python inference.py
2024-01-03 22:58:10,098 - modelscope - INFO - PyTorch version 2.0.1 Found.
2024-01-03 22:58:10,098 - modelscope - INFO - TensorFlow version 2.13.0 Found.
2024-01-03 22:58:10,098 - modelscope - INFO - Loading ast index from C:\Users\JG.cache\modelscope\ast_indexer
2024-01-03 22:58:10,348 - modelscope - INFO - Loading done! Current index file version is 1.10.0, with md5 a190ecf1c6e4ccd5685917837e010a25 and a total number of 946 components indexed
Traceback (most recent call last):
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\util\connection.py", line 72, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "G:\anaconda\envs\anytext\lib\socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\connectionpool.py", line 1058, in _validate_conn
    conn.connect()
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000002BA0D2EDE70>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "G:\anaconda\envs\anytext\lib\site-packages\requests\adapters.py", line 486, in send
    resp = conn.urlopen(
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\connectionpool.py", line 827, in urlopen
    return self.urlopen(
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\connectionpool.py", line 827, in urlopen
    return self.urlopen(
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\connectionpool.py", line 799, in urlopen
    retries = retries.increment(
  File "G:\anaconda\envs\anytext\lib\site-packages\urllib3\util\retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.modelscope.cn', port=443): Max retries exceeded with url: /api/v1/models/damo/cv_anytext_text_generation_editing?Revision=v1.1.0 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000002BA0D2EDE70>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\pipelines\util.py", line 35, in is_official_hub_path
    impl = HubApi().get_model(path, revision=revision)
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\hub\api.py", line 233, in get_model
    r = self.session.get(path, cookies=cookies,
  File "G:\anaconda\envs\anytext\lib\site-packages\requests\sessions.py", line 602, in get
    return self.request("GET", url, **kwargs)
  File "G:\anaconda\envs\anytext\lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "G:\anaconda\envs\anytext\lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "G:\anaconda\envs\anytext\lib\site-packages\requests\adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='www.modelscope.cn', port=443): Max retries exceeded with url: /api/v1/models/damo/cv_anytext_text_generation_editing?Revision=v1.1.0 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000002BA0D2EDE70>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "G:\Anytext\AnyText\inference.py", line 3, in <module>
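The root error here is gaierror [Errno 11001]: the hostname www.modelscope.cn could not be resolved, which points to a DNS/network/proxy problem rather than the font. A minimal stdlib check to confirm connectivity, assuming nothing about AnyText itself:

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if DNS resolution for the host succeeds; gaierror ([Errno 11001] on Windows) otherwise."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

print("modelscope resolvable:", can_resolve("www.modelscope.cn"))
```

If this prints False, fix DNS or proxy settings (e.g. set HTTP_PROXY/HTTPS_PROXY environment variables) before rerunning inference.py.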
(anytext) G:\Anytext\AnyText>python inference.py
2024-01-03 22:59:17,613 - modelscope - INFO - PyTorch version 2.0.1 Found.
2024-01-03 22:59:17,623 - modelscope - INFO - TensorFlow version 2.13.0 Found.
2024-01-03 22:59:17,623 - modelscope - INFO - Loading ast index from C:\Users\JG.cache\modelscope\ast_indexer
2024-01-03 22:59:17,873 - modelscope - INFO - Loading done! Current index file version is 1.10.0, with md5 a190ecf1c6e4ccd5685917837e010a25 and a total number of 946 components indexed
2024-01-03 22:59:21,591 - modelscope - INFO - Use user-specified model revision: v1.1.0
2024-01-03 22:59:34,200 - modelscope - WARNING - ('PIPELINES', 'my-anytext-task', 'my-custom-pipeline') not found in ast index file
2024-01-03 22:59:34,200 - modelscope - INFO - initiate model from C:\Users\JG.cache\modelscope\hub\damo\cv_anytext_text_generation_editing
2024-01-03 22:59:34,200 - modelscope - INFO - initiate model from location C:\Users\JG.cache\modelscope\hub\damo\cv_anytext_text_generation_editing.
2024-01-03 22:59:34,210 - modelscope - INFO - initialize model from C:\Users\JG.cache\modelscope\hub\damo\cv_anytext_text_generation_editing
2024-01-03 22:59:34,210 - modelscope - WARNING - ('MODELS', 'my-anytext-task', 'my-custom-model') not found in ast index file
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)
Python 3.10.11 (you have 3.10.6)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
ControlLDM: Running in eps-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Loaded model config from [models_yaml/anytext_sd15.yaml]
Traceback (most recent call last):
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\utils\registry.py", line 210, in build_from_cfg
    return obj_cls._instantiate(**args)
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\models\base\base_model.py", line 67, in _instantiate
    return cls(**kwargs)
  File "C:\Users\JG.cache\modelscope\modelscope_modules\cv_anytext_text_generation_editing\ms_wrapper.py", line 43, in __init__
    self.init_model(**kwargs)
  File "C:\Users\JG.cache\modelscope\modelscope_modules\cv_anytext_text_generation_editing\ms_wrapper.py", line 222, in init_model
    self.model = create_model(cfg_path, cond_stage_path=clip_path).cuda().eval()
  File "G:\anaconda\envs\anytext\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 126, in cuda
    return super().cuda(device=device)
  File "G:\anaconda\envs\anytext\lib\site-packages\torch\nn\modules\module.py", line 905, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "G:\anaconda\envs\anytext\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\anaconda\envs\anytext\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\anaconda\envs\anytext\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "G:\anaconda\envs\anytext\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "G:\anaconda\envs\anytext\lib\site-packages\torch\nn\modules\module.py", line 905, in <lambda>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\utils\registry.py", line 212, in build_from_cfg
    return obj_cls(**args)
  File "C:\Users\JG.cache\modelscope\modelscope_modules\cv_anytext_text_generation_editing\ms_wrapper.py", line 320, in __init__
    super().__init__(model=model, auto_collate=False)
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\pipelines\base.py", line 99, in __init__
    self.model = self.initiate_single_model(model)
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\pipelines\base.py", line 53, in initiate_single_model
    return Model.from_pretrained(
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\models\base\base_model.py", line 183, in from_pretrained
    model = build_model(model_cfg, task_name=task_name)
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\models\builder.py", line 35, in build_model
    model = build_from_cfg(
  File "G:\anaconda\envs\anytext\lib\site-packages\modelscope\utils\registry.py", line 215, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
AssertionError: MyCustomModel: Torch not compiled with CUDA enabled
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "G:\Anytext\AnyText\inference.py", line 3, in <module>
Is it only available on computers with an NVIDIA GPU?
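The AssertionError ("Torch not compiled with CUDA enabled") means the installed PyTorch is a CPU-only build (the xFormers warning above also shows "you have 2.0.1+cpu"), while the pipeline calls .cuda() during model creation. A minimal diagnostic, assuming only that PyTorch may be installed:

```python
# Quick check: does this environment have a CUDA-enabled PyTorch build?
try:
    import torch
    print("torch version:", torch.__version__)      # a "+cpu" suffix means no CUDA support
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed in this environment")
```

If it reports a "+cpu" build or False, reinstall PyTorch with CUDA support (matching your driver's CUDA version) before running inference.py.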
python demo.py
2023-12-27 13:25:47,978 - modelscope - INFO - PyTorch version 2.1.2 Found.
2023-12-27 13:25:47,981 - modelscope - INFO - TensorFlow version 2.15.0.post1 Found.
2023-12-27 13:25:47,981 - modelscope - INFO - Loading ast index from /share/model/cv_anytext_text_generation_editing/ast_indexer
2023-12-27 13:25:48,094 - modelscope - INFO - Loading done! Current index file version is 1.10.0, with md5 099672c06c5dce8e4240f79ebe0fd960 and a total number of 946 components indexed
2023-12-27 13:25:52,146 - modelscope - INFO - Use user-specified model revision: v1.1.0
Downloading: 100%|██████████| 5.34G/5.34G [02:42<00:00, 35.3MB/s]
Downloading: 100%|██████████| 7.34G/7.34G [03:31<00:00, 37.2MB/s]
Downloading: 100%|██████████| 1.59G/1.59G [00:50<00:00, 34.2MB/s]
[... further download progress lines trimmed ...]
2023-12-27 13:34:17,636 - modelscope - WARNING - ('PIPELINES', 'my-anytext-task', 'my-custom-pipeline') not found in ast index file
2023-12-27 13:34:17,636 - modelscope - INFO - initiate model from /share/model/cv_anytext_text_generation_editing/damo/cv_anytext_text_generation_editing
2023-12-27 13:34:17,641 - modelscope - INFO - initiate model from location /share/model/cv_anytext_text_generation_editing/damo/cv_anytext_text_generation_editing.
2023-12-27 13:34:17,643 - modelscope - INFO - initialize model from /share/model/cv_anytext_text_generation_editing/damo/cv_anytext_text_generation_editing
2023-12-27 13:34:17,658 - modelscope - WARNING - ('MODELS', 'my-anytext-task', 'my-custom-model') not found in ast index file
Traceback (most recent call last):
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/utils/registry.py", line 210, in build_from_cfg
    return obj_cls._instantiate(**args)
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/models/base/base_model.py", line 67, in _instantiate
    return cls(**kwargs)
  File "/root/.cache/modelscope/modelscope_modules/cv_anytext_text_generation_editing/ms_wrapper.py", line 43, in __init__
    self.init_model(**kwargs)
  File "/root/.cache/modelscope/modelscope_modules/cv_anytext_text_generation_editing/ms_wrapper.py", line 218, in init_model
    self.font = ImageFont.truetype(font_path, size=60)
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/PIL/ImageFont.py", line 791, in truetype
    return freetype(font)
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/PIL/ImageFont.py", line 788, in freetype
    return FreeTypeFont(font, size, index, encoding, layout_engine)
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/PIL/ImageFont.py", line 226, in __init__
    self.font = core.getfont(
OSError: cannot open resource
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/utils/registry.py", line 212, in build_from_cfg
    return obj_cls(**args)
  File "/root/.cache/modelscope/modelscope_modules/cv_anytext_text_generation_editing/ms_wrapper.py", line 320, in __init__
    super().__init__(model=model, auto_collate=False)
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/pipelines/base.py", line 99, in __init__
    self.model = self.initiate_single_model(model)
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/pipelines/base.py", line 53, in initiate_single_model
    return Model.from_pretrained(
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/models/base/base_model.py", line 183, in from_pretrained
    model = build_model(model_cfg, task_name=task_name)
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/models/builder.py", line 35, in build_model
    model = build_from_cfg(
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/utils/registry.py", line 215, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
OSError: MyCustomModel: cannot open resource
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/share/ai/AnyText-main/demo.py", line 20, in <module>
    inference = pipeline('my-anytext-task', model='damo/cv_anytext_text_generation_editing', model_revision='v1.1.0')
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 170, in pipeline
    return build_pipeline(cfg, task_name=task)
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 65, in build_pipeline
    return build_from_cfg(
  File "/opt/miniconda3/envs/anytext/lib/python3.10/site-packages/modelscope/utils/registry.py", line 215, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
OSError: MyCustomPipeline: MyCustomModel: cannot open resource