When I run this command:

!python run_net.py \
  --cfg configs/exp01_vidcomposer_full.yaml \
  --input_video "demo_video/blackswan.mp4" \
  --input_text_desc "A black swan swam in the water" \
  --seed 9999

it fails with the error below:
[2023-08-31 10:58:03,125] INFO: Loading ViT-H-14 model config.
[2023-08-31 10:58:15,720] WARNING: Pretrained weights (/content/vc/model_weights/open_clip_pytorch_model.bin) not found for model ViT-H-14.
Traceback (most recent call last):
  File "/content/vc/run_net.py", line 36, in <module>
    main()
  File "/content/vc/run_net.py", line 28, in main
    inference_multi(cfg.cfg_dict)
  File "/content/vc/tools/videocomposer/inference_multi.py", line 345, in inference_multi
    worker(0, cfg)
  File "/content/vc/tools/videocomposer/inference_multi.py", line 421, in worker
    clip_encoder = FrozenOpenCLIPEmbedder(layer='penultimate',pretrained = DOWNLOAD_TO_CACHE(cfg.clip_checkpoint))
  File "/content/vc/tools/videocomposer/inference_multi.py", line 108, in __init__
    model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=pretrained)
  File "/usr/local/lib/python3.10/dist-packages/open_clip/factory.py", line 151, in create_model_and_transforms
    model = create_model(
  File "/usr/local/lib/python3.10/dist-packages/open_clip/factory.py", line 122, in create_model
    raise RuntimeError(f'Pretrained weights ({pretrained}) not found for model {model_name}.')
RuntimeError: Pretrained weights (/content/vc/model_weights/open_clip_pytorch_model.bin) not found for model ViT-H-14.
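From reading open_clip/factory.py, create_model raises this RuntimeError whenever the pretrained argument is neither a recognized pretrained tag nor a path to an existing checkpoint file, so /content/vc/model_weights/open_clip_pytorch_model.bin is simply missing. As a minimal sketch of a workaround, I tried fetching the ViT-H-14 weights myself; the Hugging Face repo id and filename below are my assumption (the standard LAION OpenCLIP release), not something the VideoComposer docs confirm:

```python
# Hedged sketch: confirm the checkpoint is missing, then try to fetch it.
# Assumption: the ViT-H-14 weights are the standard LAION OpenCLIP release
# on Hugging Face; VideoComposer's setup may expect a different source.
import os
from huggingface_hub import hf_hub_download

ckpt = "/content/vc/model_weights/open_clip_pytorch_model.bin"
print(os.path.isfile(ckpt))  # False here, which matches the RuntimeError

path = hf_hub_download(
    repo_id="laion/CLIP-ViT-H-14-laion2B-s32B-b79K",  # assumed source repo
    filename="open_clip_pytorch_model.bin",
    local_dir="/content/vc/model_weights",
)
print(path)  # should now be the path cfg.clip_checkpoint resolves to
```

Is downloading the checkpoint like this the right fix, or does the repo provide its own download step for cfg.clip_checkpoint?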