zyxElsa / InST

Official implementation of the paper “Inversion-Based Style Transfer with Diffusion Models” (CVPR 2023)
Apache License 2.0

During the training process, I encountered the following issue. Has anyone run into a similar problem? Please help me solve it, thank you! #39

Closed lzlzlz1999 closed 11 months ago

lzlzlz1999 commented 11 months ago

```
(ldm) lz@manager-Precision-7920-Tower:~/Documents/InST$ python main.py --base configs/stable-diffusion/v1-finetune.yaml -t --actual_resume ./models/sd/sd-v1-4.ckpt -n log1_shuimo --gpus 0, --data_root /home/lz/Documents/InST/style
Global seed set to 23
Running on GPUs 0,
Loading model from ./models/sd/sd-v1-4.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Traceback (most recent call last):
  File "main.py", line 582, in <module>
    model = load_model_from_config(config, opt.actual_resume)
  File "main.py", line 29, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "/home/lz/Documents/InST/ldm/util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()), **kwargs)
  File "/home/lz/Documents/InST/ldm/models/diffusion/ddpm.py", line 477, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/home/lz/Documents/InST/ldm/models/diffusion/ddpm.py", line 561, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/home/lz/Documents/InST/ldm/util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()), **kwargs)
  File "/home/lz/Documents/InST/ldm/modules/encoders/modules.py", line 166, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/home/lz/anaconda3/envs/ldm/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1764, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "main.py", line 795, in <module>
    if trainer.global_rank == 0:
NameError: name 'trainer' is not defined
```

lullcant commented 11 months ago

This may be because Hugging Face is not reachable from your machine due to a network issue. Use a VPN (or any machine with access) to download the pretrained model from Hugging Face to your server or local computer, then change the encoder path to point at the downloaded model.

zyxElsa commented 11 months ago

Yes, I thought of the same solution.