Open slawomir-gilewski-wttech opened 1 year ago
Hi,
has anyone run into this problem? I tried the command
```shell
python main.py \
    --name teddy \
    --base ./configs/perfusion_teddy.yaml \
    --basedir ./ckpt \
    -t True \
    --gpus 0,
```
but got the error below:
```
Traceback (most recent call last):
  File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/main.py", line 461, in <module>
    model = instantiate_from_config(config.model)
  File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/main.py", line 137, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/perfusion/perfusion.py", line 54, in __init__
    super().__init__(*args, **kwargs)
  File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/models/diffusion/ddpm.py", line 565, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/models/diffusion/ddpm.py", line 640, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()), **kwargs)
  File "/data_heat/rjt_project/Key-Locked-Rank-One-Editing-for-Text-to-Image-Personalization/ldm/modules/encoders/modules.py", line 120, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/home/user/miniconda3/envs/perfusion_v1/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1841, in from_pretrained
    return cls._from_pretrained(
  File "/home/user/miniconda3/envs/perfusion_v1/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2004, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/user/miniconda3/envs/perfusion_v1/lib/python3.10/site-packages/transformers/models/clip/tokenization_clip.py", line 334, in __init__
    with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
Thanks
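For anyone hitting the same traceback: it suggests that `CLIPTokenizer.from_pretrained(version)` could not resolve a `vocab.json` for the requested checkpoint (in the LDM CLIP encoder, `version` typically points at `openai/clip-vit-large-patch14`), so the tokenizer's `vocab_file` ends up as `None` and gets passed straight to `open()`. Checking that the checkpoint name in the config is correct and that the Hugging Face cache/network is reachable would be my first step. A minimal sketch of the root cause (no `transformers` needed; `vocab_file = None` stands in for the unresolved checkpoint file):

```python
# Sketch of what tokenization_clip.py effectively does when the CLIP
# checkpoint files cannot be resolved: vocab_file is None, and open()
# rejects it with exactly the TypeError from the traceback above.
vocab_file = None  # what the tokenizer receives when no vocab.json is found

try:
    with open(vocab_file, encoding="utf-8") as vocab_handle:
        vocab_handle.read()
except TypeError as exc:
    print(exc)  # expected str, bytes or os.PathLike object, not NoneType
```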
Hi! After increasing the `num_vectors_per_token` parameter in the config YAML file, I get an error when I try to run the training:
I figured this is probably because `placeholder_token` is supposed to be a tensor, so I fixed it by adding the line `placeholder_token = torch.tensor(placeholder_token).to(device)` to `perfusion/embedding_manager.py`, in the `else` clause after the `if self.max_vectors_per_token == 1` check (line 116). It seems to work after that :)
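To make that fix a bit more concrete, here is a minimal, self-contained sketch of the conversion; the token id and the `torch.is_tensor` guard are illustrative additions, not the repo's exact code:

```python
import torch

# When num_vectors_per_token > 1, placeholder_token reaches the else-branch
# as a plain Python int, while the multi-vector code path expects a tensor.
device = "cuda" if torch.cuda.is_available() else "cpu"

placeholder_token = 265  # hypothetical token id assigned to the new concept
if not torch.is_tensor(placeholder_token):
    # the added line from the fix: convert the int id into a 0-d tensor
    placeholder_token = torch.tensor(placeholder_token).to(device)

print(torch.is_tensor(placeholder_token))  # True
```

Guarding with `torch.is_tensor` keeps the line safe even if upstream code ever starts passing a tensor already.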