CompVis / depth-fm

DepthFM: Fast Monocular Depth Estimation with Flow Matching
MIT License

OSError: runwayml/stable-diffusion-v1-5 does not appear to have a file named config.json. #16

Open wudabingm opened 7 months ago

wudabingm commented 7 months ago

Traceback (most recent call last):
  File "/home/lhs/project/nerf...wu/depth-fm-main/inference.py", line 113, in <module>
    main(args)
  File "/home/lhs/project/nerf...wu/depth-fm-main/inference.py", line 64, in main
    model = DepthFM(args.ckpt)
  File "/home/lhs/project/nerf...wu/depth-fm-main/depthfm/dfm.py", line 21, in __init__
    self.vae = AutoencoderKL.from_pretrained(vae_id, subfolder="vae")
  File "/home/lhs/.conda/envs/depthfm/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/lhs/.conda/envs/depthfm/lib/python3.11/site-packages/diffusers/models/modeling_utils.py", line 569, in from_pretrained
    config, unused_kwargs, commit_hash = cls.load_config(
  File "/home/lhs/.conda/envs/depthfm/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/lhs/.conda/envs/depthfm/lib/python3.11/site-packages/diffusers/configuration_utils.py", line 402, in load_config
    raise EnvironmentError(
OSError: runwayml/stable-diffusion-v1-5 does not appear to have a file named config.json.
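
The failing call is the VAE load in depthfm/dfm.py, which fetches runwayml/stable-diffusion-v1-5 from the Hugging Face Hub at runtime. A minimal sketch of one possible workaround, assuming you can obtain the Stable Diffusion v1.5 files some other way; the local path below is hypothetical and not part of the repo:

from diffusers import AutoencoderKL

# Hypothetical local copy of the Stable Diffusion v1.5 repository, downloaded
# separately (for example from a mirror). Pointing from_pretrained at a local
# directory avoids the Hub lookup that fails with the config.json error above.
vae_id = "./pretrained_model/stable-diffusion-v1-5"
vae = AutoencoderKL.from_pretrained(vae_id, subfolder="vae")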

GaoLL1026 commented 7 months ago

I ran into this problem too. Has it been solved?

Xudong-Mao commented 7 months ago

The problem can apparently be worked around by using Google Colab, but I encountered the same issue when attempting to run it locally.

wudabingm commented 7 months ago

> The problem can apparently be worked around by using Google Colab, but I encountered the same issue when attempting to run it locally.

Could you please tell me how to run it on Google Colab? Also, regarding the config error: I can run it locally, but not on the server.

wudabingm commented 7 months ago

> I ran into this problem too. Has it been solved?

Not solved yet, bro.

Xudong-Mao commented 7 months ago

> The problem can apparently be worked around by using Google Colab, but I encountered the same issue when attempting to run it locally.

> Could you please tell me how to run it on Google Colab? Also, regarding the config error: I can run it locally, but not on the server.

Just look at the code in the inference.ipynb file, change dev = 'cuda:4' to dev = 'cuda:0' in the Inference cell, and execute all the code from start to finish.

wudabingm commented 7 months ago

> The problem can apparently be worked around by using Google Colab, but I encountered the same issue when attempting to run it locally.

> Could you please tell me how to run it on Google Colab? Also, regarding the config error: I can run it locally, but not on the server.
>
> Just look at the code in the inference.ipynb file, change dev = 'cuda:4' to dev = 'cuda:0' in the Inference cell, and execute all the code from start to finish.

But I get the same error at the load-model step in inference.ipynb: runwayml/stable-diffusion-v1-5 does not appear to have a file named config.json.

woshiwahah commented 6 months ago

Has anyone solved this?

Running the command like this works, because the connection to Hugging Face is unstable and the mirror avoids it:

HF_ENDPOINT=https://hf-mirror.com python inference.py --num_steps 2 --ensemble_size 4 --img assets/dog.png --ckpt checkpoints/depthfm-v1.ckpt
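
The same mirror workaround can also be applied from inside Python, assuming the environment variable is set before huggingface_hub and diffusers are imported; a minimal sketch, with the DepthFM import path assumed from the traceback above:

import os

# Must be set before huggingface_hub / diffusers are imported, since the
# endpoint is read when those libraries are first loaded.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from depthfm import DepthFM  # import path assumed from inference.py in the traceback

model = DepthFM("checkpoints/depthfm-v1.ckpt")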

yhy-2000 commented 2 months ago

Fixed it by changing

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "./pretrained_model/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)

to

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "./pretrained_model/stable-diffusion-v1-5",
    controlnet=controlnet,
    safety_checker=None,
    torch_dtype=torch.float16,
)
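
Passing safety_checker=None presumably helps because diffusers then skips loading the safety-checker sub-model entirely, so its missing config is never requested. A self-contained sketch of that fix, with hypothetical local paths:

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Hypothetical local paths; the relevant change is only safety_checker=None.
controlnet = ControlNetModel.from_pretrained(
    "./pretrained_model/controlnet", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "./pretrained_model/stable-diffusion-v1-5",
    controlnet=controlnet,
    safety_checker=None,  # skip the safety-checker component instead of loading its config
    torch_dtype=torch.float16,
)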

Hxy-Gra commented 1 month ago

In my case, the cause was that I had not passed safety_checker=None in the from_pretrained arguments; adding it fixed the problem.