yeates / PromptFix

[NeurIPS 24] PromptFix: You Prompt and We Fix the Photo

An error about "disable_hf_guidance" #4

Closed: ZZQ987 closed this issue 4 weeks ago

ZZQ987 commented 1 month ago

Hello, great job! However, I encountered a small issue. When I set the `disable_hf_guidance` parameter to False to enable `hf_guidance`, I received the following error:


> Traceback (most recent call last):
>   File "/data1/student/***/instruct-pix2pix/PromptFix/scripts/../process_images_json.py", line 192, in <module>
>     main()
>   File "/data1/student/***/instruct-pix2pix/PromptFix/scripts/../process_images_json.py", line 181, in main
>     z = K.sampling.sample_euler_ancestral(model_wrap_cfg, z, sigmas, extra_args=extra_args)
>   File "/data1/student/***/anaconda3/envs/ip2p/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
>     return func(*args, **kwargs)
>   File "/data1/student/***/anaconda3/envs/ip2p/lib/python3.8/site-packages/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
>     denoised = model(x, sigmas[i] * s_in, **extra_args)
>   File "/data1/student/***/anaconda3/envs/ip2p/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
>     return self._call_impl(*args, **kwargs)
>   File "/data1/student/***/anaconda3/envs/ip2p/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
>     return forward_call(*args, **kwargs)
>   File "/data1/student/***/instruct-pix2pix/PromptFix/scripts/../process_images_json.py", line 97, in forward
>     z = self.hf_guidance(z, sigma, cond, x_original=input_image)
>   File "/data1/student/***/instruct-pix2pix/PromptFix/scripts/../process_images_json.py", line 89, in hf_guidance
>     grad_cond = torch.autograd.grad(loss_hfp.requires_grad_(True), [z])[0]
>   File "/data1/student/***/anaconda3/envs/ip2p/lib/python3.8/site-packages/torch/autograd/__init__.py", line 436, in grad
>     result = _engine_run_backward(
>   File "/data1/student/***/anaconda3/envs/ip2p/lib/python3.8/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward
>     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
> **RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.**

Have you encountered a similar situation? My environment configuration is the same as the one used by the ip2p (instruct-pix2pix) project.

ZZQ987 commented 1 month ago

More details:

  1. `torch.autograd.grad(loss_hfp.requires_grad_(True), [z])` raises:

     `RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.`

  2. `loss_hfp.requires_grad_(True).grad_fn` is `None`

  3. `z.grad_fn` is `None`

  4. Calling `requires_grad_` seems to have no effect
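
For reference, these symptoms match the general PyTorch behavior when a loss is computed from tensors that are detached from `z` (for example inside `torch.no_grad()`): the loss has no `grad_fn`, so `z` never appears in its graph. A minimal sketch with illustrative tensors (not PromptFix's actual code):

```python
import torch

z = torch.randn(1, 4, 8, 8, requires_grad=True)

with torch.no_grad():             # e.g. a denoiser call wrapped in no_grad
    denoised = z * 2.0            # denoised.grad_fn is None here

loss_hfp = (denoised ** 2).mean()
print(loss_hfp.grad_fn)           # None: the loss is disconnected from z

# This reproduces the reported error, because z is not used in loss_hfp's graph:
# torch.autograd.grad(loss_hfp.requires_grad_(True), [z])
# RuntimeError: One of the differentiated Tensors appears to not have been used
# in the graph. Set allow_unused=True if this is the desired behavior.
```

Passing `allow_unused=True` only silences the error and returns `None` for the gradient; the underlying issue is that the loss has to be computed while `z` is still part of the autograd graph.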

yeates commented 1 month ago

Thank you for your attention to our work. The `disable_hf_guidance` parameter has been deprecated in the release version and should always be set to `True`. We found that online hf-guidance sampling was somewhat slow for practical use, so in the released code we pre-trained a LoRA for the same purpose, which speeds up inference while maintaining high-frequency details. If you want to see the results without hf-guidance, you can try setting `enable_decoder_cond_lora` to `False` in the config YAML.
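
For anyone who still wants to experiment with the deprecated online hf-guidance path, the usual way to make such a guidance gradient work is to recompute the loss with `z` inside the autograd graph. A rough sketch of the general pattern, using placeholder `denoise_fn` and `hf_loss_fn` rather than PromptFix's functions:

```python
import torch

def guided_update(z, sigma, denoise_fn, hf_loss_fn, scale=1.0):
    """Sketch of a gradient-guidance step (general pattern, not PromptFix's code).

    The guidance loss must be a function of z through the autograd graph;
    wrapping the denoiser call in torch.no_grad() breaks that link and leads
    to the RuntimeError reported above.
    """
    with torch.enable_grad():
        z = z.detach().requires_grad_(True)
        denoised = denoise_fn(z, sigma)         # keep this call differentiable
        loss = hf_loss_fn(denoised)             # depends on z via denoised
        grad = torch.autograd.grad(loss, z)[0]
    return z.detach() - scale * grad
```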