tomguluson92 opened this issue 11 months ago
cross_map_replace_steps and self_output_replace_steps are two parameters of the controller, and they are used in this function: https://github.com/eric-ai-lab/photoswap/blob/d631e8ec6bc1b9260f80fdcd983a602d9299aef0/utils.py#L214. You should be able to work out how they take effect if you trace how the controller is called.

self.local_blend controls whether part of the generated image's latent at each diffusion step is swapped directly with the latent of the source image, so that the background pixels can be borrowed directly from the original image.
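For intuition, here is a minimal sketch (not the actual photoswap implementation) of how a controller of this kind could gate the two kinds of replacement by diffusion step, and how a local blend could copy background latents from the source. The names `SketchController`, `step_callback`, and `background_mask` are assumptions for illustration, not the repository's API.

```python
import torch

class SketchController:
    """Illustrative controller: swap attention values from the source image
    during the early fraction of diffusion steps."""

    def __init__(self, num_steps, cross_map_replace_steps, self_output_replace_steps):
        self.num_steps = num_steps
        self.cross_map_replace_steps = cross_map_replace_steps      # fraction of steps, e.g. 0.3
        self.self_output_replace_steps = self_output_replace_steps  # fraction of steps, e.g. 0.4
        self.cur_step = 0

    def __call__(self, value, is_cross, source_value):
        """Return the source image's cross-attention map (or self-attention
        output) while the current step is below the corresponding threshold."""
        progress = self.cur_step / self.num_steps
        if is_cross and progress < self.cross_map_replace_steps:
            return source_value            # swap the cross-attention map
        if (not is_cross) and progress < self.self_output_replace_steps:
            return source_value            # swap the self-attention output
        return value                       # otherwise keep the generated value

    def step_callback(self, latents, source_latents, background_mask=None):
        """Optional local blend: copy background latents from the source so the
        background is borrowed directly from the original image."""
        self.cur_step += 1
        if background_mask is not None:
            latents = torch.where(background_mask.bool(), source_latents, latents)
        return latents

# Example: with 50 steps and cross_map_replace_steps=0.3, the cross-attention
# maps of the first 15 steps come from the source image.
```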
Thanks for the quick reply!

But I still can't find anything inside swapping_class.py that uses cross_map_replace_steps and self_output_replace_steps. The only member functions that are used are replace_cross_attention and replace_self_attention. I have also checked the code snippet in utils.py multiple times: neither self.replace_cross_attention(xxx) nor self.replace_self_attention(xxx) is called there, and local_blend is not used during inference either.
I think the cross-attention replacement threshold is actually defined where cross_replace_steps is taken as the parameter, and self.cross_replace_alpha then plays a role in deciding how much of the cross-attention map is replaced at each step.
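For reference, here is a minimal sketch, in the style of prompt-to-prompt, of how a per-step alpha built from cross_replace_steps can control how much of the cross-attention map is replaced. The helper names below are illustrative and are not taken from swapping_class.py.

```python
import torch

def make_cross_replace_alpha(num_steps: int, cross_replace_steps: float) -> torch.Tensor:
    # alpha[t] = 1 while t / num_steps < cross_replace_steps, else 0
    alpha = torch.zeros(num_steps)
    alpha[: int(num_steps * cross_replace_steps)] = 1.0
    return alpha

def blend_cross_attention(attn_base, attn_generated, alpha_t):
    # When alpha_t is 1 the source map fully replaces the generated map;
    # when it is 0 the generated map is left untouched.
    return alpha_t * attn_base + (1.0 - alpha_t) * attn_generated

# Usage: with 50 steps and cross_replace_steps=0.3, the first 15 steps use the
# source cross-attention map and the remaining 35 keep the generated one.
alpha = make_cross_replace_alpha(50, 0.3)
attn = blend_cross_attention(torch.rand(8, 64, 77), torch.rand(8, 64, 77), alpha[0])
```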
Dear authors,
Thanks for your brilliant work! When I dived into swapping_class.py (https://github.com/eric-ai-lab/photoswap/blob/main/swapping_class.py#L179) to understand the usage of cross_map_replace_steps, self_output_replace_steps, and self_map_replace_steps, I didn't find these parameters used in either self.replace_cross_attention(xxx) or self.replace_self_attention(xxx). I also want to know about the usage of self.local_blend; I didn't find it used anywhere in the code. Could you please tell me the meaning of these settings and how they actually influence the generated result?
Thanks.