GenoburyUkawa: Thanks for your excellent work! I would like to ask: what are the specific differences in how these two approaches calculate IDS (identity similarity), and why is the gap between them so large?

Reply: As stated in the implementation details, HairCLIPv1 calculates the identity similarity between the edited result and the e4e-inverted image, while HairCLIPv2 calculates it between the edited result and the original image. Because e4e inversion itself already loses some identity information, a score measured against the inversion excludes that reconstruction error, whereas a score measured against the original image includes it, which accounts for the large gap. This means that our HairCLIPv2 takes a step forward and proposes a multimodal hair editing method that is more suitable for real image editing.