Hi, thanks for your code. I am quite interested in the comparison between the contextual loss and nearest neighbor search in the feature domain. The contextual loss is defined in a heuristic manner, and if I understand correctly, nearest neighbor search is equivalent to minimizing the KL-divergence. Why does the contextual loss consistently give fewer artifacts in the paper? Is there an intuitive explanation for this?
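To make my question concrete, here is a minimal NumPy sketch of the two matching schemes as I understand them (my own simplification, following the affinity definition in Mechrez et al.; the bandwidth `h` and the direction of the max are assumptions on my part, so please correct me if this differs from your implementation):

```python
import numpy as np

def pairwise_cosine_dist(x, y):
    # x: (N, C), y: (M, C) feature vectors, e.g. spatial positions of a VGG layer.
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    y = y / (np.linalg.norm(y, axis=1, keepdims=True) + 1e-8)
    return 1.0 - x @ y.T  # (N, M), in [0, 2]

def nn_loss(x, y):
    # Hard nearest-neighbor matching: each source feature pays the
    # distance to its single closest target feature.
    d = pairwise_cosine_dist(x, y)
    return d.min(axis=1).mean()

def contextual_loss(x, y, h=0.5):
    # Soft matching: distances are normalized *relative to each row's
    # minimum*, turned into affinities with a softmax-like kernel, and
    # the loss takes the maximal affinity per row instead of the
    # minimal raw distance.
    d = pairwise_cosine_dist(x, y)
    d_norm = d / (d.min(axis=1, keepdims=True) + 1e-5)  # relative distances
    w = np.exp((1.0 - d_norm) / h)
    cx = w / w.sum(axis=1, keepdims=True)               # contextual affinity CX_ij
    return -np.log(cx.max(axis=1).mean() + 1e-8)
```

If this sketch is roughly right, the only substantive difference is the relative normalization plus the soft affinity, so I am curious why that alone suppresses artifacts so reliably.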
Thanks a lot!