I'm trying to reproduce the work "Deng, Q., Cao, J., Liu, Y., Chai, Z., Li, Q., & Sun, Z. (2020). Reference guided face component editing. IJCAI International Joint Conference on Artificial Intelligence, 2021-Janua, 502–508. https://doi.org/10.24963/ijcai.2020/70" which uses the contextual loss.
According to the paper, the inputs to the contextual loss are masked images (see Eqn. 7). I was able to use this loss on non-masked images without issues. However, using masked images results in a NaN loss.
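For context, here is a minimal NumPy sketch of a simplified contextual loss (following the usual cosine-similarity formulation) that reproduces the symptom I'm seeing. This is illustrative code, not the paper's actual pipeline: the feature shapes and values are toy assumptions. My suspicion is that when the mask zeroes out a patch, its centered feature vector can end up with zero norm, so the cosine normalization divides by zero and the NaN propagates into the loss:

```python
import numpy as np

def contextual_loss(x, y, h=0.5, eps=1e-5):
    """Simplified contextual loss over flattened (N, C) feature vectors."""
    mu_y = y.mean(axis=0, keepdims=True)
    xc, yc = x - mu_y, y - mu_y
    # cosine normalization: a zero-norm row (fully masked patch) divides by zero
    xn = xc / np.linalg.norm(xc, axis=1, keepdims=True)
    yn = yc / np.linalg.norm(yc, axis=1, keepdims=True)
    d = 1.0 - xn @ yn.T                              # cosine distances
    d_norm = d / (d.min(axis=1, keepdims=True) + eps)
    w = np.exp((1.0 - d_norm) / h)
    cx = w / w.sum(axis=1, keepdims=True)
    return -np.log(cx.max(axis=0).mean())

# toy features: the second row of x is a fully masked (all-zero) patch,
# and y happens to be zero-mean, so the centered x row has zero norm
x = np.array([[1.0, 2.0], [0.0, 0.0]])
y = np.array([[1.0, -1.0], [-1.0, 1.0]])
with np.errstate(invalid="ignore", divide="ignore"):
    print(contextual_loss(x, y))  # nan
```

With non-masked features (no all-zero rows) the same function returns a finite value, which matches what I observe.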
Has anyone experienced this before?