Dear @kinredon,
Thanks for your work and for sharing the code. I have a question I'd like to ask:
I realised that when I try to use augmentation on the source data, I get NaN values in the RPN box loss during source training. Going through the code, I noticed this is because the augmentation transform somehow makes dh negative, which produces a negative argument to the log and hence the NaN values. Do you know how to mitigate this? The problem persists whenever I apply a transform to the source data, but it does not seem to occur on the target data. (It would be silly of me to conclude that augmentation only breaks on source data while working on target, which is why I am trying to get to the bottom of it.)
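In case it helps narrow things down, below is a minimal sketch of the workaround I am experimenting with: clipping the augmented ground-truth boxes back into the image and dropping degenerate ones (width or height below a threshold) before the regression targets are computed, so the height ratio fed to the log can never be non-positive. The function name `sanitize_boxes` and the `(N, 4)` `[x1, y1, x2, y2]` layout are my own assumptions, not names from this repo:

```python
import numpy as np

def sanitize_boxes(boxes, img_w, img_h, min_size=1.0):
    """Clip augmented boxes to the image and drop degenerate ones.

    boxes: (N, 4) array of [x1, y1, x2, y2] (assumed layout; adapt to
    whatever box format this repo actually uses). Returns the cleaned
    boxes plus a keep mask so class labels can be filtered alongside.
    """
    boxes = boxes.copy()
    boxes[:, 0::2] = np.clip(boxes[:, 0::2], 0, img_w - 1)  # clamp x1, x2
    boxes[:, 1::2] = np.clip(boxes[:, 1::2], 0, img_h - 1)  # clamp y1, y2
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    keep = (w >= min_size) & (h >= min_size)  # drop zero/negative-size boxes
    return boxes[keep], keep
```

The idea is to call this right after the augmentation step, before the RPN targets are built, so that dh stays strictly positive; it masks the symptom, though, rather than explaining why only the source pipeline is affected.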
(P.S. There is a small error in the bbs2numpy function, where h is computed as y2 - y2 but should be y2 - y1. I fixed that in my copy, but it did not resolve the problem described above.)
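For reference, here is that one-line fix as I applied it, shown as a self-contained sketch; the attribute names (`bb.x1` etc.) assume imgaug-style BoundingBox objects, and the `[x, y, w, h]` output layout is my guess, so the real bbs2numpy may differ:

```python
import numpy as np

def bbs2numpy(bbs):
    """Convert bounding-box objects to an (N, 4) [x, y, w, h] array (assumed layout)."""
    out = np.zeros((len(bbs), 4), dtype=np.float32)
    for i, bb in enumerate(bbs):
        out[i, 0] = bb.x1
        out[i, 1] = bb.y1
        out[i, 2] = bb.x2 - bb.x1
        out[i, 3] = bb.y2 - bb.y1  # was bb.y2 - bb.y2, which is always zero
    return out
```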