ZetangForward (issue author):

I notice that when training the FID model, this repo applies the AddNoiseToBBox transform in fid/train.py #56. However, it may cause H or W to become smaller than 0, since H and W are decimals too. I ran into an error that verifies my guess.

Can you explain how to fix this problem, and why AddNoiseToBBox() is necessary? Thank you.

Repo author:

Maybe a quick remedy is to clamp the range after adding noise? I'm curious why I didn't happen to encounter the same bug. Did you change the std. of the noise?

> why AddNoiseToBBox() is necessary?

It is to make real and fake samples for the layout classifier by applying AddNoiseToBBox() with p=0.5.
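To make that mechanism concrete, here is a minimal sketch of how a noise transform and the p=0.5 real/fake labeling could fit together. It is a reconstruction from this discussion, not the repo's actual fid/train.py code: the std value, the AddNoiseToBBox body, and the make_classifier_sample helper are all assumptions.

```python
import random
import torch

class AddNoiseToBBox:
    """Perturbs normalized [x, y, w, h] boxes with Gaussian noise.
    Sketch only; the real transform lives in fid/train.py.
    std=0.02 is an assumed default, not the repo's value."""
    def __init__(self, std=0.02):
        self.std = std

    def __call__(self, data):
        # data.x: (N, 4) tensor of box coordinates normalized to [0, 1].
        # Noise can push any coordinate, including W and H, below 0,
        # which is the bug reported in this issue.
        data.x = data.x + torch.randn_like(data.x) * self.std
        return data

def make_classifier_sample(data, p=0.5, noise=AddNoiseToBBox()):
    """With probability p, corrupt the layout and label it fake (0);
    otherwise keep it real (1). A layout classifier trained on these
    pairs supplies the features used to compute FID."""
    if random.random() < p:
        return noise(data), 0  # fake: noisy boxes
    return data, 1             # real: untouched boxes
```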
ZetangForward:

Thanks for your reply. Yes, I clamped the range after adding noise, and the problem is solved. Maybe it only shows up on some machines (I guess :)
Another user:

I encountered the same problem with my custom dataset. It is caused by AddNoiseToBBox. I solved it by adding data.x = torch.clamp(data.x, min=0.0, max=1.0) to AddNoiseToBBox, but I'm not sure whether this will cause other problems.
Repo author:

Thank you for reporting. Since the predicted values are usually bounded to the range from 0.0 to 1.0, it probably has no side effects.
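For anyone landing here, a standalone sketch of the fix the thread settles on: clamping inside the transform after adding noise. The class body is again an assumption; only the torch.clamp line comes verbatim from the comment above.

```python
import torch

class AddNoiseToBBox:
    """Noise transform with the fix discussed in this thread: clamp the
    perturbed coordinates back to [0, 1] so W and H can never become
    negative. std=0.02 remains an assumed default."""
    def __init__(self, std=0.02):
        self.std = std

    def __call__(self, data):
        data.x = data.x + torch.randn_like(data.x) * self.std
        # The fix; safe because the predicted values are bounded to
        # the range [0.0, 1.0] anyway.
        data.x = torch.clamp(data.x, min=0.0, max=1.0)
        return data
```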