Closed tao-bai closed 2 years ago
Hi! Glad you like our work :)
This clipping is done purely for visualization purposes. Throughout the rest of the code, there is no clipping done.
The distilled images often contain extreme values, so outliers skew the [0,1] min-max normalization used for display, leading to badly saturated images.
2.5 standard deviations was just a value I chose that typically produced good-looking images for visualization.
You do not need to do any clipping when training on the synthetic images.
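To make the above concrete, here is a rough sketch of the visualization-only clipping described in this thread (the function name and NumPy implementation are my own illustration, not the repo's actual code): clip to mean ± clip_val standard deviations, then min-max normalize to [0, 1] for display.

```python
import numpy as np

def clip_for_visualization(images, clip_val=2.5):
    """Clip to +/- clip_val std around the mean, then min-max
    normalize to [0, 1]. For display only -- training should use
    the raw, unclipped synthetic images."""
    mean, std = images.mean(), images.std()
    clipped = np.clip(images, mean - clip_val * std, mean + clip_val * std)
    # min-max normalize the clipped values into [0, 1] for plotting
    return (clipped - clipped.min()) / (clipped.max() - clipped.min())

# A synthetic image with one extreme outlier: without clipping,
# plain [0, 1] normalization would compress everything else
# into a narrow, washed-out range.
img = np.random.randn(32, 32)
img[0, 0] = 50.0
vis = clip_for_visualization(img)
```

Without the clip, the single outlier at 50.0 would dominate the normalization and the rest of the image would render nearly uniform; clipping first keeps the bulk of the pixel values spread across the displayable range.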
Let me know if you have any other questions!
Thanks for your prompt reply! That's clear to me now.
Hi, thanks for your great work! I am curious about clip_val. Why did you choose 2.5, and why is clipping needed at all? Could you please explain a little? Also, when training with the distilled data, we don't need clipping, right?