Open EYcab opened 7 months ago
Yes, I encountered the same thing here: the loss becomes NaN after some epochs. I tried different reward functions, and the result was the same.
I figured out the reason. You can set config.mixed_precision to "no" in base.py so that training runs in full precision; that should stop the unet from producing NaN.
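A minimal sketch of the change, assuming a `get_config()`-style config in `base.py` (the exact file layout may differ in your checkout):

```python
# base.py (path assumed; adjust to your repo layout)

# Half precision ("fp16") can overflow inside the unet forward pass,
# which then propagates NaN into the loss. Forcing full precision
# keeps all unet computation in fp32.
config.mixed_precision = "no"   # was "fp16"
```

This trades some speed and memory for numerical stability; if fp32 is too slow, "bf16" (where supported) is a common middle ground because it has the same exponent range as fp32.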
Does anyone know why this unet step always produces NaN results, even though all the settings are configured accordingly and all the other input variables are the same?
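One plausible mechanism (a hedged illustration, not a diagnosis of this exact repo): fp16 saturates at about 65504, so a moderately large intermediate value overflows to inf, and subsequent arithmetic on inf (e.g. inf - inf) yields NaN. The same computation in fp32 stays finite, which is consistent with the mixed-precision workaround above:

```python
import numpy as np

# exp(12) ~ 162754, which exceeds the fp16 maximum (~65504),
# so the result overflows to inf in half precision.
y = np.exp(np.float16(12.0))
print(y)            # inf

# Arithmetic on inf then produces NaN, which poisons the loss.
loss = y - y
print(loss)         # nan

# In fp32 the same computation stays finite and well-defined.
x32 = np.float32(12.0)
print(np.exp(x32) - np.exp(x32))   # 0.0
```

In a real training run the overflow typically happens deep inside an attention or normalization layer, so the NaN only surfaces once it reaches the loss.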