I have noticed that with some datasets the training error drops to zero very quickly and eventually becomes NaN, with the following warning emitted when the label class is created:
/ZNN_Dataset.py:519: RuntimeWarning: invalid value encountered in greater
ret[0, :,:,:] = (lbl[0,:,:,:]>0).astype(self.pars['dtype'])
iteration 186047, err: 0.000, cls: 0.000, elapsed: 24.7 s/iter, learning rate: 0.009332
./znn-release/python/cost_fn.py:129: RuntimeWarning: divide by zero encountered in log
err = err + np.sum( -lbl * np.log(prop) )
./znn-release/python/cost_fn.py:129: RuntimeWarning: invalid value encountered in multiply
err = err + np.sum( -lbl * np.log(prop) )
iteration 186048, err: nan, cls: 0.000, elapsed: 24.3 s/iter, learning rate: 0.009332
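For what it's worth, the NaN itself seems to follow directly from the cross-entropy term in cost_fn.py: once a predicted probability saturates at exactly 0, np.log(prop) is -inf, and multiplying that by a zero label entry gives NaN. A minimal sketch reproducing both warnings (hypothetical toy arrays, not the real ZNN data; the clipping at the end is just one standard workaround, not necessarily what the framework should do):

```python
import numpy as np

# Toy label/prediction pair: prop saturates at exactly 0 where lbl is 0,
# so -lbl * np.log(prop) evaluates 0 * (-inf), which is NaN.
lbl = np.array([0.0, 1.0])
prop = np.array([0.0, 1.0])  # fully saturated prediction

with np.errstate(divide='ignore', invalid='ignore'):
    err = np.sum(-lbl * np.log(prop))  # NaN, as in the training log

# Clipping the prediction away from 0 and 1 keeps the log finite:
eps = 1e-12
err_clipped = np.sum(-lbl * np.log(np.clip(prop, eps, 1.0 - eps)))
```

This only explains how the error turns into NaN once predictions saturate, not why the error collapses to zero so fast in the first place.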
So I commented out line 516 of ZNN_Dataset.py, which at least fixed the quick drop to zero/NaN:
As I was not using your framework for EM segmentation but rather on "easier" structures obtained from a two-photon microscope, could utils.fill_boundary_holes() simply work poorly in some non-EM cases?