Open davidxiaozhi opened 3 years ago
When calling `tf.keras.backend.set_floatx('float64')` before training on images (the image data is float64), the following fails because `tf_slim.batch_norm` does not support float64 in its default fused mode:
```python
if self._apply_batch_norm:
    net = tf_slim.batch_norm(net, is_training=is_training)  # fused mode doesn't support float64
    # net = tf_slim.batch_norm(net, is_training=is_training, fused=False)  # supports float64
```
Another workaround is to cast the tensors to float32, but that also doesn't work well here. We recommend modifying block.py to expose a flexible API for this.
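As a minimal sketch of the casting workaround mentioned above (using NumPy only for illustration; the helper name `to_float32` is hypothetical, not part of the project's API): image batches created as float64 can be downcast to float32 before being fed to layers that only support float32, such as the fused batch norm.

```python
import numpy as np

def to_float32(batch):
    """Hypothetical helper: downcast a float64 batch to float32.

    Leaves other dtypes untouched, so it is safe to call unconditionally
    before a float32-only layer (e.g. fused batch norm).
    """
    if batch.dtype == np.float64:
        return batch.astype(np.float32)
    return batch

# np.random.rand returns float64, mimicking float64 image input.
images = np.random.rand(4, 32, 32, 3)
print(to_float32(images).dtype)  # float32
```

The downside, as noted above, is that this silently loses float64 precision for the rest of the network, which is why a configurable option in block.py would be preferable.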
@davidxiaozhi have you found a solution?
```
blocks_to_use: "CIFAR_NASA_REDUCTION"
blocks_to_use: "CIFAR_NASA"
```

These two blocks are not defined in block.py.