google / model_search

Apache License 2.0

Blocks "CIFAR_NASA_REDUCTION" and "CIFAR_NASA" are not defined in block.py, and `net = tf_slim.batch_norm(net, is_training=is_training)` does not support float64 #35

Open davidxiaozhi opened 3 years ago

davidxiaozhi commented 3 years ago

blocks_to_use: "CIFAR_NASA_REDUCTION"
blocks_to_use: "CIFAR_NASA"

These two blocks are not defined in block.py.

davidxiaozhi commented 3 years ago

When tf.keras.backend.set_floatx('float64') is set for training on images (image data is loaded as float64), batch norm fails here:

 if self._apply_batch_norm:
      net = tf_slim.batch_norm(net, is_training=is_training)  # doesn't support float64
      # net = tf_slim.batch_norm(net, is_training=is_training, fused=False)  # supports float64

Another option is to cast the inputs to float32, but that also doesn't work very well. We recommend modifying block.py to expose a more flexible API.
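For reference, a minimal sketch of the float32-cast workaround described above. The wrapper name `batch_norm_float64_safe` is hypothetical, and `tf.keras.layers.BatchNormalization` stands in for `tf_slim.batch_norm` here just so the snippet is self-contained; this is not the repo's actual code.

```python
import tensorflow as tf

def batch_norm_float64_safe(net, training=True):
    # Hypothetical wrapper: the fused batch-norm kernel only handles
    # float32, so cast float64 activations down, normalize, and cast back.
    orig_dtype = net.dtype
    if orig_dtype == tf.float64:
        net = tf.cast(net, tf.float32)
    # Stand-in for tf_slim.batch_norm(net, is_training=is_training).
    net = tf.keras.layers.BatchNormalization()(net, training=training)
    return tf.cast(net, orig_dtype)

x = tf.random.uniform([2, 8, 8, 4], dtype=tf.float64)
y = batch_norm_float64_safe(x)
```

The round-trip cast keeps the rest of the float64 graph intact, at the cost of normalizing in reduced precision, which is why a flag on block.py (or `fused=False`) would be the cleaner fix.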

alexgoft commented 3 years ago

@davidxiaozhi have you found a solution?