ZFTurbo / Keras-inference-time-optimizer

Optimize the layer structure of a Keras model to reduce computation time
MIT License

How is this repo related to fused BN in TensorFlow? #6

Open mrgloom opened 6 years ago

mrgloom commented 6 years ago

How is this repo related to fused BN in TensorFlow? Is it doing roughly the same thing? https://www.tensorflow.org/api_docs/python/tf/nn/fused_batch_norm https://www.tensorflow.org/performance/performance_guide#common_fused_ops
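For context, the idea shared by this repo and TensorFlow's fused/folded BN is to absorb the batch-norm affine transform into the preceding convolution's weights and bias, so inference skips the per-element normalization. A minimal sketch of that arithmetic with NumPy (names and shapes are illustrative, not taken from this repo's code):

```python
import numpy as np

def fold_batch_norm(conv_w, conv_b, gamma, beta, mean, var, eps=1e-3):
    """Fold BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta into the
    preceding conv so that conv'(x) == BN(conv(x)).

    conv_w: (kh, kw, c_in, c_out) kernel (channels_last), conv_b: (c_out,) bias.
    """
    scale = gamma / np.sqrt(var + eps)         # per-output-channel scale
    folded_w = conv_w * scale                  # broadcasts over the c_out axis
    folded_b = (conv_b - mean) * scale + beta  # absorbed shift
    return folded_w, folded_b
```

After folding, the BatchNormalization layer can be dropped and the conv's weights replaced with `folded_w` / `folded_b`, which is where the inference-time saving comes from.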

Also, maybe we could export the graph from Keras to TensorFlow, freeze it, and then go even further with TensorRT?
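As a rough illustration of that export-and-freeze route (a sketch using the TF 1.x / standalone-Keras APIs that were current when this thread was written; module paths and function names may differ in newer versions):

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K

def freeze_keras_model(model, output_path='frozen_model.pb'):
    """Serialize a Keras model's TF graph with variables baked in as constants."""
    sess = K.get_session()
    output_names = [out.op.name for out in model.outputs]
    # Replace variables with constants so the GraphDef is self-contained.
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_names)
    with tf.gfile.GFile(output_path, 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
    return frozen_graph_def
```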

ZFTurbo commented 6 years ago

Yes, it's probably the same, but there are not enough details in the description.

srihari-humbarwadi commented 6 years ago

@mrgloom this is similar to what you were suggesting: https://github.com/srihari-humbarwadi/TensorRT-for-keras/blob/master/keras_freeze_model.py https://github.com/srihari-humbarwadi/TensorRT-for-keras/blob/master/optimize_graph.py

mrgloom commented 5 years ago

Looks like something similar is also implemented in coremltools: https://github.com/apple/coremltools/blob/da988c683bc466370181e4178b089aa6f07b138f/coremltools/converters/keras/_layers2.py#L502-L522

mrgloom commented 5 years ago

Yet another place where a similar technique is used: https://github.com/tensorflow/tensorflow/tree/9590c4c32dd4346ea5c35673336f5912c6072bf2/tensorflow/tools/graph_transforms#optimizing-for-deployment There it's called batch norm folding.
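For reference, the Graph Transform Tool exposes that folding as the `fold_batch_norms` / `fold_old_batch_norms` transforms, and it can also be driven from Python on a frozen GraphDef. A minimal sketch (TF 1.x; the input/output node names here are illustrative, and `frozen_graph_def` is assumed to come from a freeze step like the one above):

```python
from tensorflow.tools.graph_transforms import TransformGraph

transformed_graph_def = TransformGraph(
    frozen_graph_def,
    inputs=['input_1'],               # placeholder node name (illustrative)
    outputs=['predictions/Softmax'],  # output node name (illustrative)
    transforms=['fold_constants(ignore_errors=true)',
                'fold_batch_norms',
                'fold_old_batch_norms'])
```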