Closed: dchaley closed this 3 months ago
Note: the TF Keras docs recommend adding an annotation to the type rather than specifying it at load.
We tried this but ... no dice 🎲 Likely we need to mess with the module internals somehow, or we need it in the upstream DeepCell source.
I'm documenting this for posterity, but I'm not sure it's worth investigating further. If/when we use models with different custom objects, we may want to revisit. Basically, the tradeoff is whether it's OK to hardcode a list of supported custom objects. My bet: yes, for quite some time 🤔 (even if we retrain the model, it'll be on new data, not a new model structure)
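For context, the annotation approach the TF Keras docs recommend looks roughly like this. This is a sketch only: the `Location2D` below is a trivial stand-in stub, not the real `deepcell.layers.Location2D`, which is where the registration would actually need to live upstream for it to help us.

```python
import tensorflow as tf

# Register the custom layer type once, at definition time, instead of
# passing custom_objects to every load_model call. (Stand-in stub, not
# the real DeepCell layer.)
@tf.keras.utils.register_keras_serializable(package="deepcell")
class Location2D(tf.keras.layers.Layer):
    def call(self, inputs):
        return inputs

# Once registered, serialization round-trips without custom_objects:
config = tf.keras.layers.serialize(Location2D())
layer = tf.keras.layers.deserialize(config)  # no custom_objects needed
```

The catch is that the `@register_keras_serializable` decorator has to run before loading, on the class the saved model actually references, which for us means touching the upstream DeepCell source.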
LOL! It was HDF5 all along, not `.keras` (which was released in a later TensorFlow version).
The post *TensorFlow Performance: Loading Models* revealed just how much the model format affects its load time. This graph is the punchline:
We loaded the SavedModel format and simply resaved it as `.h5`. To reload it, we needed to reference the DeepCell layer type `Location2D` in the model loader. The results are astonishing! From local benchmarking:
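The resave-and-reload flow can be sketched like this. It's a minimal, self-contained sketch: `ScaleLayer` is a hypothetical stand-in for DeepCell's `Location2D` so the snippet runs without deepcell installed; in our code the `custom_objects` entry is `{"Location2D": Location2D}` and the source is the original SavedModel directory.

```python
import tensorflow as tf

# Hypothetical stand-in for deepcell.layers.Location2D.
class ScaleLayer(tf.keras.layers.Layer):
    def __init__(self, factor=2.0, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, inputs):
        return inputs * self.factor

    def get_config(self):
        return {**super().get_config(), "factor": self.factor}

# A small model containing the custom layer (in our case: the Mesmer
# model loaded from its SavedModel directory).
inputs = tf.keras.Input(shape=(4,))
outputs = ScaleLayer()(inputs)
model = tf.keras.Model(inputs, outputs)

# Resave in HDF5 format; the .h5 extension selects the format.
model.save("model.h5")

# Reloading the .h5 file requires naming the custom layer type.
reloaded = tf.keras.models.load_model(
    "model.h5", custom_objects={"ScaleLayer": ScaleLayer}
)
```

The `custom_objects` dict is exactly the "hardcoded list of supported custom objects" discussed above: one entry per custom layer type the saved model references.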
A reduction of ~11.5s, or ~90%. 🎉
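For reference, the local numbers came from plain wall-clock timing; a minimal harness (hypothetical helper, not the actual benchmark code) looks like:

```python
import time

def time_load(loader, label):
    """Time a model-loading callable and report wall-clock seconds."""
    start = time.perf_counter()
    model = loader()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.2f}s")
    return model, elapsed

# Usage sketch (hypothetical paths):
# time_load(lambda: tf.keras.models.load_model("path/to/saved_model"), "SavedModel")
# time_load(lambda: tf.keras.models.load_model("model.h5", custom_objects={...}), ".h5")
```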
This PR also cleans up some model fetching we were doing before the mesmer-in-pieces app became a proper module. Oops 🙈 Sorry @bnovotny
WIP for: #262

Open work: test & get load times on cloud