Open · reiinakano opened this issue 5 years ago
To decrease the actual size of the model served over the web, I quantized the weights to 4 significant digits by dumping the values to a JSON int16 array (base64-encoded). When I load them into tensorflow.js (well, a really old version of it, back when it was still called deeplearn.js), I convert them back to floats by dividing each value by 10000.
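For reference, a minimal sketch of that scheme in Python/NumPy (the function names here are mine, not from the repo):

```python
import base64
import json

import numpy as np

def quantize_weights(weights: np.ndarray) -> str:
    """Quantize float weights to a base64-encoded int16 blob.

    Note: int16 holds values in [-32768, 32767], so after the x10000
    scaling the weights must lie within roughly +/-3.2767.
    """
    q = np.round(weights * 10000).astype(np.int16)
    return base64.b64encode(q.tobytes()).decode("ascii")

def dequantize_weights(blob: str) -> np.ndarray:
    """Recover approximate float weights by dividing by 10000."""
    q = np.frombuffer(base64.b64decode(blob), dtype=np.int16)
    return q.astype(np.float32) / 10000.0

# Round trip: the recovered weights differ from the originals by at
# most half a quantization step, i.e. 0.00005.
w = np.random.randn(8).astype(np.float32) * 0.1
blob = json.dumps({"weights": quantize_weights(w)})
w2 = dequantize_weights(json.loads(blob)["weights"])
assert np.max(np.abs(w - w2)) <= 0.00005 + 1e-7
```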
With the newer tools available in tf.js, there are now much better ways to serve models efficiently over the web, and I believe there are plans to add these quantization schemes to the library itself.
Ah, that makes sense. I will stick with the original floats for stopping and restarting long-running training, then. Thanks!
If you are looking to train these models, I would recommend training the Python-based versions first in your environment, since they are already known to work on the two environments described in the paper. Good luck!
I've been playing around a bit with the world models code and using it in my own applications, and I noticed that your model-saving code converts the float weights to integers by multiplying by 10000. Is there a particular reason for saving to ints, and for the 10000 value specifically? I find that in some cases my model (where I've adapted the JSON saving code from here) screws up when I reload it from the saved weights. I suspect it's because of this loss in precision, but I'm not 100% sure. Just wondering if you've experienced any weirdness from this in the past.
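For anyone debugging the same thing, here is a hypothetical round-trip check (not from the repo; the stand-in weights are made up) that shows the error the scheme should introduce:

```python
import numpy as np

# Compare weights before saving and after reloading. If the max
# difference is around 5e-5, the round trip is behaving as expected and
# any misbehavior is genuinely down to the 4-digit precision. A much
# larger difference suggests overflow (weights outside +/-3.2767 do not
# fit in int16 after the x10000 scaling) or a bug in the save/load path.
original = np.random.randn(1000).astype(np.float32) * 0.1  # stand-in weights
reloaded = np.round(original * 10000).astype(np.int16).astype(np.float32) / 10000.0
print("max abs error:", np.max(np.abs(original - reloaded)))
```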