Open dm-mch opened 7 years ago
I created this class because I needed to calculate gradients with the local loss and variables, but apply the calculated gradients to the global variables. I also needed to share the "rms" and "momentum" slots among threads to implement SharedRMSProp.
I chose to create my own class, but it might be possible to use the standard RMSPropOptimizer by calling compute_gradients() with the local loss and variables, and then calling apply_gradients() with the resulting gradients and the global variables.
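A minimal sketch of that alternative in TF 1.x. The names local_loss, local_vars, and global_vars are assumptions here (e.g. a per-thread A3C network mirroring a shared global network), and this only yields shared slots if all threads use a single optimizer instance in one graph:

```python
import tensorflow as tf

# Assumed to exist: local_loss, local_vars (per-thread copies),
# and global_vars (the shared parameters, in matching order).
optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-4, decay=0.99)

# Compute gradients from the local (per-thread) loss and variables...
grads_and_vars = optimizer.compute_gradients(local_loss, var_list=local_vars)
grads = [g for g, v in grads_and_vars]

# ...but apply them to the shared global variables. The "rms" and
# "momentum" slots are then created for the global variables, so all
# threads share them -- the SharedRMSProp behaviour.
train_op = optimizer.apply_gradients(zip(grads, global_vars))
```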
Hello, thank you for your code! Why don't you use the standard apply_gradients() of the RMSProp optimizer? Is there a synchronization issue with multithreading? https://www.tensorflow.org/versions/master/api_docs/python/train/optimizers#Optimizer.apply_gradients