Closed terrytangyuan closed 8 years ago
@ilblackdragon Changed the structure a bit and added an example.
@ilblackdragon Do you have any idea why this is 0.8? https://travis-ci.org/google/skflow/jobs/99976442#L2539 It's 0.86667 every time on my local PC with the seed specified, and the tests pass locally. I can certainly lower the requirement to pass it on Travis, but this might indicate some internal issue?
Sorry, didn't have time to look at what's happening. Definitely we should figure out why this is happening, because results should be reproducible. It's fine for now to have lowered the testing score, though.
No worries at all. Take your time on these. I've made the suggested changes. :-)
@terrytangyuan hi.
Do you have any suggestions on how to customize the learning rate decay? I want to translate Caffe's inv policy (base_lr * (1 + gamma * iter) ^ (-power))
to TF, but have no idea where to start. Thank you in advance.
You can just literally write that formula with tensorflow ops:
learning_rate = tf.constant(base_lr) * tf.pow(1.0 + gamma * tf.cast(tf.train.get_global_step(), tf.float32), -power)
Also take a look at tf.train.exponential_decay
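For reference, here is the same inv schedule written as a standalone Python sketch (the helper name `inv_decay` is just for illustration), so you can sanity-check values against Caffe before wiring it up with TensorFlow ops and the global step:

```python
def inv_decay(base_lr, gamma, power, step):
    """Caffe's 'inv' policy: base_lr * (1 + gamma * step) ** (-power)."""
    return base_lr * (1.0 + gamma * step) ** (-power)

# The learning rate shrinks as the step count grows:
for step in (0, 100, 1000):
    print(step, inv_decay(base_lr=0.01, gamma=0.001, power=0.75, step=step))
```

In the TensorFlow version, `step` is replaced by the global step tensor so the rate updates automatically during training.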
In general, please ask questions on Stack Overflow or the TensorFlow discussion group - there are more people looking at those.
This repo will soon be closed, as skflow is part of TensorFlow now.
OK, thanks a lot.
Current coverage is 92.33%