tensorflow / skflow

Simplified interface for TensorFlow (mimicking Scikit Learn) for Deep Learning
Apache License 2.0

Custom Decay Function for Learning Rate #65

Closed: terrytangyuan closed this pull request 8 years ago

codecov-io commented 8 years ago

Current coverage is 92.33%

Merging #65 into master will increase coverage by +0.01% as of c7a485a

@@            master     #65   diff @@
======================================
  Files           33      34     +1
  Stmts         1043    1070    +27
  Branches         0       0       
  Methods          0       0       
======================================
+ Hit            963     988    +25
  Partial          0       0       
- Missed          80      82     +2

Review entire Coverage Diff as of c7a485a

Powered by Codecov. Updated on successful CI builds.

terrytangyuan commented 8 years ago

@ilblackdragon Changed the structure a bit and added an example.
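
For reference, here is a minimal sketch in the spirit of the example added in this PR, assuming the callable learning_rate API it introduces; the hyperparameter values are illustrative:

import skflow
import tensorflow as tf
from sklearn import datasets

iris = datasets.load_iris()

# Custom decay: receives the model's global step tensor and
# returns a learning rate tensor.
def exp_decay(global_step):
    return tf.train.exponential_decay(
        learning_rate=0.1, global_step=global_step,
        decay_steps=100, decay_rate=0.96)

classifier = skflow.TensorFlowDNNClassifier(
    hidden_units=[10, 20, 10], n_classes=3, steps=800,
    learning_rate=exp_decay)
classifier.fit(iris.data, iris.target)

skflow calls the decay function with the global step when building the training op, so any schedule expressible in TensorFlow ops works.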

terrytangyuan commented 8 years ago

@ilblackdragon Do you have any idea why this is 0.8? https://travis-ci.org/google/skflow/jobs/99976442#L2539 It's 0.86667 every time on my local PC with the seed specified, and the tests pass locally. I can certainly lower the threshold to make it pass on Travis, but this might indicate some internal issue?

ilblackdragon commented 8 years ago

Sorry, didn't have time to look at what's happening. We should definitely figure out why this is happening, because results should be reproducible. It's fine for now to lower the testing threshold, though.

terrytangyuan commented 8 years ago

No worries at all. Take your time on these. I've made the suggested changes. :-)

silverlining21 commented 7 years ago

@terrytangyuan Hi. Do you have any suggestions for how to customize the learning rate decay? I want to translate Caffe's inv policy (base_lr * (1 + gamma * iter) ^ (-power)) to TF, but have no idea where to start. Thank you in advance.

ilblackdragon commented 7 years ago

You can write that formula literally with TensorFlow ops:

step = tf.cast(tf.train.get_global_step(), tf.float32)  # "iter" in the Caffe formula; cast from int64
learning_rate = base_lr * tf.pow(1.0 + gamma * step, -power)
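
A sketch of wrapping this as the callable decay function described above (base_lr, gamma, and power are placeholders for your Caffe values):

def inv_decay(global_step):
    # Caffe "inv" policy: base_lr * (1 + gamma * iter) ^ (-power)
    step = tf.cast(global_step, tf.float32)
    return base_lr * tf.pow(1.0 + gamma * step, -power)

Then pass learning_rate=inv_decay to the estimator.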

Also take a look at tf.train.exponential_decay
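
For completeness, a standalone sketch of tf.train.exponential_decay outside skflow, assuming a loss tensor is already defined and with placeholder values:

global_step = tf.Variable(0, trainable=False)
# decayed rate = 0.1 * 0.96 ** (global_step / 1000);
# staircase=True truncates the exponent to an integer, so the rate decays in discrete steps.
learning_rate = tf.train.exponential_decay(
    0.1, global_step, decay_steps=1000, decay_rate=0.96, staircase=True)
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)  # passing global_step makes minimize() increment it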

In general, please ask questions on Stack Overflow or the TensorFlow discussion group; more people are looking at those.

This repo will soon be closed, as skflow is now part of TensorFlow.

silverlining21 commented 7 years ago

OK, thanks a lot.