lucfra / FAR-HO

Gradient based hyperparameter optimization & meta-learning package for TensorFlow
MIT License

pytorch support #12

Open ifangcheng opened 5 years ago

ifangcheng commented 5 years ago

This is a nice package for HPO, but it seems to be built only on TF. Just wondering, is there any version that supports PyTorch? I think that would make the package more popular among HPO researchers!

lucfra commented 5 years ago

Thanks @ifangcheng !

In fact, someone is working on it, but I don't know when it will be ready. I presume that at least part of this package (the reverse-mode hypergradient) should be easier to implement in PyTorch since, as far as I know, PyTorch supports dynamic computational graphs by default. Unfortunately I don't have much experience with PyTorch myself... but, of course, if you want to help and get involved, you'd be very welcome! :-)
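To give a rough idea of why dynamic graphs make this easier, here is a minimal sketch of a reverse-mode hypergradient obtained by unrolling a few SGD steps in PyTorch. This is not FAR-HO code, just a toy illustration; the data, the model, and the choice of the L2 regularization strength as the hyperparameter are all made up:

```python
# Toy sketch (not FAR-HO code): reverse-mode hypergradient via unrolling.
import torch

# Hyperparameter: L2 regularization strength (illustrative choice).
lam = torch.tensor(0.1, requires_grad=True)

# Made-up train/validation data.
x_tr, y_tr = torch.randn(32, 5), torch.randn(32, 1)
x_val, y_val = torch.randn(32, 5), torch.randn(32, 1)

# Inner parameters, updated functionally so the whole
# optimization trajectory stays inside the autograd graph.
params = torch.zeros(5, 1, requires_grad=True)
lr = 0.01

for _ in range(10):  # a few unrolled inner SGD steps
    train_loss = ((x_tr @ params - y_tr) ** 2).mean() + lam * (params ** 2).sum()
    grad, = torch.autograd.grad(train_loss, params, create_graph=True)
    params = params - lr * grad  # differentiable update, no in-place op

# Outer objective: validation loss of the final iterate.
val_loss = ((x_val @ params - y_val) ** 2).mean()

# Reverse-mode hypergradient: d(val_loss)/d(lam) through the unrolled steps.
hypergrad, = torch.autograd.grad(val_loss, lam)
print(hypergrad)
```

The key point is that each inner update stays in the graph (no in-place ops), so a single backward pass through the unrolled trajectory yields the hypergradient.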

Best, Luca

AntoineHX commented 4 years ago

Hi,

Do you have any news on the PyTorch implementation? I'm really interested because I'd like to use specifically the reverse-mode HG in PyTorch! :D Thanks for your great work, it's still very helpful even if it's TF-only!

Best, Antoine

lucfra commented 4 years ago

Hi @AntoineHX

We are shortly releasing a PyTorch implementation of an algorithm related to RTHO (real-time hyperparameter optimization) for optimizing learning rate schedules online. Unfortunately, at the moment we have no specific short-term plans to release a general implementation/wrappers for reverse-mode HG in PyTorch, but this may change in the future. I'll keep you updated!
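For a flavor of what "optimizing the learning rate online" means, here is a minimal sketch. It is closer to hypergradient descent (Baydin et al.) than to RTHO itself, and every name in it is illustrative rather than taken from our code:

```python
# Toy sketch (not the upcoming RTHO code): adapting the learning rate online
# from gradient information, in the style of hypergradient descent.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(5, 1)
params = list(model.parameters())
lr, hyper_lr = 0.01, 1e-4
prev_grads = None

for step in range(100):
    x, y = torch.randn(16, 5), torch.randn(16, 1)  # stand-in minibatch
    loss = F.mse_loss(model(x), y)
    grads = torch.autograd.grad(loss, params)

    if prev_grads is not None:
        # d(loss_t)/d(lr) = -<grad_t, grad_{t-1}>; descend on the LR too.
        h = -sum((g * pg).sum() for g, pg in zip(grads, prev_grads))
        lr -= hyper_lr * h.item()

    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= lr * g  # plain SGD step with the current (adapted) LR

    prev_grads = [g.detach() for g in grads]
```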

Cheers, Luca

lucfra commented 4 years ago

Hi all,

just wanted to post the link to a new package for gradient-based hyperparameter tuning in PyTorch:

https://github.com/awslabs/adatune

Right now it only contains the code for running an enhanced version of RTHO, but it may grow in the coming months.

Cheers, Luca