Open PonteIneptique opened 4 years ago
Rather than issues, I'd be looking for feedback on my implementation :) There is some stuff I still have to do but this looks promising.
Hey. Nice you got it wrapped up. I have two comments on this.
1. I'd really try to avoid duplicating classes/scripts just because of the optimization (in this case, the Trainer class and the train script). The train script is perhaps less problematic, but the Trainer class is meant to be very general, and I'd prefer not to have to maintain two classes that do almost the same thing. If the Trainer class has to adapt slightly for it, I'd be fine with that.
2. Many of the parameters we'd like to optimize are nested in the config file. You can have a look at how I deal with this in the optimize.py file and perhaps try something along those lines. There is already code to parse the opt.json files, so I think that could be reused.
Regarding 1., I agree, but I'm unsure about the way forward. Technically, I reused as much as I could from the Trainer class; most of the duplicated code comes directly from the train script. Maybe the Trainer class could have a setup(settings) method, though, which would further reduce the need for duplication.
As for 2., I technically deal with nested parameters using paths ("lr/patience"). I didn't include an example though. I'll look at your implementation for this.
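For what it's worth, the slash-path idea can be sketched like this (the helper names `set_nested`/`get_nested` are illustrative, not pie's actual API):

```python
# Illustrative sketch: address nested config parameters with slash-separated
# paths like "lr/patience", so sampled values can be written back into the
# settings dict. Helper names are hypothetical, not part of pie.

def set_nested(config, path, value):
    """Set config[k1][k2]... for a path like 'k1/k2', creating dicts as needed."""
    keys = path.split("/")
    node = config
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return config

def get_nested(config, path):
    """Read a nested value addressed by a slash-separated path."""
    node = config
    for key in path.split("/"):
        node = node[key]
    return node

settings = {"lr": {"patience": 2, "factor": 0.5}}
set_nested(settings, "lr/patience", 5)
print(get_nested(settings, "lr/patience"))  # 5
```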
I think something along the following lines would be neat:

```python
trainer, trainset, devset, encoder, models = Trainer.setup(settings)
```
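As a rough sketch of what such a factory-style entry point could look like (everything below is a hypothetical stub, not pie's actual Trainer; the construction steps are placeholders):

```python
# Hypothetical sketch of a unified Trainer.setup(settings) entry point.
# The construction steps (dataset loading, encoder fitting, model building)
# are stand-ins for whatever the real train script currently does.

class Trainer:
    def __init__(self, settings, trainset, devset, encoder, models):
        self.settings = settings
        self.trainset, self.devset = trainset, devset
        self.encoder, self.models = encoder, models

    @classmethod
    def setup(cls, settings):
        """Build everything a training run needs from a settings dict."""
        trainset = settings.get("trainset")          # placeholder: load dataset
        devset = settings.get("devset")              # placeholder: load dataset
        encoder = {"fitted_on": trainset}            # placeholder label encoder
        models = ["model"]                           # placeholder model list
        trainer = cls(settings, trainset, devset, encoder, models)
        return trainer, trainset, devset, encoder, models

trainer, trainset, devset, encoder, models = Trainer.setup(
    {"trainset": "train.tsv", "devset": "dev.tsv"}
)
```

This keeps one construction path shared by the train script and the optimization script, instead of duplicating the setup logic.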
I reworked the API to go towards a more unified training API. If you agree with it, I'll apply it to other scripts.
Actually, better to use "devices", since it wouldn't have to be a CUDA device after all.
On Sat, Jun 27, 2020 at 5:44 PM Thibault Clérice notifications@github.com wrote:
@PonteIneptique commented on this pull request.
In pie/scripts/tune.py https://github.com/emanjavacas/pie/pull/67#discussion_r446539022:
```python
def create_tuna_optimization(trial: optuna.Trial, fn: str, name: str, value: List[Any]):
    """ Generate tuna value generator

    Might use self one day, so...

    :param trial:
    :param fn:
    :param name:
    :param value:
    :return:
    """
    return getattr(trial, fn)(name, *value)


class Optimizer(object):
    def __init__(self, settings, optimization_settings: List[Dict[str, Any]], gpus: List[int] = []):
```
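For illustration, the `getattr` dispatch in that hunk can be exercised without optuna installed, using a stub trial object (the `StubTrial` class and the `(fn, name, value)` entry shapes are assumptions for the sketch, not part of the PR):

```python
# Sketch of the getattr-based dispatch: each optimization entry names a
# trial method ("fn"), a parameter name, and positional arguments, roughly
# like entries parsed from opt.json. StubTrial is a minimal stand-in for
# optuna.Trial, not optuna's actual class.
import random
from typing import Any, List

class StubTrial:
    """Minimal stand-in exposing two suggest-style sampling methods."""
    def suggest_uniform(self, name: str, low: float, high: float) -> float:
        return random.uniform(low, high)

    def suggest_categorical(self, name: str, choices: List[Any]) -> Any:
        return random.choice(choices)

def create_tuna_optimization(trial, fn: str, name: str, value: List[Any]):
    # Same dispatch as in the diff: look up the sampling method by name
    # and call it with the parameter name plus the unpacked value list.
    return getattr(trial, fn)(name, *value)

trial = StubTrial()
lr = create_tuna_optimization(trial, "suggest_uniform", "lr/lr", [1e-4, 1e-2])
cell = create_tuna_optimization(trial, "suggest_categorical", "cell", [["LSTM", "GRU"]])
```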
I can definitely use cudas (plural).
-- Enrique Manjavacas
Thanks for the review ;)
Ok, this should be ready for another look. I addressed all your concerns (I think). This should be neat :)
All in all, I think this one is ready.
It's a work in progress for now; I'll need to implement multi-GPU support, but I don't have access to multiple GPUs right now.