Lightning-AI / pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

Pass parameters on train_step/validation_step/test_step #2508

Closed VinhLoiIT closed 4 years ago

VinhLoiIT commented 4 years ago

🚀 Feature

Allow extra parameters to be passed to train_step/validation_step/test_step, as in the title.

Motivation

I'm currently working with a seq-to-seq architecture, which requires a variable called max_length when decoding outputs. During training it can be fixed as a model hyperparameter, but during testing we may want to vary it to make predictions longer or shorter as needed. Therefore, I think there should be a way to pass additional arguments in the validation/testing phase to make it more flexible, especially with argparse. This would also help when we use different strategies during evaluation, for example choosing between greedy decoding and beam search.

For example: I could run

# train the model with max_length = 15 and greedy decoding (the default) to save training time
python train.py --max_length 15

# evaluate the model with a longer max_length and beam search to improve performance
python eval.py epoch=0.ckpt --max_length 20 --using_beamsearch
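
One way to get this behaviour today is to keep the decoding settings in self.hparams and let the eval script overwrite them on the loaded model before calling trainer.test(). A minimal sketch, assuming a hypothetical Seq2SeqModel with a decode method (these names are mine, not part of Lightning; they just mirror the commands above):

from argparse import ArgumentParser
import pytorch_lightning as pl

class Seq2SeqModel(pl.LightningModule):
    def __init__(self, max_length: int = 15, use_beamsearch: bool = False):
        super().__init__()
        # stores max_length / use_beamsearch in self.hparams and in the checkpoint
        self.save_hyperparameters()

    def test_step(self, batch, batch_idx):
        # decoding is driven by hyperparameters rather than by extra step arguments
        return self.decode(batch,
                           max_length=self.hparams.max_length,
                           beam=self.hparams.use_beamsearch)

# eval.py (sketch): override the training-time values from the command line
parser = ArgumentParser()
parser.add_argument("checkpoint")
parser.add_argument("--max_length", type=int, default=20)
parser.add_argument("--using_beamsearch", action="store_true")
args = parser.parse_args()

model = Seq2SeqModel.load_from_checkpoint(args.checkpoint)
model.hparams.max_length = args.max_length
model.hparams.use_beamsearch = args.using_beamsearch
pl.Trainer().test(model)
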
github-actions[bot] commented 4 years ago

Hi! Thanks for your contribution, great first issue!

ydcjeff commented 4 years ago

@VinhLoiIT you could take a look at the argparse docs and the hyperparameters best-practices guide.
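
For reference, the pattern those docs describe looks roughly like this. Note that add_model_specific_args is a user-defined static method, not a Lightning built-in, and the argument names here just mirror the commands above:

from argparse import ArgumentParser
import pytorch_lightning as pl

class Seq2SeqModel(pl.LightningModule):
    def __init__(self, max_length: int = 15, use_beamsearch: bool = False, **kwargs):
        super().__init__()
        self.save_hyperparameters()

    @staticmethod
    def add_model_specific_args(parent_parser):
        # each module declares its own arguments; scripts only compose parsers
        parser = ArgumentParser(parents=[parent_parser], add_help=False)
        parser.add_argument("--max_length", type=int, default=15)
        parser.add_argument("--using_beamsearch", action="store_true")
        return parser

# train.py and eval.py build their parsers the same way, so each script
# can expose its own defaults (e.g. a longer max_length for evaluation)
parser = ArgumentParser()
parser = Seq2SeqModel.add_model_specific_args(parser)
args = parser.parse_args()
model = Seq2SeqModel(max_length=args.max_length, use_beamsearch=args.using_beamsearch)
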

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!