Closed · alphinside closed this issue 4 years ago
I propose keeping the augmentation config inside the designated dataset section (train or eval) rather than in the main config, so that it is more flexible and less confusing.
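As a rough illustration of the proposal, the experiment file could nest augmentations under each dataset instead of at the top level. This is a hypothetical sketch only; the field names below are assumptions, not Vortex's actual schema.

```yaml
dataset:
  train:
    name: ImageFolder
    args:
      root: data/train
    augmentations:            # per-dataset augmentation, as proposed
      - module: albumentations
        args:
          transforms:
            - transform: HorizontalFlip
              args: {p: 0.5}
  eval:
    name: ImageFolder
    args:
      root: data/val
    # no `augmentations` key: eval stays deterministic unless explicitly set
```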
Note that the current implementation only supports augmentation on the train dataset, so it would be better to have it on the eval dataset as well, since some applications might also need that. cc @alphinside @alifahrri
I have never heard of a requirement to augment the validation dataset, other than flipping for face recognition datasets. Also, the augmentations in train are on-the-fly augmentations that rely on randomness, which shouldn't be applied to a validation dataset. In my opinion it is better to encourage users to augment their validation dataset outside Vortex (offline) rather than on the fly (online). So for now I disagree with this.
The use case I can think of is test-time augmentation, if we were to support that.
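For context, test-time augmentation (TTA) runs the model on several augmented views of an input and aggregates the predictions. The sketch below is a minimal, generic illustration of the idea; `tta_predict`, the toy model, and the flip helper are all made up for this example and are not Vortex APIs.

```python
# Hypothetical TTA sketch: predict on the original input plus augmented
# views, then average the predictions.

def horizontal_flip(image):
    """Flip a 2D image (list of rows) left-to-right."""
    return [row[::-1] for row in image]

def tta_predict(model, image, augmentations):
    """Average model outputs over the original and each augmented view."""
    views = [image] + [aug(image) for aug in augmentations]
    preds = [model(v) for v in views]
    return sum(preds) / len(preds)

# Toy model: sum of all pixels (flip-invariant, so TTA changes nothing here).
toy_model = lambda img: sum(sum(row) for row in img)

result = tta_predict(toy_model, [[1, 2], [3, 4]], [horizontal_flip])
# result == 10.0
```

Supporting this would be one concrete reason to allow augmentation config outside the train dataset.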
The issue description has been updated to the latest format.
Is your feature request related to a problem? Please describe. Some experiment file fields are not representative and somewhat confusing, so several elements need refactoring.
Describe the solution you'd like Proposed new structure:
Additional:
models with a 'backbone' param should rename the 'pretrained' arg to 'backbone_pretrained', to avoid confusion with a pretrained version of the model itself
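The rename could look like the following. This is a hypothetical before/after sketch; the exact model name and key layout are assumptions.

```yaml
# before: ambiguous, could mean a pretrained full model
model:
  network_args:
    backbone: resnet18
    pretrained: true

# after: explicit that only the backbone weights are pretrained
model:
  network_args:
    backbone: resnet18
    backbone_pretrained: true
```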
Describe alternatives you've considered Experiment file checking should now be done in each pipeline; there is no need for centralized checking. So this needs to be updated: possibly delete
vortex.utils.parser.parser
and move the checking into each of the pipelines.
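A minimal sketch of what per-pipeline checking could look like, assuming each pipeline validates only the top-level config sections it actually needs. The function and class names below are illustrative assumptions, not Vortex's real API.

```python
# Hypothetical per-pipeline config validation, replacing one centralized parser.

class ConfigError(ValueError):
    """Raised when an experiment config is missing required sections."""

def check_fields(config: dict, required: list, pipeline: str) -> None:
    missing = [f for f in required if f not in config]
    if missing:
        raise ConfigError(f"{pipeline} pipeline: missing config sections {missing}")

def check_train_config(config: dict) -> None:
    # training needs model, dataset, and trainer settings
    check_fields(config, ["model", "dataset", "trainer"], "train")

def check_validate_config(config: dict) -> None:
    # validation only needs model and dataset settings
    check_fields(config, ["model", "dataset"], "validate")

cfg = {"model": {}, "dataset": {}}
check_validate_config(cfg)   # passes: validation does not need `trainer`
try:
    check_train_config(cfg)  # fails: `trainer` section is missing
except ConfigError as e:
    print(e)
```

Each pipeline stays self-contained this way, and adding a new pipeline does not require touching a shared parser module.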