What
Make the RandAugment and Asymmetric Loss hyperparameters tunable
Why
To achieve a higher score, we need to tune the hyperparameters the training is sensitive to. Far more could be tuned, but given the limits on time and resources, it is better to focus on efficient hyperparameters: those with large potential impact that apply across many situations.
According to the RandAugment and Asymmetric Loss papers, both methods have such sensitive hyperparameters, i.e. hyperparameters with real potential (e.g. the optimal augmentation strength depends on dataset size and model complexity). Moreover, a loss function and an augmentation policy are needed in every training run, whatever the data or model, so these knobs cover all situations.
In conclusion, tuning augmentation and loss satisfies both criteria: potential and coverage.
How
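One way to expose these knobs is to gather them in a single config object that the search procedure can mutate. A minimal sketch of this idea (all names here are hypothetical; the loss follows the formula from the Asymmetric Loss paper, with RandAugment's N and M kept as plain config fields):

```python
import math
from dataclasses import dataclass

@dataclass
class TuneConfig:
    # RandAugment: number of ops per image and global magnitude (the paper's N and M)
    ra_num_ops: int = 2
    ra_magnitude: int = 9
    # Asymmetric Loss: focusing parameters and probability margin (clip)
    asl_gamma_pos: float = 0.0
    asl_gamma_neg: float = 4.0
    asl_clip: float = 0.05

def asymmetric_loss(p: float, target: int, cfg: TuneConfig) -> float:
    """Per-label asymmetric loss for one sigmoid probability p in (0, 1)."""
    if target == 1:
        # positive branch: down-weight easy positives by (1 - p)^gamma_pos
        return -((1.0 - p) ** cfg.asl_gamma_pos) * math.log(p)
    # negative branch: shift the probability down by the clip margin,
    # which fully discards very easy negatives (p <= clip)
    p_m = max(p - cfg.asl_clip, 0.0)
    return -(p_m ** cfg.asl_gamma_neg) * math.log(1.0 - p_m)

cfg = TuneConfig(asl_gamma_neg=2.0)
loss = asymmetric_loss(0.9, 1, cfg)  # with gamma_pos=0 this is -log(0.9) ≈ 0.105
```

A tuner then only needs to sample `TuneConfig` fields; the training code reads `ra_num_ops`/`ra_magnitude` when building the augmentation pipeline and the `asl_*` fields when building the loss, so both are tuned through one interface.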