MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
Our current API has two dropout-related limitations:

1. In the external tuning ruleset we read the dropout value from the hparam config and pass it to the model initialization functions, but in the self-tuning ruleset there exists no convenient way to specify the dropout value at model initialization.
2. There is no way to change the dropout value during training.
Adding a workload function that submitters can call to change the dropout value would remove both of these limitations, as sketched below.
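
A minimal sketch of what such a function could look like, assuming a hypothetical `Workload.set_dropout` method and a PyTorch-style model; the actual name, signature, and framework handling in the repo may differ:

```python
# Hypothetical sketch of the proposed workload API, not the repo's
# actual implementation.
import torch.nn as nn


class Workload:
    """Base workload exposing a dropout-update hook for submitters."""

    def set_dropout(self, model: nn.Module, dropout_rate: float) -> None:
        # Walk the model and update every dropout module in place, so
        # the new rate takes effect on the next forward pass. This
        # covers both rulesets: self-tuning submissions can set an
        # initial value, and any submission can change it mid-training.
        for module in model.modules():
            if isinstance(module, nn.Dropout):
                module.p = dropout_rate


# Example: a submitter anneals dropout during training, e.g. from
# inside update_params.
model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))
Workload().set_dropout(model, dropout_rate=0.1)
assert model[1].p == 0.1
```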