Hey,
as stated in the paper, the learning rate drops after every 5 epochs. How should I set the learning rate scheduler strategy: something like "Poly LR" or "Step LR"?

The paper says:

"During the training process, the Adam optimizer with a learning rate of 1 × 10⁻⁴ is applied. Based on the GPU memory, the batch size is set to 8 for 15 epochs, and the learning rate drops after every 5 epochs."
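For reference, here is a minimal sketch of what I imagine that setup looks like in PyTorch with StepLR, since the paper's wording matches a step schedule. The decay factor `gamma=0.1` and the `nn.Linear` stand-in model are my assumptions; the paper does not state them.

```python
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

# Hypothetical stand-in; replace with the paper's actual network.
model = nn.Linear(10, 1)

# Settings stated in the paper: Adam, lr = 1e-4, 15 epochs, batch size 8.
optimizer = Adam(model.parameters(), lr=1e-4)

# StepLR drops the LR every 5 epochs; gamma=0.1 is my guess,
# since the paper does not give the decay factor.
scheduler = StepLR(optimizer, step_size=5, gamma=0.1)

for epoch in range(15):
    # ... training loop over batches of size 8 goes here ...
    optimizer.step()   # placeholder; normally called once per batch
    scheduler.step()   # advance the schedule once per epoch
```

Is this roughly what was used, and what was the actual decay factor?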