Open oonisim opened 1 year ago
While the original YOLOv1 paper used SGD with momentum and weight decay, it's worth noting that the choice of optimizer can be a hyperparameter and may not be set in stone.
Adam is an adaptive optimizer that can converge faster than SGD with momentum in some cases. Adam adapts the effective step size for each parameter using running estimates of the gradient's first moment (mean) and second moment (uncentered variance), which helps when the gradient magnitudes for different parameters vary significantly.
In contrast, SGD with momentum keeps a single global learning rate and only smooths the update direction with a moving average of the gradient, which can be less effective when gradient scales differ widely across parameters. Therefore, Adam can be a good choice for networks with many parameters and complex architectures like YOLOv1.
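To make the distinction concrete, here is a minimal NumPy sketch of the two update rules (illustrative only, not taken from this repository; the hyperparameter values are typical defaults, not the ones used in train.py):

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=1e-3, momentum=0.9):
    # One moving average of the gradient; every parameter shares the same lr.
    velocity = momentum * velocity + grad
    return w - lr * velocity, velocity

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Running first moment (mean) and second moment (uncentered variance),
    # kept per parameter.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias correction
    v_hat = v / (1 - beta2 ** t)
    # The division by sqrt(v_hat) gives each parameter its own effective step size.
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```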
Additionally, while the original YOLOv1 paper used SGD with momentum, subsequent work has shown that Adam can outperform SGD in some settings, especially for deep models with complex architectures. So the choice of optimizer ultimately depends on the specific problem and the architecture of the network.
@oonisim
train.py sets the model training optimizer to Adam.
According to the v1 paper, training uses momentum and weight decay, which suggests SGD + momentum.
Please clarify why you chose to use Adam instead of SGD + momentum.
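For reference, this is roughly the difference I had in mind (a sketch only, assuming a PyTorch-style train.py; the momentum/decay values are the ones quoted in the v1 paper, not values read from this repository):

```python
import torch

def build_optimizer(model, name="adam", lr=1e-3):
    if name == "sgd":
        # Paper-style setting: SGD with momentum 0.9 and weight decay 0.0005.
        return torch.optim.SGD(
            model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4
        )
    # Current choice in train.py: Adam.
    return torch.optim.Adam(model.parameters(), lr=lr)
```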