aladdinpersson / Machine-Learning-Collection

A resource for learning about Machine learning & Deep Learning
https://www.youtube.com/c/AladdinPersson
MIT License

YOLO v1 - why using Adam as the optimizer #141

Open oonisim opened 1 year ago

oonisim commented 1 year ago

train.py sets the optimizer for training the model to Adam:

def main():
    # Excerpt from train.py; Yolov1, DEVICE, LEARNING_RATE, and
    # WEIGHT_DECAY are defined elsewhere in the same file.
    model = Yolov1(split_size=7, num_boxes=2, num_classes=20).to(DEVICE)
    optimizer = optim.Adam(
        model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY
    )

According to the YOLOv1 paper, training uses momentum and weight decay, which suggests SGD with momentum:

"We train the network for about 135 epochs on the training and validation data sets from PASCAL VOC 2007 and 2012. When testing on 2012 we also include the VOC 2007 test data for training. Throughout training we use a batch size of 64, a momentum of 0.9 and a decay of 0.0005."
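
For reference, a paper-faithful setup in the same style would look roughly like the sketch below (not code from the repo; model and LEARNING_RATE are assumed to be the same objects as in train.py, and the paper's learning rate schedule over the 135 epochs is not shown):

import torch.optim as optim

# Hypothetical alternative matching the paper's hyperparameters:
# momentum 0.9 and weight decay 0.0005.
optimizer = optim.SGD(
    model.parameters(),
    lr=LEARNING_RATE,
    momentum=0.9,
    weight_decay=5e-4,
)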

Please clarify why you chose Adam instead of SGD with momentum.

lmalkam commented 1 year ago

While the original YOLOv1 paper used SGD with momentum and weight decay, the choice of optimizer is itself a hyperparameter and is not set in stone.

Adam is an adaptive optimizer that can converge faster than SGD with momentum in some cases. For each parameter it keeps a running estimate of the gradient's first moment (its mean) and second moment (its uncentered variance), and uses them to scale the step size per parameter. This helps when gradient magnitudes vary significantly across parameters.
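
Concretely, the update looks roughly like this (a simplified sketch of the Adam rule, using PyTorch's defaults beta1=0.9, beta2=0.999, eps=1e-8; m, v, grad, t, lr, and param are illustrative names for one parameter's state, not identifiers from the repo):

# Simplified sketch of Adam's update for a single parameter.
m = beta1 * m + (1 - beta1) * grad         # first moment: running mean of gradients
v = beta2 * v + (1 - beta2) * grad ** 2    # second moment: running mean of squared gradients
m_hat = m / (1 - beta1 ** t)               # correct the bias from zero initialization
v_hat = v / (1 - beta2 ** t)
param = param - lr * m_hat / (v_hat ** 0.5 + eps)  # step size adapts per parameter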

In contrast, SGD with momentum uses a single global learning rate and accumulates an exponentially decaying average of past gradients (the velocity); it does not adapt the step size per parameter, so it can require more careful tuning when gradient scales differ across the network. Adam can therefore be a convenient choice for networks with many parameters and complex architectures like YOLOv1.
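
For comparison, the momentum update (as PyTorch implements it with its default dampening of 0; again, illustrative names) is roughly:

velocity = momentum * velocity + grad  # accumulate past gradients
param = param - lr * velocity          # one global learning rate for every parameter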

Subsequent research has also shown that Adam can outperform SGD with momentum in some cases, especially for deep models with complex architectures. The choice of optimizer therefore depends on the specific problem and the architecture of the network.

@oonisim