
Ultralytics YOLO11 🚀
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Allow Custom Optimizer During Training #6260

Closed. AtomicCactus closed this issue 10 months ago.

AtomicCactus commented 12 months ago


Description

In ultralytics/engine/trainer.py there is a conditional that lets the user select from several supported optimizers:

if name in ('Adam', 'Adamax', 'AdamW', 'NAdam', 'RAdam'):
    optimizer = getattr(optim, name, optim.Adam)(g[2], lr=lr, betas=(momentum, 0.999), weight_decay=0.0)
elif name == 'RMSProp':
    optimizer = optim.RMSprop(g[2], lr=lr, momentum=momentum)
elif name == 'SGD':
    optimizer = optim.SGD(g[2], lr=lr, momentum=momentum, nesterov=True)

The proposal is to overload the optimizer= argument of the train() method to accept either a str or an instance of torch.optim.Optimizer, allowing users to pass in a custom optimizer. The trainer code could then check whether the optimizer is compatible with the scheduler and similar machinery, and throw an error if there is an issue.
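A minimal sketch of what such a dispatch could look like (hypothetical: the real build_optimizer in ultralytics/engine/trainer.py has a different signature and builds its own parameter groups, so the names here are illustrative):

import torch.optim as optim

def build_optimizer(params, optimizer='SGD', lr=0.01, momentum=0.937):
    """Accept either an optimizer name (str) or a ready-built Optimizer instance."""
    if isinstance(optimizer, optim.Optimizer):
        # User supplied a custom optimizer; use it as-is.
        return optimizer
    if optimizer in ('Adam', 'Adamax', 'AdamW', 'NAdam', 'RAdam'):
        return getattr(optim, optimizer)(params, lr=lr, betas=(momentum, 0.999), weight_decay=0.0)
    if optimizer == 'RMSProp':
        return optim.RMSprop(params, lr=lr, momentum=momentum)
    if optimizer == 'SGD':
        return optim.SGD(params, lr=lr, momentum=momentum, nesterov=True)
    raise ValueError(f'Unsupported optimizer: {optimizer}')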

Use case

There are newer optimizers out there, such as Adan and AdaBelief, with more coming out almost monthly. It would be really great to be able to benchmark these newer optimizers against the tried and true AdamW or SGD with YOLO models, as some of these optimizers claim faster convergence properties.
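Under the proposed API, user code might look like this (a sketch: the adan_pytorch import is a hypothetical third-party package, and passing an instance to optimizer= is exactly the feature being requested, not current behavior):

from ultralytics import YOLO
from adan_pytorch import Adan  # hypothetical third-party Adan implementation

model = YOLO('yolov8n.pt')
# The proposal: optimizer= accepts an Optimizer instance, not just a name string.
custom_opt = Adan(model.model.parameters(), lr=1e-3)
model.train(data='coco128.yaml', epochs=10, optimizer=custom_opt)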

Another reason to use a custom optimizer would be to "optimize the optimizer": in other words, using techniques like torch.compile(), torch.jit.script(), or the torch._foreach_* ops, or writing custom CUDA kernels, either by hand or with something like the Taichi framework, to speed up the optimizer itself.
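As a rough illustration of that direction, assuming PyTorch >= 2.0 (the foreach= flag and torch.compile are standard PyTorch features, not Ultralytics APIs):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)
# foreach=True selects the multi-tensor implementation, batching per-parameter ops.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, foreach=True)

model(torch.randn(4, 10)).sum().backward()

@torch.compile  # PyTorch >= 2.0: can fuse the optimizer update into fewer kernels
def step():
    opt.step()

step()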

Additional

No response


github-actions[bot] commented 12 months ago

👋 Hello @AtomicCactus, thank you for your interest in YOLOv8 🚀! We recommend that new users visit the YOLOv8 Docs, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Notebooks with free GPU: Google Colab and Kaggle
Google Cloud Deep Learning VM (see the GCP Quickstart Guide)
Amazon Deep Learning AMI (see the AWS Quickstart Guide)
Docker Image (see the Docker Quickstart Guide)

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 11 months ago

@AtomicCactus thanks for bringing this up, and I appreciate your thorough explanation of the benefits of allowing custom optimizers in YOLOv8.

The idea of allowing users to incorporate experimental optimizers for benchmarking performance against established ones is indeed valuable for the research and developer community. However, integrating external or custom optimizers into the training pipeline would require careful design to ensure compatibility with the existing framework, especially with regard to learning rate scheduling and other optimizer-dependent functionalities.

Nonetheless, we welcome contributions that could enhance flexibility while maintaining the integrity and robustness of the training process. If users or contributors are interested in experimenting with or integrating new optimizers, we would encourage them to fork the repository and work on these features within their forks.

In such cases, to maintain stability, they would need to ensure that the external optimizer conforms to the expected behavior of torch.optim.Optimizer. Additional code might be necessary to include checks for compatibility with learning rate schedulers and to gracefully handle any exceptions that arise from the use of untested optimizer configurations.
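For instance, a compatibility check along these lines could gate a user-supplied optimizer before training starts (a hypothetical helper; the actual trainer uses a LambdaLR-based schedule, and the function name here is illustrative):

import torch

def check_optimizer_compat(optimizer):
    """Smoke-test a user-supplied optimizer before handing it to the trainer."""
    if not isinstance(optimizer, torch.optim.Optimizer):
        raise TypeError(f'Expected torch.optim.Optimizer, got {type(optimizer).__name__}')
    if not optimizer.param_groups:
        raise ValueError('Optimizer has no parameter groups')
    for i, group in enumerate(optimizer.param_groups):
        # LR schedulers read and rewrite 'lr' on every param group.
        if 'lr' not in group:
            raise ValueError(f"Param group {i} is missing an 'lr' key; LR scheduling would fail")
    # Constructing a scheduler verifies the optimizer exposes what it needs.
    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda epoch: 1.0)
    return scheduler.get_last_lr()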

If you or anyone else is interested in contributing such a feature via a pull request, we recommend discussing the proposed changes in detail through an issue on GitHub first. This approach allows us to plan and review the implications of the update, as well as its potential impact on other users.

For detailed information about training customization and advanced configuration, please consult our documentation at https://docs.ultralytics.com. Your suggestions and potential contributions are valued, and they play a significant role in advancing the capabilities of YOLOv8.

github-actions[bot] commented 10 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Docs: https://docs.ultralytics.com
HUB: https://hub.ultralytics.com
Community: https://community.ultralytics.com

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐