openvinotoolkit / anomalib

An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
https://anomalib.readthedocs.io/en/latest/
Apache License 2.0

📋 [TASK] Implement Multi-GPU Training Support #2258

Open samet-akcay opened 2 months ago

samet-akcay commented 2 months ago

Implement Multi-GPU Support in Anomalib

Depends on:

Background

Anomalib currently uses PyTorch Lightning under the hood, which provides built-in support for multi-GPU training. However, Anomalib itself does not yet expose this functionality to users. Implementing multi-GPU support would significantly enhance the library's capabilities, allowing for faster training on larger datasets and more complex models.
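Since the heavy lifting lives in Lightning, exposing multi-GPU support is largely a matter of forwarding the relevant trainer arguments (`accelerator`, `devices`, `strategy`) through Anomalib's `Engine`. The stand-in below sketches that pass-through idea in plain Python; `FakeTrainer` and this `Engine` are illustrative only, not anomalib's actual classes (the real `Engine` wraps `lightning.Trainer`).

```python
class FakeTrainer:
    """Minimal stand-in for lightning.Trainer (illustrative only)."""

    def __init__(self, accelerator="auto", devices=1, strategy="auto", **kwargs):
        self.accelerator = accelerator
        self.devices = devices
        self.strategy = strategy


class Engine:
    """Forwards multi-GPU knobs straight through to the underlying trainer."""

    def __init__(self, **trainer_kwargs):
        self.trainer = FakeTrainer(**trainer_kwargs)


# Users keep the same Engine(...) call; the kwargs reach the trainer unchanged.
engine = Engine(accelerator="gpu", devices=2, strategy="ddp")
print(engine.trainer.devices)   # 2
print(engine.trainer.strategy)  # ddp
```

The design point is that users never touch the trainer directly: the `Engine` constructor stays the single configuration surface.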

Proposed Feature

Enable multi-GPU support in Anomalib, allowing users to easily utilize multiple GPUs for training without changing their existing code structure significantly.

Example Usage

Users should be able to enable multi-GPU training by simply specifying the number of devices in the Engine configuration:

from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import EfficientAd

datamodule = MVTec(train_batch_size=1)
model = EfficientAd()
engine = Engine(max_epochs=1, accelerator="gpu", devices=2)
engine.fit(model=model, datamodule=datamodule)

This configuration should automatically distribute the training across two GPUs.
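Under distributed data parallel training, "distributing across two GPUs" means each device processes a disjoint, interleaved shard of the dataset. The framework-free sketch below mirrors the index assignment that `torch.utils.data.DistributedSampler` performs (before shuffling and padding); `shard_indices` is a hypothetical helper for illustration.

```python
def shard_indices(dataset_size, num_replicas, rank):
    """Return the dataset indices assigned to one device (rank).

    Each rank takes every num_replicas-th index starting at its own rank,
    so the shards are disjoint and together cover the whole dataset.
    """
    return list(range(rank, dataset_size, num_replicas))


# With devices=2, a 10-sample dataset is split like this:
print(shard_indices(10, num_replicas=2, rank=0))  # [0, 2, 4, 6, 8]
print(shard_indices(10, num_replicas=2, rank=1))  # [1, 3, 5, 7, 9]
```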

Implementation Goals

  1. Seamless integration with existing Anomalib APIs
  2. Minimal code changes required from users to enable multi-GPU training
  3. Proper utilization of PyTorch Lightning's multi-GPU capabilities
  4. Consistent performance improvements when using multiple GPUs

Implementation Steps

  1. Review PyTorch Lightning's multi-GPU implementation and best practices
  2. Modify the Engine class to properly handle multi-GPU configurations
  3. Ensure all Anomalib models are compatible with distributed training
  4. Update data loading mechanisms to work efficiently with multiple GPUs
  5. Implement proper synchronization of metrics and logging across devices
  6. Add multi-GPU tests to the test suite
  7. Update documentation with multi-GPU usage instructions and best practices
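Step 5 is the subtle one: each device computes metrics only over its own shard, so the per-device states must be reduced before logging, or every rank reports a partial number. A minimal sketch of that reduction, assuming metrics keep a `(sum, count)` state (this mirrors what Lightning's `sync_dist` / an `all_reduce` accomplishes; the helper name is illustrative):

```python
def merge_metric_states(states):
    """Reduce per-device (sum, count) states into one global mean.

    A plain mean of the per-device means would be wrong whenever shards
    have different sizes; reducing the raw sums and counts is always correct.
    """
    total = sum(s for s, _ in states)
    count = sum(c for _, c in states)
    return total / count


# Two devices each scored 8 validation samples from their own shard:
rank0_state = (4.0, 8)  # (sum of per-sample scores, number of samples)
rank1_state = (8.0, 8)
print(merge_metric_states([rank0_state, rank1_state]))  # 0.75
```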

Potential Challenges

Discussion Points

Next Steps

Additional Considerations

We welcome input from the community on this feature. Please share your thoughts, concerns, or suggestions regarding the implementation of multi-GPU support in Anomalib.

haimat commented 1 month ago

Hey guys, this is presumably one of the most important missing features in Anomalib. Do you have any idea when v1.2 with multi-GPU training will be released?

samet-akcay commented 1 month ago

Hi @haimat, I agree with you, but to enable multi-GPU we had to go through a number of refactors. You can check the PRs merged into the feature/design-simplifications branch.

What is left to enable multi-GPU is the metric refactor and the visualization refactor, which we are currently working on.
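To illustrate why the metric refactor matters for multi-GPU: a metric has to keep *reducible* state (e.g. running sums and counts) rather than raw predictions, so the ranks can merge their states after evaluation. A minimal torchmetrics-style accumulator, with all names illustrative rather than anomalib's actual API:

```python
class MeanScore:
    """Toy metric that keeps reducible state so it works under DDP."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, scores):
        """Accumulate a batch of per-sample scores on one rank."""
        self.total += sum(scores)
        self.count += len(scores)

    def merge(self, other):
        """Cross-rank reduction step (what sync_dist does via all_reduce)."""
        self.total += other.total
        self.count += other.count

    def compute(self):
        return self.total / self.count


m0, m1 = MeanScore(), MeanScore()
m0.update([0.2, 0.4])  # rank 0's shard
m1.update([0.6, 0.8])  # rank 1's shard
m0.merge(m1)           # reduce before logging
print(m0.compute())    # 0.5
```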

haimat commented 1 month ago

That sounds great, thanks for the update. Do you have an estimate of when this whole change might be ready?

haimat commented 1 month ago

@samet-akcay Hello, do you have any ideas when this might be released?

samet-akcay commented 3 weeks ago

@haimat, we found that this requires quite a few changes within AnomalyModule. The required changes unfortunately break backwards compatibility, which is why we decided to release this as part of v2.0. We are currently working on it in the feature/design-simplifications branch, which will be released as v2.0.0.

haimat commented 3 weeks ago

@samet-akcay Thanks for the update. Do you have an estimate of when you plan to release version 2.0?

samet-akcay commented 3 weeks ago

We aim to release it by the end of this quarter.