Closed muellerzr closed 1 year ago
Hi @muellerzr, thanks for your kind suggestion. We are considering it. If we want to do that, we could simply add it in MMEngine, where it could be used to accelerate all OpenMMLab projects.
We have supported DeepSpeed and FSDP in MMEngine. https://github.com/open-mmlab/mmengine/pull/1183
It would be great to have Accelerate and Fabric in MMEngine too; they are very simple and effective!
What is the problem this feature will solve?
Integration with 🤗 Accelerate opens up a wide variety of doors right from the get-go:
What is the feature you are proposing to solve the problem?
Modifying code to utilize 🤗 Accelerate is extremely straightforward and leaves the code looking as close to plain PyTorch as possible. See below, where the only changes needed take a single-GPU training loop and make it runnable across multiple GPUs, TPUs, and Apple Silicon (M1):
To read more, check out these important documentation tutorials that describe various aspects of the library:
Let me know if this is of interest to the team and we can assist as much as we can towards getting an integration with mmdetection going! 🤗
What alternatives have you considered?
Lots and lots of custom code to get it working across all devices