Open SDonkelaarGDD opened 11 months ago
Hello there! We'd love this change! Want to file a PR?
@capjamesg thanks for the quick reply! I'll draft a PR today đź‘Ť
Could you also include a benchmark to see if this improves speed? I have had mixed results with MPS (it even degrades performance in some cases depending on how few of the model’s ops it supports).
@yeldarby yes, I was just doing that.
I'm running the tests on an Apple M1 Max (14-inch, 2021) with 32 GB of RAM. It seems that CPU performance is faster than MPS. When I train CV models with MPS, I usually see a significant speedup, but that isn't the case here.
Yeh GroundingDINO is probably a particularly perverse case given they wrote some of their own CUDA kernels that can’t possibly be supported on MPS out of the box.
I agree. In that case, I think it's better to keep using CPU when running inference on ARM MacBooks.
If you are still interested in a PR, adding the results of your investigation to the README would be valuable for people working with this package!
@capjamesg sorry for the late reply, work caught up.
I can add some metrics to the README this weekend. I was thinking of using GroundingDINO to extract bounding boxes of digits from the MNIST dataset; afterwards, the runtimes of MPS and CPU can easily be compared.
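A timing harness for that kind of comparison could look roughly like this (a hedged sketch: it uses a stand-in matmul workload rather than the actual GroundingDINO-on-MNIST pipeline, and the `bench` function is my own naming, not part of the package):

```python
import time
import torch

def bench(device: str, n: int = 512, runs: int = 5) -> float:
    """Average seconds for an n x n matmul on `device`.

    A stand-in workload for illustration only, not GroundingDINO inference.
    """
    x = torch.randn(n, n, device=device)
    _ = x @ x  # warm-up so lazy backend initialization isn't timed
    start = time.perf_counter()
    for _ in range(runs):
        _ = x @ x
    if device == "mps":
        # MPS dispatches asynchronously; wait for the queue before stopping the clock
        torch.mps.synchronize()
    return (time.perf_counter() - start) / runs

devices = ["cpu"]
if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    devices.append("mps")
for d in devices:
    print(f"{d}: {bench(d):.4f}s per matmul")
```

Running it on both backends side by side would make the README numbers easy to reproduce.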
That would be excellent!
Hey,

In the file `autodistill_grounding_dino/helpers.py`, I see that it only supports `cuda` and `cpu`. Is it possible to add another check for `mps`? It could look something like the following:

In this way, Apple ARM chips can be leveraged for inference. Let me know if this is desirable. If so, I am happy to make a PR.
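For reference, the proposed check could be sketched like this (a hypothetical snippet; `select_device` is my own name and the real `helpers.py` may structure the check differently):

```python
import torch

def select_device() -> torch.device:
    """Pick the best available backend: CUDA, then MPS, then CPU.

    Hypothetical helper for illustration; not the actual helpers.py code.
    """
    if torch.cuda.is_available():
        return torch.device("cuda")
    # The MPS backend ships with PyTorch >= 1.12 on Apple Silicon;
    # getattr guards against older builds without torch.backends.mps.
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```

The getattr guard keeps the check from raising on PyTorch builds that predate the MPS backend.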