Closed sfslowfoodie closed 5 years ago
I should add that my MBP is running Mojave 10.14.2
Although Core ML already provides MLModelConfiguration to choose the compute units (CPU only, CPU and GPU, or all), there is no API to select which GPU gets used.
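For reference, a minimal sketch of what MLModelConfiguration does let you choose (the model path is hypothetical):

```swift
import CoreML

// You can pick the class of compute units, but not a specific GPU.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU  // alternatives: .cpuOnly, .all

// Assumed path to a compiled model; substitute your own.
let url = URL(fileURLWithPath: "MyModel.mlmodelc")
let model = try MLModel(contentsOf: url, configuration: config)
```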
If you really want to use the AMD GPU, consider reimplementing with Metal Performance Shaders: with MPS you can select which GPU to use via MTLCopyAllDevices(). For reference, see another of my projects, CoreML-MPS.
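A sketch of that device-selection idea, assuming a dual-GPU Mac where the discrete GPU reports isLowPower == false (the ReLU kernel is just an example of handing the chosen device to MPS):

```swift
import Metal
import MetalPerformanceShaders

// Enumerate all GPUs visible to Metal (macOS only).
let devices = MTLCopyAllDevices()
for d in devices {
    print(d.name, d.isLowPower ? "(integrated)" : "(discrete)")
}

// Prefer a discrete GPU (e.g. the Radeon Pro 560X), falling back to the default device.
let gpu = devices.first { !$0.isLowPower } ?? MTLCreateSystemDefaultDevice()!

// Any MPS kernel can then be created on the chosen device, e.g.:
let relu = MPSCNNNeuronReLU(device: gpu, a: 0)
```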
Thanks for the reply. There appears to be untapped potential here: the 560X could prove to be 2 to 3 times faster than the anemic UHD 630 iGPU. I understand that, right now, only Apple can decide to implement a better GPU-selection strategy for Core ML tasks.
As it turns out, the 10.14.3 Mojave update fixes the issue: the correct 560X GPU is now picked for Core ML tasks, even with Automatic graphics switching turned ON. The gain in speed is impressive! Juicy stats for a 2× upscale from 4k×4k to 8k×8k:
Turned out to be an OS-related issue and has been fixed in 10.14.3.
Despite Automatic graphics switching being disabled, only the UHD 630 iGPU on my MBP is being used, while 560X usage stays at 0%. The app was compiled with Xcode 10.1. Is any code or Xcode change necessary to enable use of the 560X?