Fair question. The answer is that this repo and the core DSP library run entirely on the CPU for the DSP code. (iPlug2 might use a GPU for the graphics, depending on the backend configuration, but I'm not sure.)
One might be able to modify the code to use an NPU, and it might help. However, there are important differences between the design considerations for real-time C++ and the plain Python deep learning done during training; the key one is the low-latency (and therefore small-batch/small-buffer) context. A vanilla port of the PyTorch code could be hundreds (or even thousands) of times slower than how I've written it.
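To make the latency point concrete, here is a minimal, self-contained C++ sketch (not the plugin's actual code; `Model`, `kBufferSize`, and the numbers are illustrative assumptions). It shows why real-time audio is a small-buffer problem: the host calls the processing function thousands of times per second with only a handful of samples each time, so any fixed per-call overhead (such as dispatching a tiny tensor to an accelerator and waiting for the result) is paid on every call, whereas training amortizes that overhead over large batches.

```cpp
// Minimal sketch of the real-time budget argument. Everything here is
// illustrative; it is not how the NAM plugin is actually structured.
#include <chrono>
#include <cstdio>
#include <vector>

// Stand-in for a neural model: one forward pass over a tiny buffer.
struct Model {
    void process(const float* in, float* out, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = 0.5f * in[i];  // placeholder for the real network math
    }
};

int main() {
    constexpr double kSampleRate = 48000.0;
    constexpr int kBufferSize = 64;  // a typical low-latency host buffer

    // Hard real-time budget per callback: the buffer length in wall-clock time.
    const double budgetUs = 1e6 * kBufferSize / kSampleRate;  // ~1333 us
    std::printf("Per-callback budget: %.0f us\n", budgetUs);

    Model model;
    std::vector<float> in(kBufferSize, 1.0f), out(kBufferSize);

    // The host repeats this call hundreds of times per second; per-call
    // dispatch overhead eats directly into the budget above, unlike in
    // training, where thousands of samples are batched per dispatch.
    auto t0 = std::chrono::steady_clock::now();
    constexpr int kCallbacks = 1000;
    for (int c = 0; c < kCallbacks; ++c)
        model.process(in.data(), out.data(), kBufferSize);
    auto t1 = std::chrono::steady_clock::now();

    const double usPerCall =
        std::chrono::duration<double, std::micro>(t1 - t0).count() / kCallbacks;
    std::printf("Measured: %.1f us per callback (must stay well under budget)\n",
                usPerCall);
    return 0;
}
```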
Thanks, that was very helpful.
I'm sorry to ask this question here, but I haven't found another forum except for Facebook, which I don't use. Please feel free to point me to any other forum or mailing list where the question would be more appropriate.
I'm planning a project that uses a Raspberry Pi 5 as a travel rig. There is a Hailo AI NPU module available for the Raspberry Pi 5. As far as I understand, it is used by AI libraries such as PyTorch. I understand that the NAM trainer would benefit from it, but I'm going to train my models on my desktop computer. On the Raspberry Pi, I'll only run the plugin.
Would the plugin have any use for an NPU, leaving more of the CPU power to other plugins, or would I just be wasting my money? Looking at the code of the plugin, I didn't see any use of libraries that could use the NPU, but I might have missed or misunderstood something.