Closed · piyp791 closed this issue 1 year ago
Hi @piyp791, please consider that Essentia algorithms are not thread-safe, so you should use process-based parallelization.
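A minimal sketch of what process-based parallelization could look like, using a `ProcessPoolExecutor` so each worker process holds its own model instance. The Essentia calls (`MonoLoader`, `TensorflowPredictEffnetDiscogs`) are assumptions based on the question and are left as comments; a placeholder stands in for the real model so the pattern is self-contained:

```python
# Sketch: process-based (not thread-based) parallel inference.
# Each worker process loads the model ONCE in an initializer, then
# processes many files, avoiding thread-safety issues entirely.
import os
from concurrent.futures import ProcessPoolExecutor

_model = None  # one model instance per worker process


def _init_worker():
    """Runs once in each worker: load the model here, not per file."""
    global _model
    # Hypothetical Essentia setup (requires essentia-tensorflow):
    # from essentia.standard import TensorflowPredictEffnetDiscogs
    # _model = TensorflowPredictEffnetDiscogs(graphFilename="effnet-discogs.pb")
    _model = lambda audio: len(audio)  # placeholder stand-in for the model


def _process_file(path):
    """Decode one file and run inference inside this worker's process."""
    # Hypothetical decode step:
    # from essentia.standard import MonoLoader
    # audio = MonoLoader(filename=path, sampleRate=16000)()
    audio = [0.0] * 16000  # placeholder audio buffer
    return path, _model(audio)


def run_batch(paths, workers=os.cpu_count()):
    """Map files over a pool of worker processes."""
    with ProcessPoolExecutor(max_workers=workers,
                             initializer=_init_worker) as ex:
        return dict(ex.map(_process_file, paths))
```

Loading the model in the initializer matters at this scale: with 100,000 files you want the expensive model construction to happen once per process, not once per file.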
The neural-network inference is the most computationally expensive part of your script. It could be sped up with GPU parallelization, which is implemented internally in Essentia and is enabled automatically when:
Note that in this case every process blocks a GPU, so don't use more processes than available GPUs.
Hello,
I have a set of 100,000 mp3 files that I want to run effnet-discogs model inference on. Can you suggest anything to help speed up the inference?
I am using ThreadPool right now for parallel inference. Is this approach okay, or can it be corrected/optimized further?
Any help would be appreciated. Thanks!