Hi,
I am using the Python 3 version of this code on a CentOS platform with the MUSDB18 dataset (100 training files, 50 test files). When I run the code on an Intel Xeon processor, only 1 of the 32 cores is predominantly utilized, so each epoch takes ~22 min. When I switched to an NVIDIA GTX 1080, I still get exactly the same epoch time. Why is the performance not improving?
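For what it's worth, here is the minimal check I can run to confirm whether the framework even sees the GPU (this assumes the code uses PyTorch, which I'm not certain of; the TensorFlow equivalent would be `tf.config.list_physical_devices('GPU')`):

```python
# Sketch of a GPU-visibility check, assuming a PyTorch-based training script.
# If this reports CPU, the identical epoch times would be expected, since the
# model and tensors were never moved to the GPU in the first place.
try:
    import torch
    if torch.cuda.is_available():
        device = torch.device("cuda")
        print("CUDA visible, using:", torch.cuda.get_device_name(0))
    else:
        device = torch.device("cpu")
        print("CUDA not available - falling back to CPU")
except ImportError:
    device = None
    print("PyTorch not installed in this environment")
```

I can also watch `nvidia-smi` during an epoch to see whether GPU utilization ever rises above 0%.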