Closed: Hcnaeg closed this issue 3 months ago
The first thing I’d try is to down-sample the images. What size are they currently? You can probably reduce each dimension by a factor of 2-4 without affecting the scores much, which will give a significant speedup. If it’s still too slow, you can also reduce ncs_to_check to 5, info_subsample to 0.2, and maybe num_levels to 3. Are you using the parameter values from the example in the readme?
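Something like this is what I have in mind (just a sketch, not the exact API: `compute_complexity` is a placeholder for the entry point shown in the readme example, while the keyword names are the parameters mentioned above):

```python
# Sketch only: `compute_complexity` is a placeholder for the entry point used in
# the readme example; the keyword arguments are the parameters mentioned above.
import numpy as np
from PIL import Image

def load_downsampled(path, factor=2):
    """Load an image and shrink each dimension by `factor` before scoring."""
    img = Image.open(path).convert("RGB")
    img = img.resize((img.width // factor, img.height // factor))
    return np.asarray(img)

# score = compute_complexity(
#     load_downsampled("some_image.jpg", factor=2),
#     ncs_to_check=5,      # reduced from the readme value
#     info_subsample=0.2,
#     num_levels=3,
# )
```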
> What size are they currently?
They are images from a public dataset (e.g. COCO; the original resolution is about 640×480). Do you think it would affect the scores too much if I resized them to about 64×64?
> you can also reduce ncs_to_check to 5, info_subsample to 0.2, and maybe num_levels to 3. Are you using the parameter values from the example in the readme?
Yes, I am using the parameters given in the readme. I will try modifying them, thanks for your advice.
Reducing the resolution could lower the scores a bit, but as long as you do it for all the images you're comparing, the comparison itself remains valid. I'd recommend first trying to resize to something like 224×224. You can check on a small sample what the effect is on both speed and the final score, and then reduce further if needed.
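For that check, something along these lines should do (a sketch only; `score_fn` / `compute_complexity` stand in for whatever per-image function you're calling from the readme):

```python
# Rough sanity check for the resize trade-off: time one image at full resolution
# and at 224x224 and compare the scores. `score_fn` is a placeholder for the
# per-image metric call from the readme example.
import time
import numpy as np
from PIL import Image

def compare_resolutions(path, score_fn, size=(224, 224)):
    img = Image.open(path).convert("RGB")
    results = {}
    for label, im in [("full", img), ("resized", img.resize(size))]:
        start = time.perf_counter()
        score = score_fn(np.asarray(im))
        results[label] = (score, time.perf_counter() - start)
    return results

# for path in my_sample_paths[:20]:   # a small sample of your dataset
#     print(path, compare_resolutions(path, compute_complexity))
```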
Also, depending on your application, it may not be necessary to compute the metric on all 100k images. You could most likely get a very close approximation by selecting a representative sample of, say, 1k images (per class, if you have multiple classes) and computing the mean on that sample.
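A rough sketch of that sampling approach (again with a placeholder `score_fn` for the actual metric call):

```python
# Approximate the dataset-level mean from a random sample instead of scoring all
# 100k images. `score_fn` is a placeholder for the per-image metric call.
import random
import numpy as np
from PIL import Image

def sampled_mean(paths, score_fn, n=1000, seed=0):
    sample = random.Random(seed).sample(paths, min(n, len(paths)))
    scores = [score_fn(np.asarray(Image.open(p).convert("RGB"))) for p in sample]
    return float(np.mean(scores))

# With multiple classes, sample per class and report per-class means:
# per_class = {label: sampled_mean(paths, compute_complexity)
#              for label, paths in paths_by_class.items()}
```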
Thanks for your excellent work and your open-source code. I'm looking to use this method to evaluate the complexity of a dataset of over 100k images, but I've found that it takes about 9 s per image (at a resolution of 224×224), which is too slow for my dataset. I suspect that the most time-consuming part is the GMM fitting, but that part is already optimized by sklearn, so running with multiprocessing didn't bring a significant speedup. Do you have any thoughts on speeding up the code?
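For reference, this is roughly the kind of check I had in mind to confirm that suspicion (just a sketch; `compute_complexity` and "example.jpg" are placeholders for the per-image call from the readme and an image from my dataset):

```python
# Quick profile of a single call to see where the time goes.
# `compute_complexity` and "example.jpg" are placeholders for the real per-image
# entry point (see the readme example) and an image from the dataset.
import cProfile
import pstats
import numpy as np
from PIL import Image

img = np.asarray(Image.open("example.jpg").convert("RGB").resize((224, 224)))

profiler = cProfile.Profile()
profiler.enable()
compute_complexity(img)  # placeholder for the actual metric call
profiler.disable()

# The 15 entries with the highest cumulative time; sklearn's GaussianMixture fit
# should show up near the top if the GMM is indeed the bottleneck.
pstats.Stats(profiler).sort_stats("cumtime").print_stats(15)
```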