This PR:

- Updates the default inference batch size to 64. Based on my local experiments, this seems to give the best compromise between speed and RAM/VRAM usage, and it appears to speed classification up by roughly 40% (depending on the number of points per plane); the trade-off is illustrated in the sketch after this list.
- Exposes the batch size parameter in the napari GUI so users can change it if needed (a widget sketch follows further below).
- Prints the timing of classification (as is already done for detection) to help assess performance.

I've left the training batch size alone because training requires more memory, and the batch size actually affects the training results (unlike inference, where it only changes speed and memory use).
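For context on the default change, the speed/memory trade-off comes from how many candidate cubes are pushed through the classification network per call. The snippet below is a minimal, hypothetical sketch using a toy Keras model (not cellfinder's actual network or API) of how inference time can be compared across batch sizes:

```python
# Toy illustration only: a stand-in model, not cellfinder's classifier.
import time

import numpy as np
from tensorflow import keras

# Dummy network consuming small 3D cubes with two channels, roughly the
# shape of the cubes cellfinder classifies (this shape is an assumption).
model = keras.Sequential(
    [
        keras.Input(shape=(50, 50, 20, 2)),
        keras.layers.Flatten(),
        keras.layers.Dense(2, activation="softmax"),
    ]
)

# Random stand-in data for 256 candidate points.
data = np.random.rand(256, 50, 50, 20, 2).astype("float32")

# Larger batches mean fewer model calls (usually faster) but more RAM/VRAM per call.
for batch_size in (16, 64, 256):
    start = time.perf_counter()
    model.predict(data, batch_size=batch_size, verbose=0)
    print(f"batch_size={batch_size}: {time.perf_counter() - start:.2f} s")
```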
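On the GUI side, this is purely about surfacing the existing parameter. As an illustration only (not the actual cellfinder napari plugin code), an integer option such as batch_size can be exposed as a spin box in a napari dock widget with magicgui:

```python
# Hypothetical sketch of exposing a batch size option in napari via magicgui;
# the real cellfinder plugin wires this into its classification step instead.
import napari
from magicgui import magicgui


@magicgui(call_button="Classify", batch_size={"min": 1, "max": 4096, "step": 1})
def classify_widget(batch_size: int = 64) -> None:
    # The real widget would forward batch_size to the classification call.
    print(f"Running classification with batch_size={batch_size}")


viewer = napari.Viewer()
viewer.window.add_dock_widget(classify_widget, name="Classification")
napari.run()
```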
Closes https://github.com/brainglobe/cellfinder/issues/353