If one image cluster of the novelty detection is very large (e.g. 4k images), the novelty detection could run out of disk space, because it first generates a (~64 MB) novelty map for each image and only then post-processes the maps. This can't easily be avoided: determining the segmentation threshold requires all novelty maps, and generating the actual segmentation requires the original maps again. But maybe the novelty maps could be compressed to allow processing of larger image collections?
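A minimal sketch of what such compression could look like, assuming the novelty maps are float32 NumPy arrays (the helper names `save_map_compressed`/`load_map` are hypothetical, not part of the existing code). Since novelty maps are mostly near-zero background with a few novel regions, generic zlib compression via `np.savez_compressed` should already shrink them substantially:

```python
import numpy as np


def save_map_compressed(path, novelty_map):
    # Hypothetical helper: store a float32 novelty map with zlib
    # compression instead of a raw array dump. Mostly-uniform
    # background compresses very well.
    np.savez_compressed(path, novelty=novelty_map.astype(np.float32))


def load_map(path):
    # Decompress the map again for thresholding/segmentation.
    with np.load(path) as data:
        return data["novelty"]
```

If lossless compression is not enough, quantizing the maps to float16 (or uint8 after scaling) before saving would halve or quarter the size again, at the cost of some precision in the threshold computation.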