Hi,
in db_augmentation and average_query_expansion you compute a full similarity matrix. But when I want to search in, for example, the cub200_2011 dataset, there are 20k+ reference points and I do not have enough memory for this. In fact, my machine with 16 GB of RAM and a GTX 1080 Ti runs out of memory even when I search among only 50 images.
Do you have any suggestions for how to scale this? For example, would computing the similarities in chunks, roughly as in the sketch below, be a reasonable workaround?
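
This is only a rough sketch under my own assumptions; the function name `chunked_topk_similarity`, its parameters, and the chunk size are mine, not from your code. The idea is to process the queries in blocks and keep only the top-k neighbours per query, so the full N×M similarity matrix never has to be held in memory at once:

```python
import torch

def chunked_topk_similarity(query_feats, ref_feats, k=10, chunk_size=512, device="cuda"):
    """query_feats: (Nq, D), ref_feats: (Nr, D), L2-normalised float tensors.

    Computes dot-product similarities block by block and keeps only the
    top-k reference indices/scores per query, so peak memory scales with
    chunk_size * Nr instead of Nq * Nr.
    """
    ref_feats = ref_feats.to(device)
    topk_vals, topk_idx = [], []
    for start in range(0, query_feats.shape[0], chunk_size):
        chunk = query_feats[start:start + chunk_size].to(device)
        sims = chunk @ ref_feats.t()        # (chunk_size, Nr) similarity block
        vals, idx = sims.topk(k, dim=1)     # keep only the k best per query
        topk_vals.append(vals.cpu())
        topk_idx.append(idx.cpu())
        del sims                            # free the block before the next one
    return torch.cat(topk_vals), torch.cat(topk_idx)
```

If that is roughly what you would recommend, I could also lower `chunk_size` further (or move the matmul to CPU) on machines with less GPU memory.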
Thanks,
T