Due to this usage of memory, train_cifar100_with_xbm.py crashes even on a 3090 Ti with "GPU out of memory" (after it exceeds start_iteration and starts computing the XBM loss).
This is a known issue, already marked as a TODO. Currently it only allows using XBM with smaller batches. We postponed handling it; you can assign it to me if you aren't already working on it.
For example, in semi-hard mining we build a cubic matrix, so memory grows as O(n³) in the number of embeddings and it consumes a lot of memory (see the sketch below the link).
https://github.com/qdrant/quaterion/blob/f2be2a4a7ba00f4484090222838898ce73e7d682/quaterion/loss/triplet_loss.py#L157
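For reference, here is a minimal sketch (not the library's exact code; `triplet_loss_cube_sketch`, `triplet_cube_gib`, and the `margin` value are illustrative assumptions) of the broadcast pattern behind that cubic tensor and a rough estimate of its size. It shows why a regular batch is fine but pushing the much larger XBM buffer through the same mining step exhausts GPU memory:

```python
import torch


def triplet_cube_gib(n: int, dtype_bytes: int = 4) -> float:
    """Approximate size in GiB of an (n, n, n) float32 tensor."""
    return n ** 3 * dtype_bytes / 1024 ** 3


def triplet_loss_cube_sketch(dists: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    """Illustrative broadcast: a (n, n) pairwise distance matrix is expanded into an
    anchor x positive x negative cube of candidate triplet losses, shape (n, n, n)."""
    dist_ap = dists.unsqueeze(2)  # (n, n, 1): anchor-positive distances
    dist_an = dists.unsqueeze(1)  # (n, 1, n): anchor-negative distances
    return dist_ap - dist_an + margin  # the full (n, n, n) tensor is materialized here


if __name__ == "__main__":
    # A regular batch of 128 embeddings costs only a few MiB for the cube,
    # but an XBM buffer of a few thousand embeddings no longer fits in 24 GiB.
    for n in (128, 1024, 4096):
        print(f"n={n}: ~{triplet_cube_gib(n):.2f} GiB for the (n, n, n) tensor")
```

With n = 128 the cube is about 8 MiB, but at n = 4096 (a typical XBM buffer size) it would need roughly 256 GiB, which is why the crash only appears after start_iteration, once the buffer is actually used.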