I am trying to run the model on a custom dataset, which is significantly larger than the datasets used in this repo.
On line 520 of trainer.py, where k_means is calculated, a sufficiently large dataset causes problems: the k-means computation runs over the whole dataset at once, which leads to CUDA out-of-memory errors.
Any leads on how this can be improved?
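One direction I've been considering is running the assignment step in chunks, so that only one batch of points is resident on the GPU at a time while the centroids stay in device memory. Below is a minimal sketch of that idea; the tensor names, shapes, and chunk size are my assumptions, since I'm not sure of the exact variables used at line 520:

```python
import torch

def kmeans_chunked(x, k, n_iters=20, chunk_size=65536, device="cuda"):
    """Lloyd's k-means with a chunked assignment step to bound GPU memory.

    x: (N, D) tensor of points, kept on CPU; only one chunk is moved to
    the GPU at a time. These names/shapes are assumptions, not the
    repo's actual API.
    """
    n = x.shape[0]
    # Initialize centroids from k random points.
    centroids = x[torch.randperm(n)[:k]].to(device)

    for _ in range(n_iters):
        sums = torch.zeros_like(centroids)
        counts = torch.zeros(k, device=device)
        for start in range(0, n, chunk_size):
            chunk = x[start:start + chunk_size].to(device)
            # (chunk_size, k) pairwise distances; only this chunk is on the GPU.
            dists = torch.cdist(chunk, centroids)
            assign = dists.argmin(dim=1)
            # Accumulate per-cluster sums and counts for the update step.
            sums.index_add_(0, assign, chunk)
            counts.index_add_(0, assign, torch.ones_like(assign, dtype=counts.dtype))
        # Update centroids, skipping empty clusters to avoid division by zero.
        mask = counts > 0
        centroids[mask] = sums[mask] / counts[mask].unsqueeze(1)
    return centroids
```

Would something along these lines be compatible with how the clustering result is used downstream, or is there a reason the full-dataset computation is required?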