Closed: pangchunhei closed this issue 3 months ago
Just edit the function in the `Dataset` class:

```python
def filter_smallset(self):
    ...
    # "training -> train"
    if self.split == "train":
        ...
    else:
        # add self.max_prim >= len(target) for val and test data
        if self.max_prim >= len(target) >= self.filter_num:
            ...
```
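To make the described rule concrete, here is a minimal, self-contained sketch of the filtering logic. The attribute names (`split`, `max_prim`, `filter_num`) come from the snippet above, but the train-branch condition and the `targets` container are assumptions, since those parts are elided in the original code:

```python
def filter_smallset(targets, split, max_prim, filter_num):
    """Hypothetical stand-alone version of the filtering rule.

    `targets` stands in for the dataset's per-sample target lists.
    """
    kept = []
    for target in targets:
        if split == "train":
            # Assumed train rule: keep samples with enough primitives.
            if len(target) >= filter_num:
                kept.append(target)
        else:
            # Val/test additionally require max_prim >= len(target),
            # as in the edited snippet above.
            if max_prim >= len(target) >= filter_num:
                kept.append(target)
    return kept
```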
```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 118.00 MiB (GPU 0; 8.00 GiB total capacity; 7.02 GiB already allocated; 0 bytes free; 7.19 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I am using an RTX 3070 (8 GB) with `max_prim=1` and `img_size=500`. I reduced both, but I still cannot run training. I also tried adding `gc.collect()` and `torch.cuda.empty_cache()`, but training still fails. Are there any solutions, or could you provide the trained model?
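One thing the error message itself suggests is setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. A minimal sketch (the value `128` is an arbitrary starting point, not a recommendation from this repo):

```python
import os

# Must be set before the first CUDA allocation, ideally before
# importing torch, or the allocator ignores it.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # noqa: E402  (import deliberately after setting the env var)

# gc.collect() / torch.cuda.empty_cache() only release *cached* blocks;
# they cannot free tensors that are still referenced, so if activations
# from the model fill 7 GiB, the fragmentation knob alone may not help
# and a smaller batch size or image size is still needed.
```

Note that `empty_cache()` does not reduce the memory the live tensors occupy; if the model's activations genuinely need more than 8 GiB, only shrinking the inputs (batch size, `img_size`) or the model will help.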