Navidfoumani / ConvTran

This is a PyTorch implementation of ConvTran
MIT License

Running memory issues #11

Closed YanxuanWei closed 1 month ago

YanxuanWei commented 1 month ago

Thank you very much for providing the code! Everything works fine when I run some of the other UEA datasets, but I run into a problem with the EigenWorms dataset. It throws a memory exception:

```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 154.22 GiB. GPU 0 has a total capacty of 31.60 GiB of which 27.20 GiB is free. Process 106946 has 4.40 GiB memory in use. Of the allocated memory 3.88 GiB is allocated by PyTorch, and 222.61 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.
```

All settings follow the default settings in your code. Does this dataset really require such a large amount of memory, or is there an error in my settings? The series length is 17984; could that length be the cause of the excessive memory usage? I look forward to your response.
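As a back-of-the-envelope check (assuming a batch size of 16 and 8 attention heads, which are hypothetical values; check your own config), a single full self-attention score matrix of shape `(batch, heads, L, L)` in fp32 already accounts for the allocation in the error message:

```python
# Back-of-the-envelope estimate of one full self-attention score matrix.
# batch and heads are assumed values, not confirmed from the repo's defaults.
L = 17984                      # EigenWorms series length
batch, heads = 16, 8
bytes_per_float = 4            # fp32
gib = batch * heads * L * L * bytes_per_float / 2**30
print(f"{gib:.2f} GiB")        # → 154.22 GiB, matching the error message
```

So the failure is the quadratic memory cost of attention over the raw series length, not a bug in the settings.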

Navidfoumani commented 1 month ago

Thank you for your comment. EigenWorms is indeed challenging due to its dimensions. Here are a few options to consider, which should yield similar results:

- Add stride and valid padding to reduce the input size in the embedding layer. Alternatively, you can use max pooling, as it doesn't significantly affect the final results.
- Reduce the batch size.
- Run the model on the CPU.
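The first suggestion can be sketched as follows. This is a minimal illustration, not ConvTran's actual embedding code: the layer names, channel counts, and kernel sizes are hypothetical. The point is that a strided `Conv1d` with valid (zero) padding, optionally followed by max pooling, shortens the sequence before it reaches attention, and attention memory shrinks quadratically with that length:

```python
import torch
import torch.nn as nn

# Hypothetical embedding sketch (sizes are illustrative, not the repo's).
seq_len, in_channels, embed_dim = 17984, 6, 64

# stride=4 with padding=0 ("valid") cuts the length roughly 4x, so the
# attention score matrix shrinks by ~16x; the MaxPool1d halves it again.
embed = nn.Sequential(
    nn.Conv1d(in_channels, embed_dim, kernel_size=8, stride=4, padding=0),
    nn.MaxPool1d(kernel_size=2, stride=2),
)

x = torch.randn(2, in_channels, seq_len)   # (batch, channels, length)
out = embed(x)
print(out.shape)                           # length drops from 17984 to 2247
```

A 4x shorter sequence alone would turn the ~154 GiB attention allocation into roughly 10 GiB, which fits on a 32 GiB GPU.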

YanxuanWei commented 1 month ago

Thank you very much for your reply, my problem has been solved! ConvTran is an excellent method, thank you for your contribution.