Closed HarperHao closed 1 year ago
Hello, this is because the data we provided for the sample run has the shape L (length) by K (channels), I believe, while PTB-XL is already in the shape B, K, L. So you don't need the extra split, as you already have batches.
Thank you very much for your reply! I debugged train.py and found that the shape of the loaded data is not B, K, L: it is 17441×12×1000. So, according to the paper, it still needs to be split.
Let me elaborate on the process of running the code.
Looking forward to your reply again.
The splitting line is not required for PTB-XL. 17441 is the number of samples, 12 the channels, and 1000 the length. Depending on your GPU you might be able to pass small or large batches into the model, for example 4×12×1000.
For PTB-XL I would recommend using PyTorch DataLoaders to split the data into the desired batches. Hope this helps.
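A minimal sketch of the DataLoader approach suggested above. The array shape and batch size match the numbers discussed in this thread; the random array is a hypothetical stand-in for `np.load("train_ptbxl_1000.npy")` (smaller here for speed), and `drop_last=False` means the sample count does not need to be divisible by the batch size:

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# Stand-in for np.load("train_ptbxl_1000.npy"):
# (samples, channels, length) — PTB-XL would be (17441, 12, 1000).
data = np.random.randn(100, 12, 1000).astype(np.float32)

dataset = TensorDataset(torch.from_numpy(data))
# drop_last=False keeps the final partial batch, so no manual
# splitting or divisibility check is needed.
loader = DataLoader(dataset, batch_size=4, shuffle=True, drop_last=False)

for (batch,) in loader:
    # Each full batch has shape (4, 12, 1000), ready for the model.
    print(batch.shape)
    break
```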
Thanks for your suggestion! I modified the code as you suggested. The code runs successfully! It is being trained. I must thank you again!
Thanks for your great work! I want to train on the PTB-XL dataset, but when I run train.py I encounter a bug.
The reason for the bug is that the size of train_ptbxl_1000.npy is 17441 samples, which is not divisible by 160. How can I modify the code? I'm looking forward to your response, thanks a lot!