migperfer / TriAD-ISMIR2023

Code accompanying the ISMIR23 paper "TriAD: Capturing harmonics with 3D convolutions"
MIT License

RuntimeError: stack expects each tensor to be equal size, but got [80000] at entry 0 and [79872] at entry 2 #1

Closed Zttt0523 closed 10 months ago

Zttt0523 commented 11 months ago

Something is wrong with the files at ./onsets_and_frames/utils.py and ./onsets_and_frames/train.py.

migperfer commented 11 months ago

Thanks for opening this issue. Could you please paste the full stack trace? Otherwise, it is quite hard to debug.

Zttt0523 commented 11 months ago

Thanks for replying. Here is the full trace:

    Traceback (most recent calls WITHOUT Sacred internals):
      File "train.py", line 123, in train
        for i, batch in zip(loop, cycle(loader)):
      File "/home/zt/desktop/triad/onsets_and_frames/utils.py", line 11, in cycle
        for item in iterable:
      File "/home/zt/anaconda3/envs/triad/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
        data = self._next_data()
      File "/home/zt/anaconda3/envs/triad/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "/home/zt/anaconda3/envs/triad/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
        return self.collate_fn(data)
      File "/home/zt/anaconda3/envs/triad/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
        return {key: default_collate([d[key] for d in batch]) for key in elem}
      File "/home/zt/anaconda3/envs/triad/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
        return {key: default_collate([d[key] for d in batch]) for key in elem}
      File "/home/zt/anaconda3/envs/triad/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
        return torch.stack(batch, 0, out=out)
    RuntimeError: stack expects each tensor to be equal size, but got [80000] at entry 0 and [79872] at entry 2

I tried some interpolation, but it did not seem to work. My MAESTRO dataset is complete, but torch.stack fails inside the DataLoader's default collation because of the mismatched dimensions.
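For context, the failure happens inside PyTorch's default collate function, which calls torch.stack on the per-key tensors of a batch and therefore needs every audio chunk in the batch to have the same length. A minimal sketch that reproduces the same error; the dict-of-tensors item layout and the "audio" key are illustrative assumptions, not the repo's actual dataset:

    import torch
    from torch.utils.data import Dataset, DataLoader

    # Toy dataset whose items are dicts of 1-D tensors, with one item a hop shorter.
    class ToyAudioDataset(Dataset):
        def __init__(self, lengths):
            self.lengths = lengths

        def __len__(self):
            return len(self.lengths)

        def __getitem__(self, idx):
            return {"audio": torch.zeros(self.lengths[idx])}

    # default_collate stacks the per-key tensors with torch.stack, which requires
    # every tensor in the batch to have the same shape.
    loader = DataLoader(ToyAudioDataset([80000, 80000, 79872]), batch_size=3)
    try:
        next(iter(loader))
    except RuntimeError as err:
        print(err)  # stack expects each tensor to be equal size, but got [80000] ... and [79872] ...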

Zttt0523 commented 11 months ago

Thanks again. My environment is torch 1.10 with CUDA 11.1 on Ubuntu 20.04. Strangely, I do get into the training stage, but the process stops at step 242/1000000.

Zttt0523 commented 11 months ago

The problem seems solved after I set sequence_length=79872 instead of sample_rate*5 in train.py; the training process now seems to start normally, but I still don't know whether this will hurt the training loss.
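For what it's worth, here is a back-of-the-envelope check of why 79872 lines up while sample_rate*5 does not, assuming the original Onsets and Frames constants (SAMPLE_RATE = 16000, HOP_LENGTH = 512); the repo's own constants are authoritative, so treat these values as assumptions:

    # Assumed constants (check the repo's constants for the real values):
    SAMPLE_RATE = 16000
    HOP_LENGTH = 512

    naive = SAMPLE_RATE * 5            # 80000 samples for 5 seconds
    n_steps = naive // HOP_LENGTH      # 156 complete hops fit into 80000 samples
    aligned = n_steps * HOP_LENGTH     # 156 * 512 = 79872 samples

    print(naive % HOP_LENGTH)          # 128 -> 80000 is not a multiple of the hop
    print(aligned % HOP_LENGTH)        # 0   -> 79872 is
    print(aligned)                     # 79872, the value that made the batches line up

Under those assumed constants, 80000 leaves a 128-sample remainder after the last full hop, so different code paths can end up producing chunks of 80000 and 79872 samples in the same batch, which would explain the mismatch in the traceback.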

migperfer commented 11 months ago

Having chunks of 5 seconds might not be the best, but the only way of knowing is to try, I guess. I'll try to spend some time on this repo and will ping you once I've fixed the DataLoader problem.
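In the meantime, a possible stopgap (not part of the repo) is a custom collate_fn that trims every tensor in a batch to the shortest length before stacking; the dict-of-tensors item layout is again an assumption about what the dataset's __getitem__ returns:

    import torch

    # Trim every tensor in the batch to the shortest length along its last
    # dimension before stacking, so chunks that differ by one hop still collate.
    def trim_collate(batch):
        out = {}
        for key in batch[0]:
            values = [item[key] for item in batch]
            if torch.is_tensor(values[0]) and values[0].dim() >= 1:
                min_len = min(v.shape[-1] for v in values)
                out[key] = torch.stack([v[..., :min_len] for v in values])
            else:
                out[key] = values  # leave non-tensor fields (e.g. paths) as a list
        return out

    # Hypothetical usage with the repo's dataset object:
    # loader = DataLoader(dataset, batch_size=8, shuffle=True, collate_fn=trim_collate)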

migperfer commented 10 months ago

Hi! Sorry for the delay. It should work now as of commit b432e2e.