Open wzds2015 opened 7 years ago
The keys of the batches dictionary represent the sequence length (in terms of time steps). I'm not quite sure I understand the question, but the idea is to batch sequences of the same time-step length together so we can train those batches together (instead of a batch being one sequence). Sorry for the late reply!
In the batch_data function and the run_epoch function, I can see something like `batches[num_time_steps].append(something)`.
My understanding is that the keys of the batches dictionary represent different sequences (different pieces of music). In that case, using num_time_steps as the key would assume every piece of music produces a different number of batches (e.g. music length = 135, batch length = 10, then num_time_steps = 135 / 10 = 13 with integer division).
I doubt that we can guarantee this assumption. Or maybe I am misunderstanding something?
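The bucketing scheme described in the reply can be sketched roughly as follows. This is a minimal illustration, not the repo's actual implementation; the function name `batch_data` matches the thread, but its signature and the helper logic here are assumptions.

```python
from collections import defaultdict

def batch_data(sequences, batch_size):
    """Bucket sequences by their time-step length, then split each
    bucket into batches of at most batch_size sequences.

    The dictionary key is the sequence length itself, so every batch
    contains only sequences of identical length -- this is why
    num_time_steps works as a key regardless of how many pieces of
    music there are or how long each one is.
    """
    # Group sequences that share the same number of time steps.
    buckets = defaultdict(list)
    for seq in sequences:
        buckets[len(seq)].append(seq)

    # Within each bucket, chunk the sequences into fixed-size batches.
    batches = defaultdict(list)
    for num_time_steps, seqs in buckets.items():
        for i in range(0, len(seqs), batch_size):
            batches[num_time_steps].append(seqs[i:i + batch_size])
    return batches

# Example: three length-2 sequences and one length-3 sequence.
data = [[1, 2], [3, 4], [5, 6], [7, 8, 9]]
batches = batch_data(data, batch_size=2)
# batches[2] -> [[[1, 2], [3, 4]], [[5, 6]]]
# batches[3] -> [[[7, 8, 9]]]
```

So the key is not "which piece of music", but "how many time steps": two different songs that happen to be the same length land in the same bucket, and one long song never needs its own key.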