Closed leanderme closed 2 years ago
Hi,
Yes, as you mentioned, the whole pipeline for processing the label is roughly: (1001, 64) -> (1024, 64) -> (small dim, 527) -> (1024, 527).
So first, you need to decide how to turn your original (1001, 64) input into (1024, 64). In my implementation in htsat.py, in the reshape_wav method, there are two ways: (1) zero-pad from 1001 to 1024 frames, or (2) interpolate along the time axis. I believe it currently uses (2).
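A minimal sketch of the two options described above, assuming a PyTorch mel spectrogram of shape (batch, frames, mel_bins); the function name and shapes here are illustrative, not the actual htsat.py code:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch (not the actual reshape_wav code): two ways to turn
# a (B, 1001, 64) mel spectrogram into the (B, 1024, 64) the model expects.
def reshape_spec(x, target_frames=1024):
    b, t, f = x.shape
    # (1) zero-pad along the time axis: frames t..target_frames-1 are silence
    padded = F.pad(x, (0, 0, 0, target_frames - t))
    # (2) linear interpolation along the time axis, stretching 1001 -> 1024
    interp = F.interpolate(
        x.permute(0, 2, 1),          # (B, 64, 1001) for 1D interpolation
        size=target_frames,
        mode="linear",
        align_corners=False,
    ).permute(0, 2, 1)               # back to (B, 1024, 64)
    return padded, interp
```

With padding, every frame index past 1000 carries no audio; with interpolation, frame positions are slightly rescaled but all frames carry information, which is why the two options need different label handling below.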
So if you use the (1) padding, frames 1001 to 1024 are just zeros (i.e. carry no information). In that case, after you get the (1024, 527) output from HTS-AT, I would recommend simply trimming it back to 1001 frames and computing the BCE loss on that.
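A short sketch of the trim-then-BCE idea for option (1); `framewise_output` and `strong_label` are placeholder names, and the output is assumed to already be probabilities:

```python
import torch
import torch.nn.functional as F

# Option (1) sketch: if the input was zero-padded from 1001 to 1024 frames,
# drop the padded frames from the model output before computing BCE.
def padded_bce_loss(framewise_output, strong_label):
    # framewise_output: (B, 1024, 527) probabilities from the model
    # strong_label:     (B, 1001, 527) per-frame strong targets
    n_frames = strong_label.shape[1]
    trimmed = framewise_output[:, :n_frames, :]  # (B, 1001, 527)
    return F.binary_cross_entropy(trimmed, strong_label)
```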
If you use the (2) interpolation, the original input is interpolated from 1001 to 1024 frames, so every frame contains audio information. You can then do exactly what your out_frames method does: map the strong labels from 1001 to 1024 frames as well, which is the correct approach.
Between these two methods, I would recommend (2), and your out_frames method definitely does the right thing.
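A sketch of what an out_frames-style label mapping for option (2) could look like; the function name and shapes are illustrative, not the questioner's actual code. Nearest-neighbor interpolation is used so the targets stay binary:

```python
import torch
import torch.nn.functional as F

# Option (2) sketch: stretch the strong labels from 1001 to 1024 frames so
# they align with the model's framewise output, then compute BCE directly.
def interpolated_bce_loss(framewise_output, strong_label):
    # framewise_output: (B, 1024, 527) probabilities
    # strong_label:     (B, 1001, 527) binary per-frame targets
    target = F.interpolate(
        strong_label.permute(0, 2, 1),    # (B, 527, 1001)
        size=framewise_output.shape[1],   # 1024
        mode="nearest",                   # keeps the labels 0/1
    ).permute(0, 2, 1)                    # (B, 1024, 527)
    return F.binary_cross_entropy(framewise_output, target)
```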
Hi, thank you for sharing this!
I'm trying to use HTS-AT for SED with strong labels, i.e. with known onset and offset times. I have found that with the default config, the input shape is
(batch_size, 1001, 527)
in the case of AudioSet, whereas the framewise output has shape
(batch_size, 1024, 527),
as implemented in the forward_features method of the HTSAT_Swin_Transformer class. Now I wonder what the best strategy would be for computing the loss between the framewise output and the target labels. Normally, I would just generate a target label with the same number of timesteps as the input spectrogram and then optimize the BCE.
So the question is: would you, in this case, resize the framewise output to match the input timesteps and then proceed as described above? Or is there a better way?
To be more specific, would something like this make sense:
If the timesteps of the spectrogram and the framewise output were the same, I would normally calculate frame_start and frame_end from the onset and offset times. In the image below, the timestep mismatch relative to the input spectrogram is visualized.
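The original code snippet for the frame_start/frame_end computation was not preserved here, but a hypothetical sketch could look like the following. The 32 kHz sample rate and hop size of 320 (i.e. 100 frames per second) match common AudioSet configs for HTS-AT, but are assumptions, not the elided code:

```python
# Hypothetical sketch: map onset/offset times (seconds) to frame indices.
# SAMPLE_RATE and HOP_SIZE are assumed values, not taken from the thread.
SAMPLE_RATE = 32000
HOP_SIZE = 320
FRAMES_PER_SEC = SAMPLE_RATE / HOP_SIZE  # 100 frames per second

def to_frames(onset_sec, offset_sec, n_frames=1001):
    # Clamp to the valid frame range so late offsets do not overflow.
    frame_start = min(int(round(onset_sec * FRAMES_PER_SEC)), n_frames - 1)
    frame_end = min(int(round(offset_sec * FRAMES_PER_SEC)), n_frames)
    return frame_start, frame_end
```

The frames in `[frame_start, frame_end)` would then be set to 1 in the (1001, 527) strong-label matrix for the corresponding class.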
Any help is greatly (!) appreciated and thanks again for sharing your code!