All strategies have been implemented! With #18 it looks like we won't even need them, since we're no longer forcing the network to memorize target signal amplitudes.
WaveUNet overfits. How do we prevent overfitting?
Data augmentation strategy
3 tasks, 3 dataloaders
1/ mix: premixed signal from the challenge.
2/ remix: mix on the fly while training, frozen mix at validation :warning: we cannot find the right mixing rule yet
3/ denoise: create our own mixing rule; the signal stays constant, the noise is modulated
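The denoise task above (clean signal fixed, noise modulated) could look like the following minimal sketch. The function name, the SNR-based gain rule, and the SNR range are illustrative assumptions, not the project's actual mixing rule:

```python
import numpy as np

def denoise_example(signal, noise, snr_db_range=(-5.0, 10.0), rng=None):
    """Build a noisy network input by modulating ONLY the noise.

    The clean signal (the target) is never scaled; the noise gain is
    drawn so the mix lands at a random SNR inside snr_db_range.
    All values here are hypothetical placeholders.
    """
    rng = rng or np.random.default_rng()
    snr_db = rng.uniform(*snr_db_range)
    sig_pow = np.mean(signal ** 2)
    noi_pow = np.mean(noise ** 2)
    # gain such that sig_pow / (gain^2 * noi_pow) == 10^(snr_db / 10)
    gain = np.sqrt(sig_pow / (noi_pow * 10 ** (snr_db / 10)))
    return signal + gain * noise, signal  # (network input, target)
```

Because the target is returned untouched, the network never has to memorize target amplitudes under this rule.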
1/ Mix (=premixed)
[x] augment by scaling all signals (target, noise, mix)
[x] augment by trimming mini-batches
[x] augment by adding a bit of Gaussian Noise
:bulb: other ideas for augmentation:
add clicks, harmonics, clamp the signal ...
mask out (including target and mix)
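The three implemented augmentations for the premixed task could be sketched like this (function name, gain range, crop length, and noise level are illustrative assumptions, not the project's actual values):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(target, noise, mix, crop_len=16384):
    """Apply the three implemented augmentations to one example.

    All parameter values here are hypothetical placeholders.
    """
    # 1/ scale target, noise, and mix by the SAME random gain so the
    #    relative amplitudes (and thus the premix rule) are preserved
    gain = rng.uniform(0.5, 1.5)
    target, noise, mix = gain * target, gain * noise, gain * mix

    # 2/ trim: random crop, so each epoch sees different windows
    start = rng.integers(0, len(mix) - crop_len + 1)
    sl = slice(start, start + crop_len)
    target, noise, mix = target[sl], noise[sl], mix[sl]

    # 3/ add a bit of Gaussian noise to the network input only
    mix = mix + rng.normal(0.0, 1e-3, size=mix.shape)
    return target, noise, mix
```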
2/ Remix :lock:
= mix data on the fly, frozen mix at validation & test time :x: :lock: Blocked until we understand how to remix
[x] sample from a much bigger range of random coefficients (~3-10) :x: not enough (exp 6)
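The on-the-fly remix with random coefficients could be sketched as follows; freezing the mix at validation/test time then amounts to passing a fixed-seed generator. The function name, the coefficient range, and the seeding convention are illustrative assumptions, not the project's actual code:

```python
import numpy as np

def remix(target, noise, coeff_range=(3.0, 10.0), rng=None):
    """Mix target and noise on the fly with random gains.

    Training: pass a fresh rng each call -> a new mix every epoch.
    Validation/test: pass a fixed-seed rng -> a frozen mix.
    coeff_range (~3-10) is the range tried in exp 6; all names here
    are hypothetical placeholders.
    """
    rng = rng or np.random.default_rng()
    a = rng.uniform(*coeff_range)  # gain on the target
    b = rng.uniform(*coeff_range)  # gain on the noise
    mix = a * target + b * noise
    return a * target, b * noise, mix

# frozen validation mix: the same seed yields the same mix every epoch
val_rng = np.random.default_rng(42)
```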
3/ Denoise
[ ] Dropout
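Dropout, the regularizer listed for the denoise setup, can be sketched in its standard inverted form (this is a generic numpy illustration, not the actual WaveUNet code):

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: during training, zero each activation with
    probability p and scale survivors by 1/(1-p) so the expected
    activation is unchanged; at eval time it is the identity."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```

Because the scaling happens at training time, no rescaling is needed at inference, which keeps the eval path untouched.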