I have implemented f1_loss with specific class weights (the idea being to penalize errors on the classes with the fewest samples more heavily), then combined it with a binary_cross_entropy loss.
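For reference, a minimal sketch of what I mean by this combination, assuming a PyTorch multi-label setup; the `class_weights` values and the mixing factor `alpha` are illustrative, not my exact settings:

```python
import torch
import torch.nn.functional as F

def soft_f1_loss(logits, targets, class_weights=None, eps=1e-7):
    """Differentiable (soft) F1 loss, computed per class and then averaged
    with optional per-class weights so rare classes contribute more."""
    probs = torch.sigmoid(logits)                # (batch, num_classes)
    tp = (probs * targets).sum(dim=0)            # soft true positives per class
    fp = (probs * (1 - targets)).sum(dim=0)      # soft false positives per class
    fn = ((1 - probs) * targets).sum(dim=0)      # soft false negatives per class
    f1 = 2 * tp / (2 * tp + fp + fn + eps)       # soft F1 per class
    loss_per_class = 1.0 - f1
    if class_weights is not None:
        return (loss_per_class * class_weights).sum() / class_weights.sum()
    return loss_per_class.mean()

def combined_loss(logits, targets, class_weights, alpha=0.5):
    """Weighted sum of the soft-F1 loss and binary cross-entropy."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    f1 = soft_f1_loss(logits, targets, class_weights)
    return alpha * f1 + (1.0 - alpha) * bce
```

Here `alpha` controls how much the F1 term dominates over the BCE term.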
The features used for training are MFCCs (not log-scaled mel spectrograms).
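The feature extraction looks roughly like the following sketch, assuming librosa; the sample rate and `n_mfcc` value are placeholders, not my exact parameters:

```python
import librosa

def extract_mfcc(path, sr=16000, n_mfcc=20):
    """Load an audio file and compute MFCC features, shaped (frames, coefficients)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T
```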
The predictions are good for the majority classes (0.0 = silence and 2.0 = two people discussing), but the results on the minority class are not so good.