YuanGongND / psla

Code for the TASLP paper "PSLA: Improving Audio Tagging With Pretraining, Sampling, Labeling, and Aggregation".
BSD 3-Clause "New" or "Revised" License

Output of MHA EfficientNet model #8

Open haoheliu opened 1 year ago

haoheliu commented 1 year ago

Hi Yuan,

Thanks for open-sourcing this repo. I have a quick question about the MHA EfficientNet model you proposed. When I tried the EfficientNet-B2 model with multi-head attention, I found that some values in the `out` variable were greater than one rather than between 0 and 1. Is that intentional?

Many Thanks

YuanGongND commented 1 year ago

Hi Haohe,

Thanks for reaching out.

It has been a while since I coded the model, so I might be wrong.

In the PSLA paper, Figure 2 caption, we said "We multiply the output of each branch element-wise and apply a temporal mean pooling (implemented by summation)", which is reflected in

https://github.com/YuanGongND/psla/blob/7f8fafa23ef707ad63c4f3965ea1a3f0a4bb1bff/src/models/HigherModels.py#L165

I guess if you change it to `x = (torch.stack(x_out, dim=0)).mean(dim=0)`, the range should be smaller than 1. If you just take a pretrained model and change this line of code at inference time, it should not change the result (mAP). But if you change this line for training, you might not get the same result as ours, since it scales the output and the loss.
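For concreteness, here is a minimal sketch of the two aggregation variants, assuming `x_out` is a list of per-head outputs that each lie in [0, 1] (the shapes here are hypothetical, not the repository's exact code):

```python
import torch

# Hypothetical: 4 attention heads, batch of 2, 527 AudioSet classes;
# each head's pooled output is assumed to lie in [0, 1].
x_out = [torch.rand(2, 527) for _ in range(4)]

# Summation over heads, as described in the Figure 2 caption:
# the result can exceed 1.
out_sum = torch.stack(x_out, dim=0).sum(dim=0)

# Mean over heads instead: stays in [0, 1]. Since mean = sum / 4,
# class rankings (and hence mAP) are identical at inference time.
out_mean = torch.stack(x_out, dim=0).mean(dim=0)

assert torch.allclose(out_mean * len(x_out), out_sum)
```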

Please let me know what you think.

-Yuan

astrocyted commented 1 year ago

Hi Yuan,

I would like to dig deeper into this issue, because I don't think it is about whether you use mean(dim=0) or sum(dim=0) when aggregating the output of the attention heads. The issue is that `self.head_weight` is an unbounded parameter:

https://github.com/YuanGongND/psla/blob/7f8fafa23ef707ad63c4f3965ea1a3f0a4bb1bff/src/models/HigherModels.py#L132

and it could end up at any value, since you constrain it neither explicitly (e.g., by normalization) nor implicitly (through regularization terms). I was therefore really surprised to see the weights of all four heads come out below 1 in your pretrained model release.
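To illustrate the concern, a toy sketch of why an unconstrained per-head weight can push the aggregated output above 1 (the shapes, the example weight values, and the exact way the weights are applied are hypothetical here, not the repository's code):

```python
import torch

# Hypothetical per-head pooled probabilities, each in [0, 1]:
# 4 heads, batch of 2, 527 classes.
heads = torch.rand(4, 2, 527)

# Stand-in for the unconstrained nn.Parameter: nothing in the loss
# or the code prevents these weights from growing arbitrarily large.
head_weight = torch.tensor([0.9, 2.5, 0.7, 1.4])  # arbitrary example

# Weighted sum over heads: the output can easily exceed 1 even though
# every individual head emits a probability.
out = (head_weight[:, None, None] * heads).sum(dim=0)
print(out.max().item())  # typically well above 1
```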

That said, you do clamp the output of the network to [0, 1] before passing it to BCELoss: https://github.com/YuanGongND/psla/blob/7f8fafa23ef707ad63c4f3965ea1a3f0a4bb1bff/src/traintest.py#L103

So rather than using a smooth, squashing activation function such as a sigmoid at the very end of the model, you are (whether intended or not) using a troublesome piecewise-linear clamp:

[figure: plot of the clamp nonlinearity $y = \min(\max(x, 0), 1)$, which is flat outside $[0, 1]$]

This means that unless you initialize the model's parameters very carefully and use a very small learning rate, training can stall whenever the output goes above one or below zero (zero gradient there). I have not tried to train your model from scratch, but it must have been quite tricky, if not very difficult.
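A quick sanity check of the dead-gradient behavior, looking at the clamp in isolation rather than the repository's training loop:

```python
import torch

x = torch.tensor([-0.5, 0.3, 1.5], requires_grad=True)
y = torch.clamp(x, 0.0, 1.0)
y.sum().backward()

# The gradient is 1 only inside [0, 1]; outside, it is exactly 0, so a
# prediction that drifts past either boundary receives no learning signal.
print(x.grad)  # tensor([0., 1., 0.])
```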

So, do you have an explanation for this particular design choice, i.e., clamping rather than using a smooth activation function, or avoiding the need for any final activation altogether by constraining the head weights?

astrocyted commented 1 year ago

On a different note, I see you normalize the attention values across the temporal axis: https://github.com/YuanGongND/psla/blob/7f8fafa23ef707ad63c4f3965ea1a3f0a4bb1bff/src/models/HigherModels.py#L162

This would seemingly encourage the model to attend to a single temporal unit (in the output layer) at the expense of not attending to the other temporal slices. Given that many events are dynamic and extend over more than a single unit of time, especially in the event-dense AudioSet recordings, what is the inductive bias behind this choice?

Furthermore, to obtain these normalized attention values for each head, you first pass them through a sigmoid function and then normalize them by dividing by the sum: https://github.com/YuanGongND/psla/blob/7f8fafa23ef707ad63c4f3965ea1a3f0a4bb1bff/src/models/HigherModels.py#L162

Is there any particular reason for this choice of sigmoid followed by normalization-by-sum over the more mainstream approach of applying a softmax to the attention values directly? The two are of course not equivalent: softmax depends only on the differences between values, i.e. $X_i - X_j$, whereas your version also depends on the absolute values of the $X_i$.
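For reference, a toy comparison of the two normalizations on made-up attention logits (not the repository's exact code path):

```python
import torch

logits = torch.tensor([1.0, 2.0, 3.0])

# Sigmoid, then divide by the sum along the (here: only) axis.
sig = torch.sigmoid(logits)
sig_norm = sig / sig.sum()

# Mainstream alternative: softmax directly on the logits.
soft = torch.softmax(logits, dim=0)

# Softmax is shift-invariant: adding a constant changes nothing.
print(torch.allclose(torch.softmax(logits + 5.0, dim=0), soft))  # True

# The sigmoid + sum-normalization version is not: shifting all logits
# by the same constant changes the resulting weights.
sig_shift = torch.sigmoid(logits + 5.0)
print(sig_shift / sig_shift.sum())  # differs from sig_norm
```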

YuanGongND commented 1 year ago

Hi there,

Thanks so much for your questions. I need some time to think about them. The main model architecture is from a previous paper: http://groups.csail.mit.edu/sls/archives/root/publications/2019/LoganFord_Interspeech-2019.PDF.

> This means that unless you initialize the model's parameters very carefully and use a very small learning rate, training can stall whenever the output goes above one or below zero (zero gradient there). I have not tried to train your model from scratch, but it must have been quite tricky, if not very difficult.

But before that, I want to clarify that we do not cherry-pick random seeds or successful runs at all. All experiments are run 3 times and we report the mean, which should be reproducible with the provided code. In the paper, we show the variance is quite small. Your proposed "more reasonable" solution might lead to more stable optimization and perhaps better results. Have you tried it?

-Yuan