Closed liuxiaozhu01 closed 6 months ago
Hi, @liuxiaozhu01. Thank you for your interest in our work! Your observation about the implementation detail is correct. The reason I apply a square operation to the attn_metric but not to the mlp_metric is that I aim to aggregate the group channel metrics of each attn_head using the L2 norm, which is represented by the code snippet "W_metric = W_metric.reshape(-1, 128).sum(dim=1)". Ideally, this operation would be followed by a square root to complete the L2 norm calculation. However, I omitted the square root step since it does not affect the ordering of the elements, which is our primary concern in this context. This design stems from our experimental observations and theoretical considerations, which suggest that treating these components differently yields better pruning results.
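The monotonicity argument above can be checked directly. This is a minimal sketch in NumPy (the actual repository uses PyTorch and a head dimension of 128; the small shapes here are only for illustration): squaring then summing per head group gives the squared L2 norm, and applying the square root does not change the ranking of heads.

```python
import numpy as np

rng = np.random.default_rng(0)
head_dim = 4                            # the real code uses 128
W_metric = rng.random(3 * head_dim)     # per-channel importance scores

# Square, then sum within each head group -> squared L2 norm per head,
# mirroring W_metric.reshape(-1, 128).sum(dim=1) after squaring
squared_l2 = (W_metric ** 2).reshape(-1, head_dim).sum(axis=1)

# The "complete" L2 norm would add a square root...
l2 = np.sqrt(squared_l2)

# ...but sqrt is monotonic, so the head ordering is identical
assert (np.argsort(squared_l2) == np.argsort(l2)).all()
```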
I hope this explanation clarifies the reasoning behind our implementation choices. Should you have any further questions or need additional details, feel free to ask!
Thanks for your reply! I think I get your idea.
From my understanding, the WIFV corresponds to Eq. (5) in your paper. After the group channel metrics of each attn_head are aggregated, the metric for every attn_head (attn_metric in the code) is effectively squared, while the MLP's (mlp_metric in the code) is not. Due to the standardization, attn_metric and mlp_metric are on the same scale, so a global sort (i.e., concatenate and sort) is reasonable.
Is that a correct understanding?
Yes, it is correct. To elaborate, the most meticulous implementation would indeed entail an initial standardization of the attn_metric. Following this, we would proceed to square the attn_metric values. Next, we'd aggregate the group channel metrics derived from attn_head and take the square root of the resulting value. Finally, we'd carry out a combined sorting of the refined attn_metric and mlp_metric.
In practice, the performance impact resulting from the differences between the two is minimal. This is largely due to the fact that standardization ensures both metrics are normalized to the same scale, which validates the use of a global sorting procedure. I appreciate your astute observation.
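The "meticulous" ordering of operations described above can be sketched as follows. This is an illustration in NumPy, not the repository's exact code: the standardize helper, the metric shapes, and the small head dimension are all assumptions made for a self-contained example.

```python
import numpy as np

def standardize(x):
    # Hypothetical helper: zero-mean, unit-variance scaling
    return (x - x.mean()) / (x.std() + 1e-8)

rng = np.random.default_rng(1)
head_dim = 4                              # 128 in the actual model
attn_metric = rng.random(2 * head_dim)    # per-channel scores for attention
mlp_metric = rng.random(6)                # per-channel scores for the MLP

# 1) standardize, 2) square, 3) aggregate per head, 4) square root
attn_head_metric = np.sqrt(
    (standardize(attn_metric) ** 2).reshape(-1, head_dim).sum(axis=1)
)

# 5) joint (global) sort of attention-head and MLP-channel scores
combined = np.concatenate([attn_head_metric, standardize(mlp_metric)])
order = np.argsort(combined)              # ascending: prune lowest first
```

Because both metrics are standardized before the joint sort, the concatenation compares them on a common scale, which is what justifies the global sorting step.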
It's been very rewarding. Thank you for your generous help!
Thanks for your inspiring work! I have a small question about the W_metric for self_attn.o_proj in prune_flap(): there is a square operation, while the W_metric for mlp.down_proj is computed differently.
I'm really confused. Could you help me out?