CASIA-IVA-Lab / FLAP

[AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models
https://arxiv.org/abs/2312.11983
Apache License 2.0

Question on the calculation of W_metric for 'self_attn.o_proj' in prune_flap() #6

Closed · liuxiaozhu01 closed this 6 months ago

liuxiaozhu01 commented 6 months ago

Thanks for your inspiring work! I have a small question about the W_metric for self_attn.o_proj in prune_flap(): it is squared, while the W_metric for mlp.down_proj is computed without the square.

        for name in subset:
            if name == 'self_attn.o_proj':
                W_metric = metrics[args.metrics](wrapped_layers, subset, name) ** 2    # square is needed
                if args.structure == "UL-UM":
                    W_metric = W_metric.reshape(-1, 128).sum(dim=1)
                    thresh = torch.sort(W_metric.cuda())[0][int(args.pruning_ratio*layer.self_attn.num_heads)].cpu()
                    W_mask = (W_metric>=thresh)
                    attn_mask.append(W_mask)
                elif args.structure == "UL-MM":
                    W_metric = W_metric.reshape(-1, 128).sum(dim=1)
                    thresh = torch.sort(W_metric.cuda())[0][args.remove_heads // len(layers)].cpu()
                    W_mask = (W_metric>=thresh)
                    attn_mask.append(W_mask)
                else:
                    attn_metric_list.append(W_metric.cpu())
                attn_baseline_inp_list.append(wrapped_layers[name].baseline_inp.type(torch.half))
            else:
                W_metric = metrics[args.metrics](wrapped_layers, subset, name)    # no square
                if args.structure == "UL-UM":
                    thresh = torch.sort(W_metric.cuda())[0][int(W_metric.numel()*args.pruning_ratio)].cpu()
                    W_mask = (W_metric>=thresh)
                    mlp_mask.append(W_mask)
                elif args.structure == "UL-MM":
                    thresh = torch.sort(W_metric.cuda())[0][cal_remove_neuron(args, model)].cpu()
                    W_mask = (W_metric>=thresh)
                    mlp_mask.append(W_mask)
                else:
                    mlp_metric_list.append(W_metric.cpu())
                mlp_baseline_inp_list.append(wrapped_layers[name].baseline_inp.type(torch.half))
            wrapped_layers[name].free()

I'm really confused. Could you help me out?

an-yongqi commented 6 months ago

Hi, @liuxiaozhu01. Thank you for your interest in our work! Your observation about this implementation detail is correct. I apply the square to the attn_metric but not to the mlp_metric because I want to aggregate the group channel metrics of each attention head with an L2 norm; the aggregation is the line "W_metric = W_metric.reshape(-1, 128).sum(dim=1)", which sums the squared values within each head. Ideally, this would be followed by a square root to complete the L2 norm, but I omit it because the square root is monotonic and therefore does not change the ordering of the elements, which is all that matters here. This design comes from our experimental observations and theoretical considerations, which suggest that treating these components differently yields better pruning results.
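
For intuition, here is a minimal sketch (not the actual repository code; the tensor names and values are illustrative, with head_dim = 128 as in the snippet above) of why the square root can be dropped:

    import torch

    # Hypothetical per-channel fluctuation scores for one layer.
    num_heads, head_dim = 4, 128
    channel_metric = torch.rand(num_heads * head_dim)

    # Square, then sum within each head -> squared L2 norm per head.
    head_score_sq = (channel_metric ** 2).reshape(num_heads, head_dim).sum(dim=1)

    # The "full" L2 norm would add a square root, but sqrt is monotonic,
    # so the ranking of heads (and hence the pruning decision) is unchanged.
    head_score_l2 = head_score_sq.sqrt()
    assert torch.equal(head_score_sq.argsort(), head_score_l2.argsort())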

I hope this explanation clarifies the reasoning behind our implementation choices. Should you have any further questions or need additional details, feel free to ask!

liuxiaozhu01 commented 6 months ago

Thanks for your reply! I think I get your idea.

From my understanding, the WIFV corresponds to Eq. (5) in your paper. After the group channel metrics of each attention head are aggregated, the per-head metric (attn_metric in the code) is actually the squared value, while the MLP's metric (mlp_metric in the code) is not. Thanks to the standardization, attn_metric and mlp_metric end up on the same scale, so a global sort (i.e., concatenating and sorting them together) is reasonable.

Is that a correct understanding?

an-yongqi commented 6 months ago


Yes, that is correct. To elaborate, the most meticulous implementation would first standardize the attn_metric, then square it, then aggregate the group channel metrics of each attention head, take the square root of the result, and finally perform a combined sort of the refined attn_metric and the mlp_metric.
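
For concreteness, a rough sketch of that ordering (names and shapes are illustrative, and it is simplified to a single layer rather than the per-layer standardization used in the repository):

    import torch

    # Hypothetical per-channel metrics for one layer.
    num_heads, head_dim, mlp_size = 4, 128, 512
    attn_channel = torch.rand(num_heads * head_dim)
    mlp_channel = torch.rand(mlp_size)

    # Simple standardization to put both metrics on a comparable scale.
    standardize = lambda x: (x - x.mean()) / (x.std() + 1e-8)

    # 1) standardize, 2) square, 3) aggregate per head, 4) square root.
    attn_head_metric = standardize(attn_channel).pow(2).reshape(num_heads, head_dim).sum(dim=1).sqrt()
    mlp_neuron_metric = standardize(mlp_channel)

    # 5) combined (global) sort of head scores and neuron scores
    #    to pick a single pruning threshold.
    sorted_scores, _ = torch.sort(torch.cat([attn_head_metric, mlp_neuron_metric]))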

In practice, the performance difference between the two is minimal, largely because standardization normalizes both metrics to the same scale, which justifies the global sorting procedure. I appreciate your astute observation.

liuxiaozhu01 commented 6 months ago

It's been very rewarding. Thank you for your generous help!