May I use a BERT-like model to load the parameters of pre-trained SDCUP, and then add a task head on top for table QA?
When I inspect the pre-trained SDCUP checkpoint, can I ignore parameters such as: "mlp_action1.linear.weight", "mlp_action1.linear.bias", "mlp_action2.linear.weight", "mlp_action2.linear.bias", "mlp_column1.linear.weight", "mlp_column1.linear.bias", "mlp_column2.linear.weight", "mlp_column2.linear.bias", "mlp_column1_single.linear.weight", "mlp_column1_single.linear.bias", "mlp_column2_single.linear.weight", "mlp_column2_single.linear.bias", "layer_norm_1.gamma", "layer_norm_1.beta", "layer_norm_2.gamma", "layer_norm_2.beta", "layer_norm_3.gamma", "layer_norm_3.beta"? Are these useful for fine-tuning?
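For context, a minimal sketch of how one might drop those pre-training head parameters before loading the rest into a BERT-like backbone. The prefix list below is an assumption inferred from the key names above, and the `strict=False` call mentioned at the end is the usual PyTorch pattern for loading a partial state dict, not something SDCUP-specific:

```python
# Assumed prefixes of the pre-training head parameters, taken from the
# key names listed in the question above.
HEAD_PREFIXES = (
    "mlp_action",
    "mlp_column",
    "layer_norm_1",
    "layer_norm_2",
    "layer_norm_3",
)


def filter_backbone_params(state_dict):
    """Return a copy of the state dict without the pre-training head keys."""
    return {
        name: value
        for name, value in state_dict.items()
        if not name.startswith(HEAD_PREFIXES)
    }


# Dummy strings stand in for real tensors in this illustration:
checkpoint = {
    "bert.encoder.layer.0.attention.self.query.weight": "tensor_a",
    "mlp_action1.linear.weight": "tensor_b",
    "layer_norm_1.gamma": "tensor_c",
}
backbone = filter_backbone_params(checkpoint)
print(sorted(backbone))
# → ['bert.encoder.layer.0.attention.self.query.weight']

# With a real PyTorch model you would then load the filtered dict, e.g.:
#   missing, unexpected = model.load_state_dict(backbone, strict=False)
```

The filtered keys are only the heads used for SDCUP's pre-training objectives; the encoder weights, which are what fine-tuning benefits from, are kept.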