macanv / BERT-BiLSTM-CRF-NER

TensorFlow solution for the NER task using a BiLSTM-CRF model with Google BERT fine-tuning, plus a private server service
https://github.com/macanv/BERT-BiLSMT-CRF-NER
4.68k stars 1.25k forks

Hello, how can the BERT layers and the CRF layer be trained with different learning rates? #306

Closed hantaozi closed 4 years ago

hantaozi commented 4 years ago

Because a learning rate of around 1e-5 feels too small for the CRF layer.

macanv commented 4 years ago

You could go ask 苏神 about it and use his CRF implementation.

remonly commented 4 years ago

Has the OP found a solution? I'm running into the same problem here.

remonly commented 4 years ago

> Has the OP found a solution? I'm running into the same problem here.

Solved it: modify BERT's optimization file as follows:

```python
tvars = tf.trainable_variables()
grads = tf.gradients(loss, tvars)

# Apply different learning rates, and clip only at the low learning rate
(grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)

new_grads = []
for i in range(len(tvars)):
    grad = grads[i]
    var = tvars[i]
    var_name = var.name
    print("var_name:" + var_name)
    if "fast_lr" in var_name:
        print("fast_lr layer: " + var_name)
        grad = grad * 100
    new_grads.append(grad)

train_op = optimizer.apply_gradients(
    zip(new_grads, tvars), global_step=global_step)
```
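For orientation: this appears to be a patched version of the `create_optimizer` function in BERT's `optimization.py`, whose stock code clips the gradients and then calls `optimizer.apply_gradients(zip(grads, tvars), global_step=global_step)` directly. Because `tf.clip_by_global_norm` runs before the 100x scaling, the "fast_lr" gradients are first clipped together with everything else and only then scaled up, which is what the parenthetical comment about clipping only at the low learning rate refers to.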

hantaozi commented 4 years ago

> Solved it: modify BERT's optimization file as follows (patch quoted above).

Hi, could you explain this in a bit more detail? How well did it work?

remonly commented 4 years ago

It just modifies the gradient-update code in the BERT framework so that the gradients of the layers that need a larger learning rate are multiplied by a fixed factor. As for the effect, I didn't see any improvement on my side.
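One possible reason the gradient-scaling trick shows little effect (a hedged guess, not something remonly confirms) is that BERT's optimizer is an Adam variant, and Adam-style updates are largely insensitive to rescaling a variable's gradient by a constant factor, because the first- and second-moment estimates scale together. If genuinely different learning rates are wanted, a common alternative is to run two optimizers over two variable groups. A minimal TF1-style sketch follows; the scope names, shapes, and dummy loss are illustrative only, not taken from this repository:

```python
import tensorflow as tf  # TF1.x graph-mode API, as used by this repository

# Illustrative sketch only: two optimizers, one per variable group, so the
# BERT side and the CRF side train with different learning rates.
# The scopes, shapes, and dummy loss below are NOT taken from this repo.
with tf.variable_scope("bert"):
    w_bert = tf.get_variable("w", shape=[4, 4])
with tf.variable_scope("crf"):
    w_crf = tf.get_variable("transitions", shape=[4, 4])

loss = tf.reduce_sum(tf.square(w_bert)) + tf.reduce_sum(tf.square(w_crf))

tvars = tf.trainable_variables()
bert_vars = [v for v in tvars if v.name.startswith("bert")]
other_vars = [v for v in tvars if not v.name.startswith("bert")]

grads = tf.gradients(loss, bert_vars + other_vars)
grads, _ = tf.clip_by_global_norm(grads, clip_norm=1.0)

global_step = tf.train.get_or_create_global_step()
bert_opt = tf.train.AdamOptimizer(learning_rate=1e-5)   # small LR for BERT
other_opt = tf.train.AdamOptimizer(learning_rate=1e-3)  # larger LR for CRF layers

train_op = tf.group(
    bert_opt.apply_gradients(zip(grads[:len(bert_vars)], bert_vars)),
    other_opt.apply_gradients(zip(grads[len(bert_vars):], other_vars),
                              global_step=global_step),
)
```

With this split, each variable group really does get its own learning rate, instead of one optimizer whose incoming gradients are rescaled.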

hantaozi commented 4 years ago

Where is fast_lr set?

remonly commented 4 years ago

It's the name of the network layers whose learning rate you want to increase.
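In other words, "fast_lr" is just a substring that the patched optimizer looks for in each variable's name, so the layers to be sped up must be created under a scope containing it. A minimal hypothetical sketch, where the scope, variable name, and label count are made up for illustration:

```python
import tensorflow as tf  # TF1.x graph-mode API

num_labels = 9  # illustrative label count, not from this repo

# Hypothetical example: any variable created under this scope gets a name like
# "fast_lr/transitions:0", so the `if "fast_lr" in var_name` check in the patched
# optimization file matches it and multiplies its gradient by 100.
with tf.variable_scope("fast_lr"):
    transition_params = tf.get_variable(
        "transitions",
        shape=[num_labels, num_labels],
        initializer=tf.zeros_initializer())

print(transition_params.name)  # prints "fast_lr/transitions:0"
```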

remonly commented 4 years ago

Sorry to bother you, but could you please delete the email information? Thanks...