-
【train】Epoch: 10/10 Step: 668/670 loss: 0.00103
【train】Epoch: 10/10 Step: 669/670 loss: 0.00121
【train】Epoch: 10/10 Step: 670/670 loss: 0.00136
[eval] precision=0.0341 recall=0.5020 f1_score=0.0639…
-
-
python src/run_transformer_ner.py \
--model_type xlnet \
--pretrained_model xlnet-base-cased \
--data_dir ./test_data/conll-2003 \
--new_model_dir ./new_bert_ner_model \
…
-
Hello, on my own dataset (about 150k training sentences, all non-nested entities) I compared the following two models:
(1) BERT + biaffine NER: the biaffine NER part uses the code you provided
(2) BERT + CRF: i.e. BERT followed by a fully-connected layer and a CRF layer
Preprocessing and the BERT settings are identical for both. On the test set (about 5k sentences), biaffine-ner scores 1.7% lower than BERT-CRF; both P and R dropped, especially R, which dropped by 2.5%. I suspect the cause is that no negative sam…
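For reference, the biaffine span scorer being compared can be sketched as follows. This is a minimal NumPy sketch of the standard biaffine formulation, score(i, j, c) = h_s[i]ᵀ U_c h_e[j] + linear terms over the start/end representations, not the repository's actual code; the names `biaffine_scores`, `U`, `W`, `b` are illustrative.

```python
import numpy as np

def biaffine_scores(h_start, h_end, U, W, b):
    """Score every (start, end, label) span triple.

    h_start, h_end: (n, d) start/end token representations
    U: (c, d, d) bilinear weights per label
    W: (c, 2d) linear weights over [h_start; h_end]
    b: (c,) label biases
    Returns: (n, n, c) span-label scores.
    """
    n, d = h_start.shape
    # bilinear term: h_start[i]^T U_c h_end[j] for every i, j, c
    bil = np.einsum('id,cde,je->ijc', h_start, U, h_end)
    # linear term, split across the start half and end half of W
    lin = (np.einsum('cd,id->ic', W[:, :d], h_start)[:, None, :]
           + np.einsum('cd,jd->jc', W[:, d:], h_end)[None, :, :])
    return bil + lin + b
```

At inference, a span (i, j) is typically kept only if its best non-null label score exceeds the null label's, which is where span-level negative examples matter.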
-
I'm sorry to bother you, but I couldn't replicate the same scores on the en_conll2003 dataset.
I only reached **92.12**, **1.4 lower** than yours.
I checked my dataset and ensured the labels are the same …
-
Hi, I'm trying to extract BERT features with `extract_bert_features.sh`. I find that the token features are extracted at the document level, which generates embeddings from a sequence of sentenc…
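One common way document-level extraction works is to build a context window of neighboring sentences around each target sentence, capped at a token budget, and then encode the whole window. A minimal sketch of that idea (the function name `doc_window` and the greedy expansion strategy are assumptions, not necessarily what the script does):

```python
def doc_window(sentences, idx, max_tokens=128):
    """Grow a token window around sentences[idx] by greedily adding
    whole neighboring sentences while the budget allows.

    sentences: list of token lists for one document
    Returns (window_tokens, offset) where offset is the index of the
    target sentence's first token inside the window.
    """
    window = list(sentences[idx])
    offset = 0
    left, right = idx - 1, idx + 1
    progressed = True
    while progressed:
        progressed = False
        if left >= 0 and len(window) + len(sentences[left]) <= max_tokens:
            window = list(sentences[left]) + window
            offset += len(sentences[left])
            left -= 1
            progressed = True
        if right < len(sentences) and len(window) + len(sentences[right]) <= max_tokens:
            window = window + list(sentences[right])
            right += 1
            progressed = True
    return window, offset
```

After encoding, only the slice `[offset : offset + len(sentences[idx])]` of the output would be kept as that sentence's features.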
-
Goal: better feature/model discoverability for the GluonNLP website
### Side Navigation Bar
* Installation
* Models
* Tutorials
* Demos (in the future)
* API Documentation
* Community…
-
Hi, how can I quickly use the preprocessed Chinese datasets provided with your previous model [W2NER](https://github.com/ljynlp/W2NER), such as Resume-zh and OntoNotes 4.0? Running it directly doesn't work; what do I need to modify?
-
Hi Xiaoya,
Sorry to bother you again. I have some questions about reproducing the nested NER task with your model. (#10)
**Env**: Windows Server 2016, 512 GB RAM, 8 P100 GPUs
torch version: 1.1.0
other…
-
Hi, and thanks for putting this code up! Is there a way to run the model with only fixed word embeddings, like GloVe, fastText, etc., but without BERT?