-
> Hello, thanks for raising this question.
>
> We used pre-trained word embeddings (GloVe and ELMo). You can use the script `scripts/data_setup.sh` to download them and place them in a `data` folde…
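> As a side note, GloVe files use a simple one-vector-per-line text format (`word v1 v2 … vd`), so they can be loaded without any special tooling. A minimal sketch (the file path is whatever `data` location you downloaded to; the function name is just illustrative):
>
> ```python
> def load_glove(path):
>     """Map each word to its embedding vector (a list of floats)."""
>     embeddings = {}
>     with open(path, encoding="utf-8") as f:
>         for line in f:
>             parts = line.rstrip().split(" ")
>             # first field is the word, the rest are the vector components
>             embeddings[parts[0]] = [float(x) for x in parts[1:]]
>     return embeddings
> ```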
-
-
I am using a new dataset, which I converted into the CoNLL-2012 format except for the POS information and parse-tree information. I treat these as a missing-information case by filling them with a replacement token as fol…
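A minimal sketch of what such a conversion step might look like, assuming the POS and parse-bit columns are simply filled with a placeholder (the `"-"` token, the column layout, and the function name are all illustrative assumptions, not the repo's actual converter):

```python
def to_conll_rows(doc_id, sent_tokens, placeholder="-"):
    """Emit CoNLL-2012-style rows for one tokenized sentence, with the
    POS and parse-tree columns replaced by a placeholder token."""
    rows = []
    for i, tok in enumerate(sent_tokens):
        # columns: doc id, part number, word index, word, POS, parse bit
        # (the remaining CoNLL-2012 columns would follow in a full converter)
        rows.append([doc_id, "0", str(i), tok, placeholder, placeholder])
    return rows
```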
-
## Issue 1: *Semantic role relations cannot be output column-wise per segmented word*
## Code snippet
```python
# -*- coding: utf-8 -*-
import os
from pyltp import SentenceSplitter
from pyltp import Postagger
from pyltp import Segmentor
from pyltp import NamedEntity…
```
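The column-wise output asked about in this issue can be sketched as a post-processing step over the labeller's results. Here `roles` is assumed to already be unpacked into `(predicate_index, [(label, start, end), …])` tuples with inclusive word-index spans, which is the shape of information pyltp's semantic role labeller produces; the function name and the `"O"`/`"PRED"` markers are illustrative assumptions:

```python
def srl_columns(words, roles):
    """Render semantic roles column-wise: one row per segmented word,
    one extra column per predicate, "O" where a word carries no role."""
    table = [[w] + ["O"] * len(roles) for w in words]
    for col, (pred, args) in enumerate(roles):
        table[pred][col + 1] = "PRED"
        for label, start, end in args:
            for i in range(start, end + 1):
                table[i][col + 1] = label
    return ["\t".join(row) for row in table]
```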
-
I am unable to get the results reported in your paper using the train/test/dev split of CoNLL-2012 v4.
Which version did you use?
-
Hi,
Can we create our own training data?
Where in the code do we provide the training data?
Can I look at the dataset?
Are pre-trained models provided here?
I want to add some more data to the existin…
-
See allenai/bilm-tf#59
We don't apply any formatting to numbers; we use the same tokenization as provided by the CoNLL-2012 dataset, so no clue for the moment.
-
Hi,
I am facing some issues with preprocessing OntoNotes.
First I used the script from the CoNLL-2012 shared task to generate the `*_gold_conll` files, which contain the annotations. For example:
`conl…
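For reference, a minimal sketch of reading such `*_gold_conll` files: sentences are blank-line separated, `#begin`/`#end` lines are metadata, and each token line is a whitespace-separated column row. The reader below is an illustrative assumption, not the repo's actual preprocessing code:

```python
def read_gold_conll(lines):
    """Group gold_conll token rows into sentences (lists of column lists)."""
    sentences, current = [], []
    for line in lines:
        line = line.strip()
        if line.startswith("#"):
            continue  # skip document begin/end metadata lines
        if not line:
            if current:  # blank line closes the current sentence
                sentences.append(current)
                current = []
            continue
        current.append(line.split())
    if current:
        sentences.append(current)
    return sentences
```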
-
Hi all,
I have tried to reproduce the baseline models and the ELMo-based models by running the configurations provided in `training_config`. The three experiments I tried: Textual Entailment (SNLI)…
-
Add an importer that reads plain text and runs the Stanford CoreNLP coreference resolver, and allow the user to correct the result in the GUI.
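One possible post-processing step for such an importer, sketched under the assumption that the text is sent to a CoreNLP server and the JSON response's `corefs` section is used (the field names follow CoreNLP's output, where token indices are 1-based and `endIndex` is exclusive; the function name and the 0-based inclusive-span record format for the GUI are assumptions):

```python
def corefs_to_clusters(corenlp_json):
    """Turn the "corefs" section of a CoreNLP JSON response into
    editable cluster records: (sentence, start, end) 0-based inclusive
    token spans that a GUI could let the user correct."""
    clusters = []
    for chain in corenlp_json.get("corefs", {}).values():
        clusters.append([
            # shift CoreNLP's 1-based, end-exclusive indices
            (m["sentNum"] - 1, m["startIndex"] - 1, m["endIndex"] - 2)
            for m in chain
        ])
    return clusters
```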