Hello @xiulinyang, it is hard to tell without a runnable example to reproduce the error, but I would suspect the error lies somewhere in the way you read the sentences.
Some ideas:
Hi @alanakbik, thanks a lot for your prompt reply! :)
I tried to use ColumnCorpus in Flair (as seen below), but the issue remains. Is it possible for me to send you the code, the training log, and the data to reproduce the error? Or is there anything that I misunderstood in the code below? Thanks!
import io
import os

from flair.data import Corpus
from flair.datasets import ColumnCorpus


def predict(self, in_path=None, as_text=False):
    model = self.model
    if as_text:
        data = in_path
        # data = (data + "\n").replace("<s>\n", "").replace("</s>\n", "\n").strip()
    else:
        data = io.open(in_path, encoding="utf8").read()
    true_tags = []
    data = data.strip() + "\n"  # ensure final newline for the last sentence
    # collect the gold NER tags (column 7) from the input file
    for line in data.split("\n"):
        if "\t" in line:
            true_tags.append(line.split("\t")[7])
    columns = {0: "text", 1: "upos", 2: "deprel", 3: "attach", 4: "arg1", 5: "arg2", 6: "arg3", 7: "ner"}
    corpus: Corpus = ColumnCorpus(data_folder, columns,
                                  train_file="train.biodep.sample.tab",
                                  test_file="test.biodep.sample.tab",
                                  dev_file="dev.biodep.sample.tab")
    output = []
    toknum = 0
    model.predict(corpus.dev, force_token_predictions=True, return_probabilities_for_all_classes=True)
    for sentence in corpus.dev:
        for token in sentence:
            text = token.text
            if len(token.labels) > 6:
                ner_tag = token.labels[6].value
                ner_value = str(token.labels[6].score)
            else:
                ner_tag = "O"
                ner_value = "1.0"
            corr = 'T' if ner_tag == true_tags[toknum] else 'F'
            output.append(text + '\t' + true_tags[toknum] + '\t' + ner_tag + '\t' + corr + '\t' + ner_value)
            toknum += 1
    partition = "test" if "test" in in_path else "dev"
    with io.open(script_dir + TRAIN_PATH + os.sep + "flair-" + partition + "-pred.tsv", 'w', encoding="utf8", newline="\n") as f:
        f.write("\n".join(output))
Hi, just a quick update. I tried another task (upos tagging), but the result is still weird. I created a Google Colab script to replicate the experiment; it only takes about 5 minutes to run. I attached the data (it contains only 100 sentences). I would very much appreciate any insight into potential problems. Thank you! :) tagger_new.zip
Hi @xiulinyang, I am sorry, but that Colab script contains too much code that does not seem to be related to flair. The point of a minimal reproducible example is to use as little code as possible to show that the error exists. I am not fully grasping what the issue is, nor am I convinced that this is a problem related to flair, as there are so many other factors that I cannot judge whether they are handled correctly.
Hi @helpmefindaname,
Thank you very much for your reply! Sorry, I had been debugging for a while and the code was a mess. I have now cleaned it up, and you can run the experiment again with the data from the tagger folder. It contains 500 sentences for training and 100 sentences drawn from those 500 for testing. (I tried to downsample the data further, but with a smaller size the model only gives an accuracy of 0.)
The main problem I have right now is that the model won't give consistent predictions. After training, a test.tsv file is generated automatically; when I call tagger.evaluate(), another file named prediction.txt is generated; and I also wrote my own .predict() method, which generates flair-test-pred.txt. These three files always give me different results. The differences look trivial here (the accuracy difference among the three files is less than 2%), but when I trained on the whole dataset of ~10k examples, the difference was very large (0.88 vs. 0.3).
Thanks!
Hi, could you please offer some help? Thanks!
Hi @xiulinyang, there is still no minimal example to reproduce, so I am not sure what help I can provide. I would still suggest that you take your example and strip out everything that is not related to flair or not needed to demonstrate the issue.
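(As an illustration of how small such a flair-only reproduction can be, a sketch might look like the following; the embeddings, hyperparameters, and output folder are placeholders, not the configuration used in this thread.)

from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

columns = {0: "text", 1: "upos", 2: "deprel", 3: "attach", 4: "arg1", 5: "arg2", 6: "arg3", 7: "ner"}
corpus = ColumnCorpus("tagger", columns,
                      train_file="train.biodep.sample.tab",
                      dev_file="dev.biodep.sample.tab",
                      test_file="test.biodep.sample.tab")

tagger = SequenceTagger(hidden_size=256,
                        embeddings=WordEmbeddings("glove"),
                        tag_dictionary=corpus.make_label_dictionary(label_type="ner"),
                        tag_type="ner")

trainer = ModelTrainer(tagger, corpus)
trainer.train("resources/taggers/minimal", max_epochs=10)  # writes test.tsv into the output folder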
@helpmefindaname Hi, sorry, I have removed the unrelated code and only kept what is relevant to flair. I hope this time it works. Thanks!
Question
Hi,
I'm training a SequenceTagger on an NER task using my own dataset with custom features. After training finished, I got a file named test.tsv, which contains the predictions on the test split. However, when I loaded the trained model (final-model.pt) and ran inference on the same test data, I got much lower results (0.86 vs. 0.64 accuracy).
Here is the prediction function I'm using. I checked the sents list, and all the labels are correctly added to each token. During training I stacked all the features; should I do the same during prediction? I think the main issue is that the model does not pick up the labels I add: I have a specific label named __TARGET__ that signals the model to make predictions on specific tokens, but the model seems to ignore this tag. Any suggestions would be much appreciated. Thanks!
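A minimal sketch of attaching such token-level features before prediction, assuming the model was trained with those fields embedded (the field name "arg1", the example sentence, and the feature values are purely illustrative):

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("final-model.pt")

sentence = Sentence("The protein binds the receptor")
# Re-attach the token-level features the model saw during training; the field
# name and the __TARGET__ marker mirror the column map above and are otherwise
# assumptions about the setup.
features = ["O", "O", "__TARGET__", "O", "O"]
for token, feature in zip(sentence, features):
    token.add_label("arg1", feature)

tagger.predict(sentence)
print(sentence.to_tagged_string())

Note that extra columns only influence a SequenceTagger if they were actually turned into embeddings when the model was built (for example via OneHotEmbeddings over that field); if only word embeddings were stacked, the tagger has no way to see __TARGET__ at prediction time.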