[Closed] Dimiftb closed this issue 2 years ago
It seems that you canceled the prediction of the model. This command will evaluate the model on the test set.
I had an issue with my runtime and it was cancelling my run. I think I'm on the right track to successfully replicating the results. I've been running on a single 12 GB Nvidia GPU for 1.3 hours now; approximately how long should it take?
The code gets stuck in the pdb module (the Python debugger); you may check these lines in the code. It seems that training_state.pt is missing.
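Since the crash above comes down to a missing checkpoint file, a quick pre-flight check can fail fast instead of dropping into pdb. This is a sketch, not part of the ACE codebase; the helper name and directory layout are illustrative:

```python
import os

def check_training_state(model_dir):
    # Return True if the trainer's resume checkpoint exists in the given
    # tagger folder. "model_dir" would be your folder under resources/taggers;
    # this helper is an illustration, not an ACE API.
    return os.path.exists(os.path.join(model_dir, "training_state.pt"))
```

If it returns False, either restore training_state.pt from the released checkpoint or start training from scratch.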
Hi again @wangxinyu0922,
So I managed to find what the issue was. I had to move the folder
en-xlmr-tuned-first_elmo_bert-old-four_multi-bert-four_word-glove_word_origflair_mflair_char_30episode_150epoch_32batch_0.1lr_800hidden_en_monolingual_crf_fast_reinforce_freeze_norelearn_sentbatch_0.5discount_0.9momentum_5patience_nodev_newner5
into the folder resource/taggers
and the folder xlm-roberta-large-finetuned-conll03-english
into the folder resources
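For anyone replicating this, the folder moves above can be sketched as below. The names here are stand-ins (substitute the actual model folder names quoted above), and note the thread mixes `resource/` and `resources/`, so check which prefix your checkout actually uses:

```shell
# Sketch of the layout fix described above. "downloads" and "model_a"
# are stand-in names for where the checkpoint landed and what it is called.
mkdir -p resources/taggers
mkdir -p downloads/model_a
mv downloads/model_a resources/taggers/
```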
Now I have the same problem as before: the program keeps terminating after 4 minutes and 53 seconds of execution, and you can see the output below.
^C
is the only thing listed as the reason for termination, but it is not me terminating the program. I'm using Colab, so I can't even terminate the code like that. What could be the cause of the issue? Thanks.
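One way to narrow this down is to log when SIGINT (the signal behind `^C`) actually arrives, which helps distinguish a keyboard interrupt from the platform signalling the process. A minimal sketch, assuming the termination really is SIGINT and not an uncatchable SIGKILL from an out-of-memory killer:

```python
import signal
import sys
import datetime

def log_sigint(signum, frame):
    # Record when and where SIGINT arrives instead of dying silently.
    print(f"SIGINT received at {datetime.datetime.now()} "
          f"in {frame.f_code.co_name}", file=sys.stderr)
    sys.exit(130)  # conventional exit code for death by SIGINT

# Install the handler early, before model loading starts.
signal.signal(signal.SIGINT, log_sigint)
```

If the run dies with no log line at all, the process was likely killed with SIGKILL, which points at a resource limit rather than an interrupt.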
It's strange. I have never used Colab before, so I do not know the reason. From your log, the program terminated while loading the model, so I suspect Colab has a CPU memory limit and automatically kills the program when it loads the model.
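The memory-limit theory is easy to test before loading the checkpoint. A Linux-only sketch (Colab runs Linux) that reads `/proc/meminfo` directly to avoid extra dependencies:

```python
def available_mem_gb():
    # Parse MemAvailable from /proc/meminfo (value is in kB).
    # Returns None if the field is missing (e.g. very old kernels).
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) / 1024**2  # kB -> GB
    return None
```

Printing this right before the model load would show whether free RAM is anywhere near the size of the checkpoint being read.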
Hi @wangxinyu0922
Thanks for your reply. This seems unlikely, as Colab wouldn't terminate execution over insufficient resources without an error message saying there aren't enough resources to complete the process. Anyway, I will attempt to run the code on my machine to see whether I get the same results, and if I do, I'll write back for further assistance.
Thank you very much for helping me this far.
@Dimiftb How is your progress on running locally?
Closing because of no response from the OP.
Hi there,
So I believe I successfully managed to run your best model on CoNLL; however, I was wondering how I can go about getting the actual evaluation metrics, e.g. precision, recall, and F1?
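For reference, the `--test` run should evaluate on the test set, and span-level precision/recall/F1 reduce to counting exact matches between gold and predicted entities. A minimal sketch (function name and tuple format are illustrative, not the ACE API):

```python
def prf1(gold, pred):
    # gold/pred: collections of entity spans, e.g. (start, end, label) tuples.
    # An entity counts as correct only on an exact span-and-label match.
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

With gold spans {(0, 2, "PER"), (3, 5, "LOC")} and a single correct prediction {(0, 2, "PER")}, this gives precision 1.0, recall 0.5, and F1 2/3.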
The current output that I have when running
python train.py --config config/conll_03_english.yaml --test
can be seen below:
```
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
/content/ACE/flair/utils/params.py:104: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  dict_merge.dict_merge(params_dict, yaml.load(f))
2021-07-07 10:38:05,848 Reading data from /root/.flair/datasets/conll_03
2021-07-07 10:38:05,848 Train: /root/.flair/datasets/conll_03/train.txt
2021-07-07 10:38:05,848 Dev: /root/.flair/datasets/conll_03/testa.txt
2021-07-07 10:38:05,848 Test: /root/.flair/datasets/conll_03/testb.txt
2021-07-07 10:38:13,533 {b'
```
Thank you.