Hi there~ Sorry for the late response.

We provided `inference.py` as an example of how the model can be used to generate event records, since many people requested this feature. It is actually not quite the right way to reproduce the results reported in our paper.
Inside `dee_task.predict_one`, when `convert_string_to_raw_input` is called, the concatenated string is split into sentences by the `sent_seg` function in `dee/helper/__init__.py`, which may not produce the same sentences as in the original dataset.
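For illustration, here is a minimal sketch of the mismatch being described. It assumes `sent_seg` takes a string and returns a list of sentences (its exact signature is not shown in this thread), and the sample sentences are made up:

```python
# Minimal sketch (assumed signature: sent_seg(str) -> list[str]).
from dee.helper import sent_seg

# Stand-in for one document's gold sentences from the dataset.
gold_sentences = ["第一句话。", "第二句话;没有句号", "第三句话!"]
merged = "".join(gold_sentences)

# Re-splitting the concatenated string may yield different boundaries
# than the dataset's original sentence list.
resplit = sent_seg(merged)
print("boundaries match gold:", resplit == gold_sentences)
```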
The results you mentioned are quite interesting, though; we didn't try this before.
If you want to reproduce the results reported in our paper, we suggest following the instructions in README.md to re-train the model, or loading the test set and calling the default `dee_task.eval` function directly.
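As a rough sketch of that second route (the attribute and argument names below are placeholders, not verified against the repo; see the training/eval scripts referenced in README.md for the real invocation):

```python
# Hypothetical sketch -- attribute and argument names are assumptions.
test_examples = dee_task.test_examples       # assumed: the loaded test set
test_features = dee_task.test_features       # assumed: its preprocessed features
dee_task.eval(test_examples, test_features)  # evaluates with the dataset's own sentence splits
```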
In addition, if you hacked the source code to bypass `sent_seg`, the upper bound of entity extraction would drop sharply, since the default maximum sequence length is 128 and everything beyond that window is truncated, leading to a performance decline.
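A toy illustration of why the 128-length window matters if a whole document arrives as one unsplit string (the character-level "tokenizer" here is a stand-in, not the repo's):

```python
MAX_SEQ_LEN = 128  # default maximum sequence length mentioned above

sentences = ["这是一个很长的句子。"] * 40  # stand-in for one long document
merged = "".join(sentences)
tokens = list(merged)  # char-level stand-in for the real tokenizer

kept, dropped = tokens[:MAX_SEQ_LEN], tokens[MAX_SEQ_LEN:]
# Any entity mention that falls entirely inside `dropped` can never be
# extracted, which caps recall when sent_seg is bypassed.
print(f"{len(dropped)} of {len(tokens)} characters fall outside the window")
```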
If you have further questions, feel free to leave a message.
Thanks for your reply. I didn't hack the `dee_task.predict_one` function (so `sent_seg` still applies as usual), but just imitated its logic and replaced `inference.py` with the code above. My result can be reproduced in a few minutes. I will take some time to see how `dee_task.eval` works.
**Problems**

Used the settings and task dump given in the README, and `inference.py` as follows, but got a low result: F1 0.6856. In detail, the precision is as expected, but the recall is far too low.
**Others**

The code below is rather casual and informal, but should work. It concatenates all sentences of one document into a single string, then runs `dee_task.predict_one`, and finally measures the output with `dee_metric.py`.
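The snippet referenced here is not reproduced in this thread. For context, a sketch of the pipeline it describes might look like the following; `predict_one`'s input/output format, the document fields, and the metric entry point are all assumptions, not the poster's actual code:

```python
# Sketch of the described pipeline (all signatures assumed).
preds, golds = [], []
for doc in test_docs:                    # assumed: list of test documents
    merged = "".join(doc["sentences"])   # concatenate every sentence in the doc
    pred = dee_task.predict_one(merged)  # assumed: str -> predicted event records
    preds.append(pred)
    golds.append(doc["events"])          # assumed: gold event records

# dee_metric.py is then used to score predictions against gold records;
# the exact function and its arguments depend on the repo version.
score = measure(preds, golds)            # placeholder for the dee_metric.py entry point
print(score)
```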