Rover912 opened this issue 2 years ago
Actually, the evaluation metrics can follow the code in Doc2EDAG (format the output the same way that code does). I'm sorry the missing part of the code confused you; we will update this part as soon as possible.
Yes, the metrics are the same. But the eval function should return total_event_decode_results and total_eval_res based on your code. How can I get these from the eval function in your code? It does not return anything. https://github.com/HangYang-NLP/DE-PPN/blob/d8d357c1e8503d2525e7af5ea1f6aac613f90e53/DEE/DEE_task.py#L393
def eval(self, features, dataset, use_gold_span=False, heuristic_type=None,
         dump_decode_pkl_name=None, dump_eval_json_name=None, eval_process=None):
    self.logging('=' * 20 + 'Start Evaluation' + '=' * 20)

    if dump_decode_pkl_name is not None:
        dump_decode_pkl_path = os.path.join(self.setting.output_dir, dump_decode_pkl_name)
        self.logging('Dumping decode results into {}'.format(dump_decode_pkl_name))
    else:
        dump_decode_pkl_path = None

    if os.path.exists(dump_decode_pkl_path) and eval_process:
        total_event_decode_results = default_load_pkl(dump_decode_pkl_path)
    else:
        total_event_decode_results = self.base_eval(
            dataset, DEETask.get_event_decode_result_on_batch,
            reduce_info_type='none', dump_pkl_path=dump_decode_pkl_path,
            features=features, use_gold_span=use_gold_span, heuristic_type=heuristic_type,
        )

    self.logging('Measure DEE Prediction')

    if dump_eval_json_name is not None:
        dump_eval_json_path = os.path.join(self.setting.output_dir, dump_eval_json_name)
        self.logging('Dumping eval results into {}'.format(dump_eval_json_name))
    else:
        dump_eval_json_path = None
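    # NOTE: the function ends here; total_eval_res is never computed and nothing is returned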
I guess total_event_decode_results can be obtained from the variable with the same name in the eval function, but how can I get total_eval_res?
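For reference, in Doc2EDAG the corresponding function ends by computing the metrics and returning both values. A minimal sketch of what the missing tail might look like, based on my reading of the Doc2EDAG code (the measure_dee_prediction helper, its arguments, and self.event_type_fields_pairs are Doc2EDAG names, not this repo's, and may need adapting):

    # Sketch following Doc2EDAG's eval, not this repo's actual code:
    # compute the metrics from the decode results, then return both values
    total_eval_res = measure_dee_prediction(
        self.event_type_fields_pairs, features, total_event_decode_results,
        dump_json_path=dump_eval_json_path,
    )
    return total_event_decode_results, total_eval_res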
Hi there. I've adapted the eval part from Doc2EDAG. Here's the repo: Link . Still, my version of the code may have some problems and lacks results postprocessing, so I can't reproduce the performance reported in the paper. It might serve as a reference until the authors update their code.
Thanks for sharing!
Hi, thanks for sharing the code. I can train the model based on your code plus some parts added from Doc2EDAG. But a missing part in the eval function (this function should return something, but it does not in your code) makes the whole evaluation process fail. I changed this function based on my understanding, but unfortunately it didn't work. Could you check whether some parts are missing from the eval code?