Issue closed by leaf1170124460 8 months ago.
All of them have already been included in the repo; please check it.
Thank you for your reply, but could you provide a more detailed answer? For example, which files contain these codes, and how do I use them? I cannot find all the metric codes by searching for keywords; this is the same situation mentioned in issue https://github.com/xuebinqin/DIS/issues/68.
https://github.com/mczhuge/SOCToolbox
--
Xuebin Qin, PhD
Department of Computing Science, University of Alberta, Edmonton, AB, Canada
Homepage: https://xuebinqin.github.io/
Thank you for your reply. The evaluation now works as expected.
Hi, @xuebinqin, @DengPingFan, @HUuxiaobin, @PINTO0309 and @16673161214. Thanks for your work on DIS.

The paper uses six evaluation metrics to evaluate the performance of the model:
- maximal F-measure
- weighted F-measure
- mean absolute error (MAE)
- structural measure
- mean enhanced alignment measure
- human correction efforts (HCE)

However, I only found the HCE evaluation code in the repo. Could you provide the evaluation code for the other five metrics, so that our evaluation can be consistent with the paper? Thank you for your time and effort on this project. Looking forward to your response.
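For reference, the two simplest of these metrics can be sketched in a few lines of NumPy. This is a minimal illustration (function names are hypothetical, not the repo's or SOCToolbox's official implementation), assuming `pred` is a prediction map in [0, 1] and `gt` is a binary ground-truth mask of the same shape:

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a [0,1] prediction map and a GT mask."""
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

def max_f_measure(pred, gt, beta2=0.3, steps=255):
    """Maximal F-measure: binarize the prediction at `steps` thresholds
    and keep the best F-beta score (beta^2 = 0.3 is the convention in
    saliency/DIS evaluation)."""
    gt = gt > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, steps):
        binary = pred >= t
        tp = np.logical_and(binary, gt).sum()
        precision = tp / max(binary.sum(), 1)
        recall = tp / max(gt.sum(), 1)
        if precision + recall > 0:
            best = max(best, (1 + beta2) * precision * recall
                             / (beta2 * precision + recall))
    return best
```

The weighted F-measure, structural measure and enhanced alignment measure are more involved (they weight errors by spatial structure rather than per-pixel), which is why a dedicated toolbox such as the one linked above is the practical route for matching the paper's numbers exactly.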