Closed fa3ima95 closed 2 months ago
@fa3ima95
Updates have introduced some incompatibilities in the old code, so it is recommended that you use the latest code :>
Could you please share a link to the latest code?
And one more request: would you mind translating your readme.txt into English? Every time I read it I have to translate it myself :))
@fa3ima95
You can try the latest commit of this repository and the latest commit of the metric library.
I have added a new readme file in English.
Thanks a million, man.
Could you please help me figure out why I get this error:
model has 32649670 parameters in total
Testing on CAMO
Parameters... datapath: CodDataset/test/CAMO  mode: test  label_dir: Scribble
Testing on CHAMELEON
Parameters... datapath: CodDataset/test/CHAMELEON  mode: test  label_dir: Scribble
Testing on COD10K
Parameters... datapath: CodDataset/test/COD10K  mode: test  label_dir: Scribble
usage: eval.py [-h] --dataset-json DATASET_JSON --method-json METHOD_JSON [METHOD_JSON ...]
               [--metric-npy METRIC_NPY] [--curves-npy CURVES_NPY] [--to-overwrite]
               [--record-xlsx RECORD_XLSX]
               [--include-methods INCLUDE_METHODS [INCLUDE_METHODS ...]]
               [--exclude-methods EXCLUDE_METHODS [EXCLUDE_METHODS ...]]
               [--include-datasets INCLUDE_DATASETS [INCLUDE_DATASETS ...]]
               [--exclude-datasets EXCLUDE_DATASETS [EXCLUDE_DATASETS ...]]
               [--num-workers NUM_WORKERS] [--num-bits NUM_BITS]
               [--metric-names {mae,em,sm,wfm,bif1,biiou,bikappa,bioa,biprecision,birecall,dice,f1,fmeasure,iou,precision,recall,specificity} [{mae,em,sm,wfm,bif1,biiou,bikappa,bioa,biprecision,birecall,dice,f1,fmeasure,iou,precision,recall,specificity} ...]]
               [--data-type {image,video}]
eval.py: error: unrecognized arguments: --record-txt results.txt trained
what should I do?
@fa3ima95
Did you use the latest version of this repo?
Your default values for --metric-names are inconsistent with the latest version:
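The mismatch above can be reproduced with a minimal argparse sketch (the flag names are copied from the usage message; the choices list is truncated for illustration): once the newer eval.py stops defining --record-txt, argparse rejects any command line that still passes it.

```python
import argparse

# Minimal sketch of the newer eval.py's argument setup (assumed, based on the
# usage message): --record-xlsx exists, --record-txt does not.
parser = argparse.ArgumentParser(prog="eval.py")
parser.add_argument("--record-xlsx")
parser.add_argument(
    "--metric-names",
    nargs="+",
    choices=["mae", "em", "sm", "wfm", "fmeasure",
             "precision", "recall", "iou"],  # subset for illustration
)

# An old-style command line: the stale flag ends up in `unknown`,
# which is exactly what triggers "error: unrecognized arguments".
args, unknown = parser.parse_known_args(
    ["--record-txt", "results.txt", "--record-xlsx", "results.xlsx"])
print(unknown)  # → ['--record-txt', 'results.txt']
```

So the fix is simply to drop --record-txt from your launch command (or switch to --record-xlsx, which the new usage message still lists).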
Thanks a million, dude, it works! But I have a question: as mentioned in the eval.py file, the default metric names are: default=["sm", "wfm", "mae", "fmeasure", "em", "precision", "recall", "msiou"],
but when the code runs, the metrics reported are:
========>> Dataset: COD10K <<========
[Ours]
sm: 0.725  wfm: 0.567  mae: 0.052
maxfmeasure: 0.631  avgfmeasure: 0.623  adpfmeasure: 0.631
maxprecision: 0.716  avgprecision: 0.694  adpprecision: 0.681
maxrecall: 1.0  avgrecall: 0.586  adprecall: 0.608
maxem: 0.826  avgem: 0.8  adpem: 0.832
msiou: 0.42
So which one should be considered the E-measure (in my results: maxem, avgem, or adpem), and likewise for the F-measure?
Or which version of the file was updated so that it only reports these metrics: ["sm", "wfm", "mae", "fmeasure", "em", "precision", "recall", "msiou"]?
@fa3ima95 maxem, avgem, and adpem are different versions of the E-measure. You can report all of the versions or only specific ones.
The key is to stay consistent with other methods; you can re-evaluate their results to decide which versions to choose.
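To make the naming concrete, here is a hedged sketch of how the three prefixes typically differ: in threshold-sweeping SOD metrics, the same per-threshold scores are aggregated three ways. The score values below are hypothetical; in py_sod_metrics they would come from sweeping 256 binarization thresholds over the prediction map.

```python
import numpy as np

# Hypothetical E-measure score at each of 256 binarization thresholds.
scores = np.linspace(0.60, 0.83, 256)

max_em = scores.max()    # "maxem": best score over all thresholds
avg_em = scores.mean()   # "avgem": mean score over all thresholds
# "adpem" is computed at a single adaptive threshold (commonly twice the mean
# of the prediction map), so it is one number rather than an aggregate.
```

This is why maxem, avgem, and adpem can differ for the same prediction: they summarize the same threshold sweep differently, and papers vary in which one they report.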
This issue is stale because it has been open 7 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
Would you mind helping? When I run the code I get this error: cannot import name '_TYPE' from 'py_sod_metrics.sod_metrics'. I could not find this member of py_sod_metrics.
It comes from the metrics folder, extra_metrics.py, line 3: from py_sod_metrics.sod_metrics import _TYPE, _prepare_data
Why do I get the error: cannot import name '_TYPE' from 'py_sod_metrics.sod_metrics'?
thanks