Hrren opened 3 years ago
I ran into the same issue. I used the models in t1 and t1_final respectively and got different results on a single image; it seemed that only the model in t1_final can detect unknowns, while in #8 the author said the model in t1 was used in evaluation. I am also confused.
[image: result on t1]
[image: result on t1_final]
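For anyone trying to reproduce this kind of comparison, here is a minimal sketch that runs one image through both checkpoints with Detectron2's DefaultPredictor. The config path, image path, and checkpoint locations are placeholders for a local OWOD setup, not the repo's exact files:

```python
# Minimal sketch: run the same image through the t1 and t1_final checkpoints
# and print the detections side by side. All paths below are placeholders.
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

def detections(weights_path, image, config_path="configs/OWOD/t1/t1_test.yaml"):
    # config_path is hypothetical; use whatever config the model was trained with.
    cfg = get_cfg()
    cfg.merge_from_file(config_path)
    cfg.MODEL.WEIGHTS = weights_path
    return DefaultPredictor(cfg)(image)["instances"].to("cpu")

img = cv2.imread("demo.jpg")
for tag, ckpt in [("t1", "output/t1/model_final.pth"),
                  ("t1_final", "output/t1_final/model_final.pth")]:
    inst = detections(ckpt, img)
    print(tag, inst.pred_classes.tolist(), [round(s, 3) for s in inst.scores.tolist()])
```

If both checkpoints load without key-mismatch warnings and still disagree, the difference is in the weights themselves rather than in the test-time setup.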
Hi, @luckychay
Could you share a visualization tutorial?
I use the metrics.json file provided by the author, and the command is as follows:
python /home/jar/CL/OWOD-master/tools/visualize_json_results.py --input /home/jar/CL/OWOD-master/models_backup/t1_only_thresh/metrics.json --output /home/jar/CL/OWOD-master/display --dataset t1_voc_coco_2007_train
The following error is displayed:
[image: error screenshot]
Is there an error somewhere?
Thanks!
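One thing worth checking (my guess, based on upstream Detectron2): tools/visualize_json_results.py expects the per-instance prediction file that the evaluator writes (coco_instances_results.json), while metrics.json is a line-delimited log of scalar training metrics, which would make the script fail on it. A quick sanity check on the file being passed:

```python
# Sanity-check the --input file. visualize_json_results.py wants a JSON list of
# per-instance predictions (keys like image_id, category_id, bbox, score);
# metrics.json instead holds one dict of scalar training metrics per line.
import json

path = "/home/jar/CL/OWOD-master/models_backup/t1_only_thresh/metrics.json"
with open(path) as f:
    record = json.loads(f.readline())
print(sorted(record)[:8])
# Keys such as 'iteration' or 'total_loss' mean this is a training log, not
# predictions; point --input at the coco_instances_results.json written during
# evaluation instead.
```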
@JosephKJ @luckychay Hello, I would like to ask how the WI value in the results table is calculated. In log.txt I see:
d2.evaluation.pascal_voc_evaluation INFO: Wilderness Impact: {0.1: {50: 0.017806111233238074}, 0.2: {50: 0.028345724907063198}, 0.3: {50: 0.038800776824728926}, 0.4: {50: 0.0477657935285054}, 0.5: {50: 0.046036375796533344}, 0.6: {50: 0.03933635758017955}, 0.7: {50: 0.04479009962816838}, 0.8: {50: 0.04795684951624818}, 0.9: {50: 0.04991899036411699}}
Thanks!
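For context, as far as I understand the OWOD paper, Wilderness Impact measures how much precision on the known classes degrades once unknown-class instances enter the evaluation:

```latex
% Wilderness Impact as defined in the paper:
%   P_K     -- precision measured on known classes only
%   P_{K∪U} -- precision when unknown instances are also present
\mathrm{WI} = \frac{P_{\mathcal{K}}}{P_{\mathcal{K} \cup \mathcal{U}}} - 1
```

In the logged dict, the outer keys 0.1-0.9 are the recall levels at which WI is measured and the inner key 50 is the IoU threshold 0.5; the paper's tables report the value at recall 0.8.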
Thank you for your reply, @JosephKJ. Two questions still puzzle me: when I finish Task 2 training, does the model_final.pth in t2_final work better than the one in t2_ft? I tested model_final.pth from both folders and got different results; why does this happen? Also, does the WI in the paper refer to the "Wilderness Impact: {0.8: {50: xxxxxx}}" entry, or something else?
Originally posted by @Hrren in https://github.com/JosephKJ/OWOD/issues/60#issuecomment-927237881
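If it helps, a tiny sketch (assuming the exact log-line format quoted above) for pulling the paper-style WI value, recall 0.8 at IoU 0.5, out of log.txt:

```python
# Extract WI at recall 0.8 / IoU 0.5 from an OWOD log.txt line of the form
# "... INFO: Wilderness Impact: {0.1: {50: ...}, ..., 0.9: {50: ...}}".
import ast

def wi_at(log_path, recall=0.8, iou=50):
    with open(log_path) as f:
        for line in f:
            if "Wilderness Impact:" in line:
                wi = ast.literal_eval(line.split("Wilderness Impact:", 1)[1].strip())
                return wi[recall][iou]
    return None

print(wi_at("log.txt"))  # 0.04795684951624818 for the log quoted above
```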