ymy-k / DPText-DETR

[AAAI'23 Oral] DPText-DETR: Towards Better Scene Text Detection with Dynamic Points in Transformer

evaluation on CTW1500 #28

Open D641593 opened 1 year ago

D641593 commented 1 year ago

Hi, thanks for the great work. I have some issues with the evaluation performance on the CTW1500 dataset. If I evaluate with the code from this GitHub repo, the performance matches the reported numbers. However, when I convert the output format to match the CTW1500 official test code, the performance drops a lot.

I added the code below to demo/demo.py:

import numpy as np  # needed at the top of demo/demo.py

def write_txt(pred, fname):
    # Dump predictions in the CTW1500 official txt format:
    # one polygon per line, as comma-separated integer coordinates.
    print(pred["instances"])  # debug: inspect the predicted Instances fields
    polys = pred["instances"].polygons  # one flattened row of x,y coordinates per detection
    txtfname = fname[:-3] + "txt"  # e.g. 1001.jpg -> 1001.txt (assumes a 3-char extension)
    with open(txtfname, 'w', encoding='utf-8') as wf:
        for p in polys:
            coords = np.round(p.detach().cpu().numpy()).astype(np.int32).tolist()
            wf.write(",".join(map(str, coords)) + '\n')
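As a quick sanity check of the dumped format, here is a small self-contained call; the fake Instances and the flattened x1,y1,...,x16,y16 polygon layout below are my assumptions, not taken from the repo:

# Hypothetical smoke test for write_txt; the polygon layout is assumed.
import torch
from detectron2.structures import Instances

fake = Instances((512, 512))
fake.polygons = torch.rand(2, 32) * 512  # two dummy 16-point polygons
write_txt({"instances": fake}, "sample.jpg")
# sample.txt then contains one comma-separated line of 32 integers per polygon.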

and edited the output handling in demo/demo.py as below:

            if args.output:
                if os.path.isdir(args.output):
                    assert os.path.isdir(args.output), args.output
                    out_filename = os.path.join(args.output, os.path.basename(path))
                else:
                    assert len(args.input) == 1, "Please specify a directory with args.output"
                    out_filename = args.output

                write_txt(predictions, out_filename) # added: dump polygons in CTW1500 txt format
                visualized_output.save(out_filename)

The result from the CTW1500 official test code is (Prec. / Recall / F1-score: 91.4 / 79.0 / 84.7), while the paper reports (Prec. / Recall / F1-score: 91.7 / 86.2 / 88.8). Are there any mistakes in my edits or inference?

ymy-k commented 1 year ago

Please refer to the code in the evaluation part.
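
For context, a minimal sketch of invoking that evaluation path directly instead of post-processing demo/demo.py outputs; the detectron2 calls are standard, while the adet imports, the TextEvaluator name, and the file paths are assumptions about this repo's layout and may need adjusting:

# Sketch only: config path, weights, and the TextEvaluator import are assumptions.
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import inference_on_dataset
from detectron2.modeling import build_model
from adet.config import get_cfg            # assumed AdelaiDet-style config helper
from adet.evaluation import TextEvaluator  # assumed evaluator used for CTW1500

cfg = get_cfg()
cfg.merge_from_file("configs/DPText_DETR/CTW1500/R_50_poly.yaml")  # placeholder path
cfg.MODEL.WEIGHTS = "ctw1500_final.pth"                            # placeholder weights

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

dataset_name = cfg.DATASETS.TEST[0]
evaluator = TextEvaluator(dataset_name, cfg, distributed=False, output_dir="eval_out")
loader = build_detection_test_loader(cfg, dataset_name)
print(inference_on_dataset(model, loader, evaluator))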