mmdzzh opened 4 years ago
Hi, @dingjiansw101 , could you please add this script for evaluating the recall of val set?
@dingjiansw101 Hi, AerialDetection only outputs .pkl results, which DOTA_devkit doesn't support. Is there any way to transform the .pkl into a supported format?
@lingdiandeshui Hello, I met the same problem. How did you solve it?
> @dingjiansw101 Hi, AerialDetection only outputs .pkl results, which DOTA_devkit doesn't support. Is there any way to transform the .pkl into a supported format?
You have to run the `parse_results.py` script to produce the result you want.
@Miki-lin @dingjiansw101 Hi, please tell me how to calculate the mAP value with https://github.com/CAPTAIN-WHU/DOTA_devkit. I want to know what file formats `detpath`, `annopath`, and `imagesetfile` correspond to when I evaluate with `dota_evaluation_task1.py`. Thank you!
@heyun1994
I solved this. It is a really annoying process...
Some places in the DOTA_devkit evaluation script need to be modified, but before that you have to do a lot of preparation work.
So, for your question: `detpath` is the path to your detection results; you can get them by following the tutorial in this repo (GETTING_STARTED.md). `annopath` is the path to the directory where you save the ground-truth files; you can get them by using the split script, just follow the tutorial. `imagesetfile` is a file containing all image names, one image file name per line.
For example, I have the following files:
```
dota1_1024
├── test1024
│   ├── DOTA_test1024.json
│   └── images
├── train1024
│   ├── DOTA_train1024.json
│   ├── images
│   └── labelTxt
└── val1024
    ├── DOTA_val1024.json
    ├── images
    └── labelTxt
```
Notice that I split the train and val sets into two different dirs. If you want to split the data like me, you may need to change the split script 'prepare_dotaxx.py'.
`annopath` is the path to a `labelTxt` dir. If you want to eval the val set, `annopath` is the path to `labelTxt` in `val1024`, and `imagesetfile` should contain all image names in the `val1024/images` dir.
Notice that the eval script uses image names with no extension, so you should cut the extension off.
`detpath` is the path to the detection results after the parse step.
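To build the `imagesetfile`, I just list the `images` dir and strip the extensions. A minimal sketch (the paths and file names here are only from my layout above, not anything the repo provides):

```python
import os
import tempfile

# Sketch: collect image names (extension stripped) into an imagesetfile.
# 'images_dir' and 'imageset.txt' are placeholders for your own paths.
def write_imageset(images_dir, imageset_path):
    names = sorted(os.listdir(images_dir))
    with open(imageset_path, 'w') as f:
        for name in names:
            stem, _ = os.path.splitext(name)  # eval script wants no extension
            f.write(stem + '\n')

# Tiny self-contained demo with dummy image files
demo_dir = tempfile.mkdtemp()
images_dir = os.path.join(demo_dir, 'images')
os.makedirs(images_dir)
for fname in ['P0001__1__0___0.png', 'P0002__1__0___0.png']:
    open(os.path.join(images_dir, fname), 'w').close()
imageset = os.path.join(demo_dir, 'imageset.txt')
write_imageset(images_dir, imageset)
print(open(imageset).read())
```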
In addition, you also need to change the config file: find the `data` dictionary and change its `val` part like this:
```python
val=dict(
    type=dataset_type,
    ann_file=data_root + 'val1024/DOTA_val1024.json',
    img_prefix=data_root + 'val1024/images',
    img_scale=(1024, 1024),
    img_norm_cfg=img_norm_cfg,
    size_divisor=32,
    flip_ratio=0,
    with_mask=False,
    # with_crowd=False,
    # with_label=True),
    with_label=False,
    test_mode=True),
```
Also, there may be many path problems in other files, so be careful. After this, you can use `test.py` to get the results on the val set.
So in total, if you want to test on the val set, you should:

1. Run `test.py` to get the detection results.
2. Run the `parse_results.py` script to transform the format of the results.

Hope these comments can help you.
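I don't remember the exact interface of `parse_results.py`, but conceptually the parse step regroups the per-image pickle results into one `Task1_<classname>.txt` per class, where each line is `imgname score x1 y1 x2 y2 x3 y3 x4 y4`. A rough sketch of that idea (the function name and the assumed `results[img_idx][cls_idx]` layout are mine, not the repo's actual code):

```python
import os
import tempfile

# Rough sketch of what the parse step does, NOT the actual parse_results.py:
# it regroups per-image results into one Task1_<classname>.txt per class.
# Assumed layout: results[img_idx][cls_idx] is a list of
# [x1, y1, x2, y2, x3, y3, x4, y4, score] detections.
def parse_results(results, img_names, class_names, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for cls_idx, cls_name in enumerate(class_names):
        out_path = os.path.join(out_dir, 'Task1_{}.txt'.format(cls_name))
        with open(out_path, 'w') as f:
            for img_name, per_class in zip(img_names, results):
                for det in per_class[cls_idx]:
                    coords = ' '.join('{:.1f}'.format(v) for v in det[:8])
                    f.write('{} {:.3f} {}\n'.format(img_name, det[8], coords))

# Tiny demo with one fake image and two classes
results = [[
    [[0, 0, 10, 0, 10, 10, 0, 10, 0.9]],  # one 'plane' detection
    [],                                    # no 'ship' detections
]]
out_dir = tempfile.mkdtemp()
parse_results(results, ['P0001'], ['plane', 'ship'], out_dir)
print(open(os.path.join(out_dir, 'Task1_plane.txt')).read())
```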
@YangLinzhuo Your answer is very helpful to me, thank you. Do you know how to convert result.pkl into a file that can measure the mAP value if the dataset is not split?
@heyun1994
For parsing `result.pkl`, just use `parse_results.py`.
I'm not sure what you mean by "dataset is not split"... so I suppose you want to evaluate the `trainval` dataset.
If you want to evaluate it, you need to change the `annopath` and `imagesetfile` parameters.
For example:
```
dota1_1024
├── test1024
│   ├── DOTA_test1024.json
│   └── images
└── trainval1024
    ├── DOTA_train1024.json
    ├── images
    └── labelTxt
```
`annopath` is `/path/to/trainval1024/labelTxt`.
`imagesetfile` is `/path/to/trainval1024/image_names.txt`; this file saves all image names in the `images` folder.
`detpath` is `/path/to/parsed/result`, the folder containing the parsed results. The files in this folder are named `Task1_classname.txt`.
I listed the steps in my last comment; you can skip the first step and do the remaining ones.
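The `parse_gt` helper called in the eval script below is omitted there; mine looks roughly like this. I'm assuming the usual DOTA labelTxt line format `x1 y1 x2 y2 x3 y3 x4 y4 category difficult` and skipping header lines such as `imagesource:...`:

```python
import os
import tempfile

# Sketch of a parse_gt for DOTA-style labelTxt files (my assumption of the
# format: 'x1 y1 x2 y2 x3 y3 x4 y4 category difficult' per line; header
# lines such as 'imagesource:...' are skipped).
def parse_gt(filename):
    objects = []
    with open(filename, 'r') as f:
        for line in f:
            parts = line.strip().split()
            if len(parts) < 9:
                continue  # skip headers / blank lines
            try:
                bbox = [float(v) for v in parts[:8]]
            except ValueError:
                continue  # not a coordinate line
            objects.append({
                'bbox': bbox,
                'name': parts[8],
                'difficult': int(parts[9]) if len(parts) > 9 else 0,
            })
    return objects

# Tiny demo with a fake annotation file
demo = os.path.join(tempfile.mkdtemp(), 'P0001.txt')
with open(demo, 'w') as f:
    f.write('imagesource:GoogleEarth\n')
    f.write('gsd:0.1\n')
    f.write('0 0 10 0 10 10 0 10 plane 0\n')
print(parse_gt(demo))
```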
My eval script is like this:
```python
import os

import numpy as np


def voc_eval(detpath,  # detpath is the path for 15 result files
             annopath,  # annotations file path
             imagesetfile,  # image files path
             classname,
             ovthresh=0.5,
             use_07_metric=False):
    """rec, prec, ap = voc_eval(detpath,
                                annopath,
                                imagesetfile,
                                classname,
                                [ovthresh],
                                [use_07_metric])

    Top level function that does the PASCAL VOC evaluation.

    detpath: Path to detections
        detpath.format(classname) should produce the detection results file.
    annopath: Path to annotations
        annopath.format(imagename) should be the annotations file.
    imagesetfile: Text file containing the list of images, one image per line.
    classname: Category name (duh)
    [ovthresh]: Overlap threshold (default = 0.5)
    [use_07_metric]: Whether to use VOC07's 11 point AP computation
        (default False)
    """
    # first load gt
    with open(imagesetfile, 'r') as f:
        lines = f.readlines()
    imagenames = [x.strip() for x in lines]
    imagenames_no_ext = [name.split('.')[0] for name in imagenames]  # remove ext name
    recs = {}
    for i, imagename in enumerate(imagenames_no_ext):
        recs[imagename] = parse_gt(annopath.format(imagename))
    # ...
    # other code is omitted

    # read dets from Task1_* files
    detfile = detpath.format(classname)
    with open(detfile, 'r') as f:
        lines = f.readlines()
    # other code is omitted


def main():
    args = parse_args()
    detpath = os.path.join(args.detpath, "{}.txt")
    annopath = os.path.join(args.annopath, "{}.txt")
    imagesetfile = args.imagesetfile
    # For DOTA-v1.0
    classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field',
                  'small-vehicle', 'large-vehicle', 'ship', 'tennis-court',
                  'basketball-court', 'storage-tank', 'soccer-ball-field',
                  'roundabout', 'harbor', 'swimming-pool', 'helicopter']
    classaps = []
    map = 0
    for classname in classnames:
        print('classname:', classname)
        rec, prec, ap = voc_eval(detpath,
                                 annopath,
                                 imagesetfile,
                                 classname,
                                 ovthresh=0.5,
                                 use_07_metric=True)
        map = map + ap
        print('ap: ', ap)
        classaps.append(ap)
    map = map / len(classnames)
    print('map:', map)
    classaps = 100 * np.array(classaps)
    print('classaps: ', classaps)


if __name__ == '__main__':
    main()
```
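A note on `use_07_metric`: VOC07 averages the max precision at 11 recall thresholds, while the newer metric integrates the whole precision-recall curve. A small self-contained illustration of the difference (this is the standard VOC AP logic on a toy PR curve, not the devkit code itself):

```python
import numpy as np

# Illustration of the two AP computations toggled by use_07_metric,
# evaluated on a toy precision-recall curve.
def voc_ap(rec, prec, use_07_metric=False):
    if use_07_metric:
        # 11-point interpolation: mean of max precision at recall >= t
        ap = 0.0
        for t in np.arange(0.0, 1.1, 0.1):
            p = np.max(prec[rec >= t]) if np.any(rec >= t) else 0.0
            ap += p / 11.0
        return ap
    # Area under the (interpolated) precision-recall curve
    mrec = np.concatenate(([0.0], rec, [1.0]))
    mpre = np.concatenate(([0.0], prec, [0.0]))
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# Toy curve: two detections, one true positive then one false positive mix
rec = np.array([0.5, 1.0])
prec = np.array([1.0, 0.5])
print(voc_ap(rec, prec, use_07_metric=True))   # 11-point AP
print(voc_ap(rec, prec, use_07_metric=False))  # area-under-curve AP
```

The two values differ slightly on the same curve, which is why the script above fixes `use_07_metric=True` for comparability with other DOTA results.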
Hope this is helpful to you.
For evaluation on the test set, you should use the evaluation server: https://captain-whu.github.io/DOTA/evaluation.html. We also provide evaluation code in https://github.com/CAPTAIN-WHU/DOTA_devkit, which you can use to evaluate on the val set.