Open youmi-zym opened 6 years ago
Hi, @youmi-zym It seems that there is something wrong with your error map. These are two of my results on the KITTI 2015 val set.
@IceTTTb Thanks for your advice. Actually, I use the MATLAB toolkit from the KITTI website, and below is the main part of the code I have rewritten.
for i = train_val_list
    D_est = disp_read(fullfile(train_root, iids(i).name));
    D_gt = disp_read(fullfile(gt_root, iids(i).name));
    rgb = imread(fullfile(img_root, iids(i).name));
    rgb = double(rgb)/255.0;
    d_err = disp_error(D_gt, D_est, tau);
    D_err = disp_error_image(D_gt, D_est, tau);
    D_est_color = disp_to_color(D_est);
    D_gt_color = disp_to_color(D_gt);
    save_path = fullfile(train_root, ['../error/', iids(i).name]);
    imwrite([rgb; disp_to_color([D_est; D_gt]); D_err], save_path);
end
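As a rough sketch of the color-scaling detail in the loop's last line (my own NumPy reimplementation, not the devkit code), stacking both maps before normalizing gives them one shared scale:

```python
import numpy as np

def joint_scale(d_est, d_gt):
    """Mimic the scaling side effect of disp_to_color([D_est; D_gt]).

    Stacking estimate and ground truth vertically (MATLAB's [A; B]) and
    normalizing by the stack's maximum means both maps share one color
    scale, so their hues are directly comparable.
    """
    stacked = np.concatenate([d_est, d_gt], axis=0)  # vertical stack, like [A; B]
    max_disp = stacked.max()                         # shared scale factor
    return stacked / max_disp, max_disp

# toy 2x2 "disparity maps"
d_est = np.array([[10.0, 20.0], [30.0, 40.0]])
d_gt = np.array([[12.0, 22.0], [28.0, 80.0]])
norm, m = joint_scale(d_est, d_gt)
print(m)  # 80.0: the shared maximum comes from the ground truth here
```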
As you can see, in disp_to_color([D_est;D_gt]) the maximum disparity used to scale the color map is computed from both the ground-truth and estimated disparity maps. Therefore, the steps I have taken are:
I'm wondering whether there are any errors in submission.py, in the environment I set up, or in some step I missed.
@youmi-zym Hi, I guess there may be an error in the third step of your procedure. I feed the results from submission.py into demo.m directly, and the error maps look right.
@IceTTTb Do you mean that you got a similar map to the one I showed above? Well, in case there is some mistake I forgot, I will list my code and environment here:
PyTorch 0.4.0, torchvision 0.2.0, Python 3.5
disp_read.m
function D = disp_read (filename)
% loads disparity map D from png file
% for details see readme.txt
I = imread(filename);
D = double(I)/256;
D(I==0) = -1;
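For reference, the same decoding can be sketched in Python (my own reimplementation; the PNG itself can be loaded with any 16-bit-capable reader, e.g. OpenCV with cv2.IMREAD_UNCHANGED):

```python
import numpy as np

def decode_disp(i):
    """Decode a KITTI 16-bit disparity PNG array, as in disp_read.m.

    Valid disparities are stored as uint16 values times 256; a raw value
    of 0 marks a pixel with no ground truth and is mapped to -1.
    """
    i = np.asarray(i, dtype=np.uint16)
    d = i.astype(np.float64) / 256.0
    d[i == 0] = -1.0
    return d
```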
disp_write.m
function disp_write (D,filename)
% saves disparity map D to png file
% for details see readme.txt
D = double(D);
I = D*256;
I(D==0) = 1;
I(I<0) = 0;
I(I>65535) = 0;
I = uint16(I);
imwrite(I,filename);
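And the inverse, mirroring disp_write.m (a sketch; note that MATLAB's uint16() rounds to nearest, which np.rint reproduces):

```python
import numpy as np

def encode_disp(d):
    """Encode a float disparity map into KITTI's 16-bit convention.

    Mirrors disp_write.m: scale by 256, remap exact zeros to raw value 1
    so they are not confused with 'invalid', and clamp out-of-range
    values to 0 (invalid) before casting to uint16.
    """
    d = np.asarray(d, dtype=np.float64)
    i = d * 256.0
    i[d == 0] = 1.0
    i[i < 0] = 0.0
    i[i > 65535] = 0.0
    return np.rint(i).astype(np.uint16)  # round like MATLAB's uint16()
```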
I have rewritten disp_to_color.m in Python; issues from anyone else are welcome, and I will try to make the script run faster.
https://github.com/passion3394/PSMNet_CPU/blob/master/disp_to_color.py
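If anyone wants a quick stand-in while that script matures, here is a simplified approximation (my own linear blue-to-green-to-red ramp, NOT the devkit's actual colormap):

```python
import numpy as np

def disp_to_color_approx(d, max_disp=None):
    """Simplified stand-in for disp_to_color.m (sketch only).

    Normalizes by the maximum disparity (the same implicit scaling the
    MATLAB function applies when called without an explicit max) and maps
    values through a linear blue -> green -> red ramp. The real KITTI
    colormap is more elaborate; see the Python port linked above.
    """
    d = np.asarray(d, dtype=np.float64)
    if max_disp is None:
        max_disp = d.max()
    t = np.clip(d / max_disp, 0.0, 1.0)
    r = np.clip(2.0 * t - 1.0, 0.0, 1.0)  # ramps up over t in [0.5, 1]
    g = 1.0 - np.abs(2.0 * t - 1.0)       # peaks at t = 0.5
    b = np.clip(1.0 - 2.0 * t, 0.0, 1.0)  # ramps down over t in [0, 0.5]
    return np.stack([r, g, b], axis=-1)   # H x W x 3, floats in [0, 1]
```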
Hi, thanks for sharing your code. I have some questions that need your help. My environment: PyTorch 0.4.0, torchvision 0.2.0.
I just downloaded your pretrained params and cloned your code, then executed this command
python submission.py --datapath [kitti_scene_flow_path]/testing/ --loadmodel pretrained_model_KITTI2015.tar
without any fine-tuning. Then I used disp_read.m and disp_write.m from the MATLAB code to rewrite the results. Finally, I submitted them to the KITTI website for evaluation. Below is the result. You can see that there is a 0.6% difference between us. The result pictures look almost identical, but my error map is lighter in color than yours.
I also executed this command
python submission.py --datapath [kitti_scene_flow_path]/training/ --loadmodel pretrained_model_KITTI2015.tar
and then used the MATLAB code to evaluate the results. I recorded every image's disparity error in the exp.txt file; the average disparity error is 0.95%. I have also attached one of them below. The error map is as light as in the testing result. I think some mistake must have been made somewhere; please check, thanks a lot! exp.txt
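For anyone reproducing these numbers, the d_err = disp_error(D_gt, D_est, tau) call with the devkit's default tau = [3 0.05] computes the D1 outlier rate; a Python sketch (my reimplementation, not the devkit code):

```python
import numpy as np

def disp_error_rate(d_gt, d_est, tau=(3.0, 0.05)):
    """KITTI 2015 D1 outlier rate, as computed by disp_error.m.

    A pixel with valid ground truth counts as an outlier when its
    absolute disparity error exceeds tau[0] pixels AND tau[1] (5%) of
    the true disparity. Pixels with d_gt <= 0 are ignored.
    """
    d_gt = np.asarray(d_gt, dtype=np.float64)
    d_est = np.asarray(d_est, dtype=np.float64)
    valid = d_gt > 0                       # -1 / 0 marks missing ground truth
    err = np.abs(d_gt - d_est)
    bad = valid & (err > tau[0]) & (err > tau[1] * d_gt)
    return bad.sum() / max(valid.sum(), 1)
```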