ckwllawliet opened this issue 7 years ago
@ckwllawliet I also hit the same problem after updating Torch. In debug mode, I found that the matching costs of the initial disparity map are NaN in the border region of width 1:disp_max.
I now realize this is caused by how the min() function handles NaN values; its behavior has evidently changed in the new Torch version. After removing all NaN values before calling min(), the result got better, but it is still worse than before. Could you help us fix this problem? @jzbontar
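The order-dependence of NaN under a comparison-based minimum can be illustrated in plain Python (a standalone sketch of the general pitfall, not the Torch code itself):

```python
import math

# Every comparison with NaN is False, so a comparison-based min()
# either keeps or skips a NaN depending on where it appears.
print(min([float('nan'), 0.3, 0.1]))   # nan -- a NaN seen first is never replaced
print(min([0.3, float('nan'), 0.1]))   # 0.1 -- a NaN seen later is silently skipped

# The workaround mentioned above (removing NaNs before min()) in this sketch:
costs = [0.3, float('nan'), 0.1]
print(min(c for c in costs if not math.isnan(c)))  # 0.1
```

Torch's own min() is free to resolve these comparisons differently from Python, which is exactly why its result can change between versions when NaNs are present.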
I solved the problem. In main.lua, change

```lua
vols = torch.CudaTensor(2, disp_max, x_batch:size(3), x_batch:size(4)):fill(0 / 0)
vol = torch.CudaTensor(1, disp_max, output:size(3), output:size(4)):fill(0 / 0)
```

to

```lua
vols = torch.CudaTensor(2, disp_max, x_batch:size(3), x_batch:size(4)):fill(1.0 / 1.0)
vol = torch.CudaTensor(1, disp_max, output:size(3), output:size(4)):fill(1.0 / 1.0)
```

This fills the cost volumes with a finite default value (1.0) instead of NaN (0 / 0), so pixels that never receive a matchinging cost, such as occluded pixels, get a sensible default.
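To see why the finite fill helps, here is a small Python sketch (a hypothetical single column of a cost volume, not the actual Torch tensors): with a NaN fill, the arg-min over disparities can silently pick the wrong disparity, while a finite fill behaves as expected.

```python
import math

disp_max = 4

# One column of the cost volume for a border pixel: only disparity 2
# ever receives a real matching cost; the rest keep the fill value.
nan_col = [float('nan')] * disp_max      # fill(0 / 0)   -> NaN
finite_col = [1.0] * disp_max            # fill(1.0/1.0) -> finite default
nan_col[2] = 0.2
finite_col[2] = 0.2

# Comparison-based arg-min: comparisons with NaN are always False, so
# the first (NaN) entry is never displaced and the wrong disparity wins.
best_nan = min(range(disp_max), key=lambda d: nan_col[d])
best_finite = min(range(disp_max), key=lambda d: finite_col[d])
print(best_nan)     # 0 -- wrong: the NaN at index 0 is never beaten
print(best_finite)  # 2 -- the actually cheapest disparity
```

The same logic explains why the one-line fill change is enough: once every entry is finite, min() over the disparity dimension is well-defined regardless of how the Torch version at hand orders NaN comparisons.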
@jzbontar Hello, I ran into some problems when trying to reproduce your work. When I compute the error rate as described in the README, for example by running `./main.lua kitti fast -a test_te -net_fname net/net_kittifast-a_train_all.t7`, I get a much higher error rate than reported in the paper, about 11%. Every error rate I measured is higher than the results in the paper. I also tested with the network you trained, but the results are the same as before. The test also takes less time than yours, as if I had skipped some steps, but I can't find where. Do you have any idea what might be wrong, or is there something I need to consider when testing? Has anyone else met the same problem? Looking forward to your reply, thanks!