Closed — zhouguiyun-uestc closed this issue 2 years ago
It is true that it is not ideal as it is now; there is some data dependency that is ignored. On the other hand, the red-black checkerboard approach does not have this issue, but its propagation is slower and it requires more iterations. Besides, the worst-case scenario now is that the value from the previous iteration is used instead of the one from the current iteration, which is basically the same as the checkerboard case. But most of the time, I think 90%, it will do a proper iteration using the latest estimate, because the zig-zag lines are longer than the number of threads.
However, I am always open to suggestions. If you see a better way to do this (or anything else) without a speed penalty, I'd be more than happy to listen and implement it. In fact this is the entire goal of the project: for everybody to collaborate and improve its performance. The more brains thinking about the problem the better.
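For comparison, here is a minimal Python sketch (not OpenMVS code) of the red-black checkerboard scheme discussed above, applied to a toy 4-neighbour averaging propagation. The point of the scheme is that each colour reads only values of the other colour, so all pixels of one colour can be updated in parallel with no read/write race:

```python
import numpy as np

def red_black_sweep(values):
    """One red-black (checkerboard) iteration of a toy 4-neighbour
    averaging propagation. A generic sketch only: within each colour,
    every cell reads only cells of the other colour, so all cells of
    one colour may be updated concurrently without data races."""
    rows, cols = values.shape
    r, c = np.indices((rows, cols))
    for parity in (0, 1):                 # 0 = "red" cells, 1 = "black" cells
        mask = (r + c) % 2 == parity
        padded = np.pad(values, 1, mode="edge")
        # average of up/down/left/right neighbours for every cell
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        values[mask] = neigh[mask]        # whole colour class updates together
    return values

v = red_black_sweep(np.array([[0.0, 1.0],
                              [1.0, 0.0]]))
print(v)  # → [[0.5, 0.75], [0.75, 0.5]]
```

The second half-sweep already sees the red cells updated in the first half, which is why checkerboard schemes still propagate information within one iteration, just more slowly than a fully sequential scan.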
Thanks for the quick response. I will look at the code further and try to figure out why the data dependency does not affect the correctness of the propagation.
Possible enhancement of OpenMVS has been mentioned in #438. I think for dense point cloud generation, OpenMVS may also consider other open-source implementations such as Gipuma.
By looking at the code, it can be seen that for each image whose depth map is to be computed, the pixels are rearranged into an array `coords` following a zigzag order (`DepthEstimator::MapMatrix2ZigzagIdx`). Pixels are then taken from `coords` and processed by multiple threads in parallel (`EstimateDepthMapTmp`). The `coords` array seems to store the pixels in strides, each stride holding pixels as in Fig. 5b of Christian Bailer, Manuel Finckh, Hendrik P. A. Lensch, "Scale Robust Multi View Stereo", ECCV 2012.

In the comment of the function `DepthEstimator::MapMatrix2ZigzagIdx`, the following 2D pixels

```
1 2 3
4 5 6
7 8 9
```

are rearranged into the array `1 2 4 7 5 3 6 8 9`.

My question is: when pixels 1, 2, and 4 are processed in parallel, there should be a data-dependency issue. For example, when pixel 2 is processed, it needs to read the depth information of pixel 1, but the depth information of pixel 1 may be simultaneously updated by another thread. In other literature, a red-black checkerboard scheme is used to avoid this issue. However, it looks like this is not the strategy used by OpenMVS.
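For reference, the mapping quoted from that comment is an anti-diagonal zigzag traversal. A minimal Python sketch (not the actual C++ `MapMatrix2ZigzagIdx` implementation) that reproduces it:

```python
def zigzag_indices(rows, cols):
    """List (row, col) cells along anti-diagonals, alternating
    direction, similar in spirit to the ordering described in the
    DepthEstimator::MapMatrix2ZigzagIdx comment."""
    order = []
    for d in range(rows + cols - 1):
        # cells on anti-diagonal d satisfy r + c == d
        cells = [(r, d - r)
                 for r in range(max(0, d - cols + 1), min(rows, d + 1))]
        if d % 2 == 0:
            cells.reverse()  # even diagonals are walked bottom-to-top
        order.extend(cells)
    return order

# Label the 3x3 matrix cells 1..9 row by row, as in the comment:
order = zigzag_indices(3, 3)
print([r * 3 + c + 1 for r, c in order])  # → [1, 2, 4, 7, 5, 3, 6, 8, 9]
```

Consecutive entries of this ordering are 4-neighbours along each diagonal, which is why threads pulling adjacent entries from `coords` can race on each other's neighbour reads.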
Could anyone please explain how the PatchMatch stereo propagation procedure is parallelized to avoid data dependencies?
Thanks!