Closed dfrumkin closed 4 years ago
Hello Simon! If I want to run your refinement network on a given disparity map that was not obtained from your first network, what are the requirements on the input? Suppose the disparity map is normalized to 0..1, where 1 corresponds to the nearest object and 0 to the farthest one (alternatively 0..255) — should I scale it in some way before feeding it to the refinement network?

You can call it with `tenDisparity = disparity_refinement(tenImage, tenDisparity)`. The guiding image needs to be four times the resolution of the disparity map. There is no need to shift or scale the disparity; the refinement network is invariant to shifts and scales. Let me know in case you run into any issues.

Thank you, Simon! I've played with it a little, and indeed the difference after scaling/shifting is negligible. The only catch is that the values have to be non-negative.
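Putting the constraints from this thread together — guiding image at four times the disparity resolution, disparity values non-negative, shift/scale otherwise irrelevant — a minimal input-preparation sketch might look like the following. The helper `prepare_refinement_inputs` is hypothetical (it is not part of the repository); it only validates and clamps the inputs before a call like `disparity_refinement(tenImage, tenDisparity)` would be made:

```python
import numpy as np

def prepare_refinement_inputs(disparity, image):
    """Hypothetical helper: check and clamp inputs before calling a
    refinement network such as disparity_refinement(tenImage, tenDisparity).

    disparity: 2D array (H, W), any shift/scale is fine per the thread
    image:     guiding image (4H, 4W, C) — four times the disparity resolution
    """
    h, w = disparity.shape
    ih, iw = image.shape[:2]

    # The guiding image must be four times the resolution of the disparity map.
    assert (ih, iw) == (4 * h, 4 * w), (
        f"image is {ih}x{iw}, expected {4 * h}x{4 * w} for a {h}x{w} disparity map"
    )

    # The network is invariant to shifts and scales, so no normalization is
    # needed — but the values have to be non-negative, so clamp at zero.
    return np.clip(disparity, 0.0, None)


# Usage: a 0..1 (or 0..255) disparity map passes through unchanged apart
# from the non-negativity clamp.
disparity = np.random.rand(96, 128).astype(np.float32)   # H x W
image = np.zeros((384, 512, 3), dtype=np.float32)        # 4H x 4W x 3
clamped = prepare_refinement_inputs(disparity, image)
```

From here, the clamped disparity and the full-resolution image would be converted to tensors and passed to the refinement network as shown in the reply above.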