Closed xingranzh closed 4 years ago
Hi @xingranzh
Thanks for your interest in our work!
In the Differentiable PatchMatch code, we allow a pixel in image1 to be matched to any pixel in image2, so the search space is 2D.
The PatchMatch code in the deeppruner directory is specific to the DeepPruner model (stereo matching on rectified images), so the search space is 1D.
That's the reason for using grid_sample in DifferentiablePatchMatch and torch.gather in the deeppruner PatchMatch code.
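To make the 1D case concrete, here is a minimal sketch (shapes and variable names are assumptions for illustration, not the repo's actual code): on rectified stereo pairs, each pixel in the left image can only match a pixel on the same row of the right image, so the lookup is a single torch.gather along the width axis.

```python
import torch

N, C, H, W = 1, 4, 8, 32
right = torch.randn(N, C, H, W)  # right-image feature map

# Per-pixel integer disparities (assumed precomputed); the matched
# column in the right image is x - d for a left-image pixel at x.
x = torch.arange(W).view(1, 1, 1, W)
disp = torch.randint(0, 4, (N, 1, H, W))
idx = (x - disp).clamp(0, W - 1).expand(-1, C, -1, -1)  # (N, C, H, W)

# 1D search: gather along the last (width) dimension only.
warped = torch.gather(right, 3, idx)
print(warped.shape)  # torch.Size([1, 4, 8, 32])
```

Because gather indexes along one dimension, no 2D sampling grid (and hence no grid_sample) is needed here.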
That said, we did notice that grid_sample consumes significant memory, which is why we didn't store the result of grid_sample in any variable. I think linearizing the input and grid to 1D and then applying torch.gather instead of grid_sample is a good way to reduce the memory requirements.
Hello,
I am using this code; it is nice work!
I found that the F.grid_sample function takes a lot of GPU memory because it creates a copy of the tensor on every iteration.
If I replace F.grid_sample in line 135 with torch.gather, like the 1-dimensional case in line 111,
will it still create many copies of tensors?
Or do you have any better options for solving this problem? In my understanding, these copies are not necessary; they are only there for updating the offsets.
Thanks.