Open liuqk3 opened 6 years ago
The model doesn't check whether a tensor is contiguous, which results in wrong crops, as you mentioned.
The way to fix this:
# tensor = torch.from_numpy(arr)   # replace this line with:
tensor = torch.from_numpy(arr).contiguous()
# or, equivalently, make the NumPy array contiguous before conversion:
# image_data = np.asarray(image_data, dtype=np.float32)   # replace this line with:
image_data = np.ascontiguousarray(image_data, dtype=np.float32)
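To see why this fix is needed, here is a minimal sketch (not part of the original test code; the array shapes are made up for illustration) showing that a patch sliced out of a larger image is a non-contiguous view, and that np.ascontiguousarray copies it into a fresh C-contiguous buffer with identical values:

```python
import numpy as np

# Simulate loading an image and picking a small patch from it,
# as described in this issue.
image_data = np.arange(8 * 8 * 3, dtype=np.float32).reshape(8, 8, 3)
patch = image_data[1:6, 2:7]            # a 5x5x3 view into image_data

# The view keeps the parent's strides, so it is NOT C-contiguous.
print(patch.flags['C_CONTIGUOUS'])      # False

# np.ascontiguousarray copies the data into a C-contiguous buffer.
fixed = np.ascontiguousarray(patch, dtype=np.float32)
print(fixed.flags['C_CONTIGUOUS'])      # True
print(np.array_equal(fixed, patch))     # True: same values, new layout
```

torch.from_numpy shares the NumPy buffer, so fixing contiguity on either side (NumPy or the resulting tensor) works.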
Then I got the right crop:
tensor([[[[67., 67., 66., 66., 64.],
[67., 67., 66., 64., 64.],
[67., 67., 65., 64., 65.],
[67., 65., 65., 65., 65.],
[64., 64., 64., 64., 64.],
[63., 63., 63., 63., 63.],
[62., 62., 62., 62., 62.]],
[[65., 65., 64., 64., 64.],
[64., 64., 63., 64., 64.],
[64., 64., 65., 64., 65.],
[64., 65., 65., 65., 65.],
[64., 64., 64., 64., 64.],
[63., 63., 63., 63., 63.],
[62., 62., 62., 62., 62.]],
[[53., 53., 52., 52., 52.],
[55., 55., 54., 54., 54.],
[55., 55., 55., 54., 55.],
[55., 55., 55., 55., 55.],
[54., 54., 54., 54., 54.],
[53., 53., 53., 53., 53.],
[52., 52., 52., 52., 52.]]]], requires_grad=True)
tensor([[[[67., 67., 66.],
[67., 67., 66.],
[67., 67., 65.]],
[[65., 65., 64.],
[64., 64., 63.],
[64., 64., 65.]],
[[53., 53., 52.],
[55., 55., 54.],
[55., 55., 55.]]]], grad_fn=<CropAndResizeFunction>)
Sorry for the mistakes.
@longcw Thank you very much! You have solved my problem :)
I wrote a simple test (code 1) as follows. I first generate a 3D ndarray randomly and add a new dimension to represent the batch dimension. The box is set to [0, 0, 3, 3], and the cropped width and height of RoIAlign are both set to 3. The output (output 1) is what I expect.
code 1
output 1
Next, I modified code 1 and got code 2. The only change is how the image data is generated: instead of generating it randomly, I load an image from disk and pick a small patch from it. But something goes wrong. code 2 and its output are as follows:
code 2
output 2
As you can see, the output is not what I want, and I cannot figure out how RoIAlign works here. Why is this happening? Can anyone tell me, please?
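For what it's worth, the wrong crop can be reproduced without RoIAlign at all: a kernel that receives only a data pointer and a shape, and assumes C-contiguous strides, reads the wrong elements from a sliced view. A minimal NumPy sketch (the stride arithmetic here is illustrative, not the extension's actual code):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

image = np.arange(36, dtype=np.float32).reshape(6, 6)
patch = image[1:4, 2:5]                  # a 3x3 view into image, not contiguous
print(patch.flags['C_CONTIGUOUS'])       # False

itm = patch.itemsize
# What a kernel would read from the same starting address if it assumed
# C-contiguous layout (row stride = 3 elements instead of the real 6):
wrong = as_strided(patch, shape=(3, 3), strides=(3 * itm, itm))
print(np.array_equal(wrong, patch))      # False -> a garbage crop

# After forcing a contiguous copy, the same assumption becomes valid:
fixed = np.ascontiguousarray(patch)
print(np.array_equal(as_strided(fixed, (3, 3), (3 * itm, itm)), fixed))  # True
```

This is consistent with the fix above: the random array in code 1 was already contiguous, while the patch picked from the loaded image in code 2 was a view, so only code 2 produced wrong output.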