Open 421psh opened 4 years ago
Is your Python version 2.7?
I have tried both Python versions (2.7 and 3.6), with the same result.
Here is my code for running inference with your model:
import numpy as np
import skimage.io
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image
# (plus the s2dN model definition from this repository)

# Load the RGB image and the sparse LiDAR depth (16-bit PNG, KITTI format)
img_orig = skimage.io.imread('rgb_kitti.png')
lidar = skimage.io.imread('lidar_kitti.png').astype(np.float32)
sparse = lidar / 256.0                           # convert to metric depth
binary_mask = np.where(sparse > 0.0, 1.0, 0.0)   # 1 where a LiDAR point exists

# Build (N, C, H, W) input tensors
img = img_orig.transpose(2, 0, 1)
left = torch.FloatTensor(img.reshape(1, img.shape[0], img.shape[1], img.shape[2])).cuda()
sparse = torch.FloatTensor(sparse.reshape(1, 1, sparse.shape[0], sparse.shape[1])).cuda()
mask = torch.FloatTensor(binary_mask.reshape(1, 1, binary_mask.shape[0], binary_mask.shape[1])).cuda()

# Load the pretrained model
model = s2dN(1)
model = nn.DataParallel(model, device_ids=[0])
model.cuda()
state_dict = torch.load('depth_completion_KITTI.tar')['state_dict']
model.load_state_dict(state_dict)
model.eval()

with torch.no_grad():
    outC, outN, maskC, maskN = model(left, sparse, mask)

# Fuse the two pathways using softmax-normalized attention maps
tempMask = torch.zeros_like(outC)
predC = outC[:, 0, :, :]
predN = outN[:, 0, :, :]
tempMask[:, 0, :, :] = maskC
tempMask[:, 1, :, :] = maskN
predMask = F.softmax(tempMask, dim=1)
predMaskC = predMask[:, 0, :, :]
predMaskN = predMask[:, 1, :, :]
pred1 = predC * predMaskC + predN * predMaskN

# Save the prediction as a 16-bit PNG (depth * 256, KITTI convention)
pred = torch.squeeze(pred1).data.cpu().numpy()
pred = np.where(pred <= 0.0, 0.9, pred)          # clamp non-positive depths
pred_show = (pred * 256.0).astype('uint16')
res_buffer = pred_show.tobytes()
img_pred = Image.new("I", pred_show.T.shape)
img_pred.frombytes(res_buffer, 'raw', "I;16")
img_pred.save('img_pred.png')
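As a side note, reading the saved prediction back as metric depth just mirrors the /256.0 scaling applied to the input LiDAR image (a hedged sketch, not code from the repo):

depth_m = skimage.io.imread('img_pred.png').astype(np.float32) / 256.0  # depth in meters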
I have also tried different images from the KITTI dataset, and predMaskC is always practically empty. As a result, the dense depth from the color pathway has no impact on the final prediction.
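For example, a quick check of the attention-map statistics right after the forward pass (a minimal sketch reusing the variable names from the snippet above, not code from the repo):

predMaskC_np = predMaskC.data.cpu().numpy()
predMaskN_np = predMaskN.data.cpu().numpy()
# The two maps come from a 2-way softmax, so they sum to 1 at every pixel;
# a maskC mean near 0 (and maskN mean near 1) means the color pathway is effectively ignored.
print('maskC mean: %.4f  maskN mean: %.4f' % (predMaskC_np.mean(), predMaskN_np.mean()))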
The pixel values in the mask range from 0 to 1; you should multiply by 255 for visualization.
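For instance, a minimal sketch (assuming PIL is available and reusing predMaskC from the snippet above):

from PIL import Image
import numpy as np

maskC_vis = predMaskC.squeeze().data.cpu().numpy()  # values in [0, 1]
Image.fromarray((maskC_vis * 255.0).astype(np.uint8)).save('maskC_vis.png')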
The problem is not the visualization; matplotlib can display images both in the 0-255 range and in the 0-1 range. The main issue is that the color-pathway mask consists almost entirely of zeros, while the normal-pathway mask consists almost entirely of ones.
Has anyone faced such an issue?
Sorry, I don't know. The code works fine when I test it on my machine.
@JiaxiongQ Thank you for your responsiveness. Finally, after careful preprocessing of the input data and following all the requirements, I managed to get a correct depth map. The result is great, but I am wondering: are you planning to adapt the code to Python 3 and to up-to-date versions of torch and torchvision?
Great! That is a good suggestion, but we don't have such a plan for the near future.
Hello. I am trying to run your code and encountered strange behavior in the attention maps (predMaskC and predMaskN). I am using your pretrained model. I also tried torchvision 0.2.0 and torch 0.4.0, but it didn't help. Do you have any suggestions?
Why is my result like this? (image: https://user-images.githubusercontent.com/36818370/81932377-4fc95000-961e-11ea-8d4b-9f436768cf12.png)
The Python version should be 2.7.
I have used Python 2.7 (it took a long time to configure the environment), but the result still looks like this. Oh no.