JUGGHM / PENet_ICRA2021

ICRA 2021 "Towards Precise and Efficient Image Guided Depth Completion"
MIT License

Is it possible to access TensorBoard during training? #20

Open Laihu08 opened 3 years ago

Laihu08 commented 3 years ago

Hi, I am just curious to know whether it's possible to access the TensorBoard logs during training, like other projects do, because it helps to see how well our loss function is converging. Thanks in advance!

JUGGHM commented 3 years ago

Hi! Of course you could add a TensorBoard writer to record training curves. As for myself, in this project I observe the error metrics on the validation set after each epoch to judge convergence empirically.
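For example, a minimal sketch using torch.utils.tensorboard (the log directory, tag names, and loop here are just placeholders, not code from this repo):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='runs/penet')  # placeholder log directory

for step, loss_value in enumerate([0.9, 0.7, 0.5]):  # stand-in for the real training loop
    writer.add_scalar('train/loss', loss_value, step)
writer.close()
# then inspect with: tensorboard --logdir runs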

Laihu08 commented 3 years ago

Oh, that's nice. Since I am new to this kind of training, I don't know how to add a TensorBoard writer to your training code, and I don't understand how to observe the error metrics on the validation set after each epoch. For example, if the training set is 26k images and the validation set is 1000, and training runs for around 100 epochs, how do I check the error metrics? I know you must be busy right now, and I am sorry to ask so many questions. Please guide me through this!

JUGGHM commented 3 years ago

The results are recorded in the generated file 'val.csv'. Feel free to ask if you have more questions.

Laihu08 commented 3 years ago

Hi, I am successfully training on the selected data as per your suggestion, thank you very much. I have some doubts while training: after every epoch it shows a summary of the training round with MAE and RMSE. How is the error calculated in that section? To calculate an error we need ground truth, so what are MAE and RMSE during training, and why are they reported? For validation I understand: for every epoch a checkpoint is saved, and with that checkpoint depth maps are computed and compared with the ground truth in val_selection_cropped (1000 samples).

2) Where are those results saved? I will attach a picture for reference, because in main.py I set the path for submit_test but nothing is saved there! Can you point out where it is saved?

JUGGHM commented 3 years ago

Hi, (1) during the training procedure the error metrics are accumulated in an online fashion. For the definitions of RMSE and MAE you might refer to NLSPN [Park et al., ECCV 2020].

(2) Only when the mode passed to the iterate function in main.py is "test_completion" will the 1000 test images be saved.
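Regarding (1), a minimal sketch of masked RMSE/MAE computed over valid ground-truth pixels (an illustration of the definitions, not the repo's exact metrics code):

import torch

def masked_rmse_mae(pred, target):
    mask = target > 0                      # KITTI stores 0 where depth is missing
    err = pred[mask] - target[mask]
    rmse = torch.sqrt((err ** 2).mean())   # root mean squared error
    mae = err.abs().mean()                 # mean absolute error
    return rmse, mae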

Laihu08 commented 3 years ago

Thank you very much for the detailed explanation. I am curious about the final prediction: as you can see, some pixels have no depth value in the ground truth, but the prediction computes depth values for those missing pixels. I will attach a picture for reference. What does the yellow color in the depth prediction mean? As you can notice, yellow and red are rendered in the prediction where there is no depth value in the ground truth. Maybe this question is a little confusing, but I hope you understand, because I am really confused about the value range of the colormap. Does every color in the colormap represent some range, e.g. purple indicating 0-5 m? If so, please explain! Thank you in advance; your valuable answers are helping me in my research.

JUGGHM commented 3 years ago

Hi! (1) Note that the ground-truth maps are automatically generated by accumulating 11 consecutive LiDAR frames, with outliers removed, as in Sparsity Invariant CNNs [Uhrig et al., 3DV 2017]. So they are semi-dense and can be regarded as partial versions of fully dense depth maps. Although supervision is not imposed at every pixel in any single frame, over the whole dataset supervision exists at most pixels, whether frequently or not. In addition, the color image is dense as well, so the model is indeed able to predict fully dense depth maps. (2) Yellow and red in this color mapping, I guess, might simply mean large values.

Laihu08 commented 3 years ago

Thank you very much JUGGHM, that was a clear explanation. But I am still wondering about the colormap: how does it assign different colors to particular distances in the dense depth map? Is there no value range for the colormap, or am I asking the wrong question? Which colormap did you use?

JUGGHM commented 3 years ago

I used jet here, and the range was set to 0~100 m.

Laihu08 commented 3 years ago

Thank you JUGGHM, I appreciate your patience in answering all my questions.

Laihu08 commented 3 years ago

Hi, I am curious to ask: how do you calculate the percentage of LiDAR points? For example, the sparse depth map has only about 6% valid points and the ground truth (denser depth map) about 30%. How are those percentages calculated, and how do you count the points in a depth map, i.e. how many points are projected onto the RGB image or present in the sparse depth map?

JUGGHM commented 3 years ago

Hello, you might use torch.where for generating a binary mask and then sum it up to count the number of valid pixels.
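For instance, a minimal sketch (assuming the depth map is a tensor in which 0 marks pixels without a LiDAR return):

import torch

def valid_percentage(depth):
    # binary mask: 1 where a LiDAR measurement exists, 0 elsewhere
    mask = torch.where(depth > 0, torch.ones_like(depth), torch.zeros_like(depth))
    return 100.0 * mask.sum().item() / mask.numel()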

Laihu08 commented 3 years ago

Thanks for the quick reply. I need to know the mechanism behind calculating those percentages; is it "projected point number / cropped RGB resolution"?

JUGGHM commented 3 years ago

valid percentage = the number of valid pixels / the number of all pixels (i.e. h x w)

Laihu08 commented 3 years ago

Thank you. So, according to the above calculation, we generate a binary mask for the sparse depth map, consider a pixel valid if the mask is 1 and invalid if it is 0, and then divide the number of valid pixels by the total number of pixels to get the valid percentage, right?

JUGGHM commented 3 years ago

Exactly!

Laihu08 commented 3 years ago

Thank you very much for the clear explanation.

Laihu08 commented 3 years ago

Hi, sorry to ask you more questions. Can I change the loss function from RMSE or MAE to the Ruber loss? Since this is also depth estimation, can we use this loss function in your work? If so, can you explain how to use it? Thank you very much.

JUGGHM commented 3 years ago

Hi! I think you must be referring to the Huber loss. L1 and L2 losses are provided in criteria.py, and you could use torch.where to conditionally combine them into the Huber loss via its formula.
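For illustration, a minimal masked Huber sketch along those lines (my own helper, not from criteria.py):

import torch

def masked_huber(pred, target, delta=1.0):
    mask = target > 0                      # only supervise valid ground-truth pixels
    e = (target - pred).abs()[mask]
    quadratic = 0.5 * e * e                # L2-like branch for small errors
    linear = delta * (e - 0.5 * delta)     # L1-like branch for large errors
    return torch.where(e <= delta, quadratic, linear).mean()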

Laihu08 commented 3 years ago

Hi, actually I am talking about the Ruber loss, which is the square root of the Huber loss. Maybe you can refer to this link "https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8803059" or search for the article "Robust Learning for Deep Monocular Depth Estimation". If you find this loss, please help me implement it in your code. Thanks in advance.

JUGGHM commented 3 years ago

Hi, here is a naive version; you could use it like the L1 or L2 losses:

import torch
import torch.nn as nn

class MaskedRuberLoss(nn.Module):
    def __init__(self, c=1):
        super(MaskedRuberLoss, self).__init__()
        self.c = c  # threshold between the quadratic and linear branches

    def forward(self, pred, target, weight=None):
        assert pred.dim() == target.dim(), "inconsistent dimensions"
        valid_mask = (target > 0).detach()  # supervise only pixels with ground truth
        e = (target - pred).abs()
        diff1 = e * e                              # quadratic branch, for e <= c
        diff2 = 2 * self.c * e - self.c * self.c   # linear branch, for e > c
        # select the branch per pixel, then take the root (Ruber = root of Huber);
        # note sqrt has an infinite gradient at 0, so clamping by a small eps may help
        diff = torch.sqrt(torch.where(e <= self.c, diff1, diff2))
        diff = diff[valid_mask]
        self.loss = diff.mean()
        return self.loss

I am not sure whether it is right or whether it will work. If an error occurs, you could report it directly.

Laihu08 commented 3 years ago

Thank you, sure I will let you know if there is any problem.

Laihu08 commented 3 years ago

Hi, I would like to ask a basic thing: should I replace the existing code in criteria.py, or add these lines to criteria.py? If I add them, how do I call the Ruber loss while training? And you mentioned c=1; what is c here? Thanks in advance!

JUGGHM commented 3 years ago

You could add these lines to criteria.py and create an instance of the loss in main.py. When initializing, you can set c as you like; here I just set the default value to 1.
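For instance, a hypothetical wiring in main.py, mirroring how the existing masked losses are constructed (the variable names are illustrative):

from criteria import MaskedRuberLoss

depth_criterion = MaskedRuberLoss(c=1)  # c sets where the loss switches from quadratic to linear
loss = depth_criterion(pred, gt)        # pred/gt: predicted and ground-truth depth tensors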

Laihu08 commented 3 years ago

Yeah, I got it. In main.py (https://github.com/JUGGHM/PENet_ICRA2021/blob/ee4318aaa82f72aa39fa97770196b167722e9515/main.py#L166), instead of MaskedMSELoss I need to use MaskedRuberLoss, right? I am facing some errors while training.

Laihu08 commented 3 years ago

Hi, is it possible to render a different color for invalid pixels in the sparse or dense depth map? Maybe we can assign a distinct color to pixels with no/infinite depth? As we can see in the sparse depth map or in the KITTI ground truth, near (small) depth values and missing/infinite depth are rendered in the same color.

JUGGHM commented 3 years ago

Hi! You could replace the original code in the corresponding function in utils.py with the following:

import numpy as np
import matplotlib.pyplot as plt

cmap = plt.cm.jet  # utils.py already defines a module-level jet colormap like this

def depth_colorize(depth, min=1e-3, max=100):
    invalid = depth < min              # remember invalid pixels before normalizing
    depth = (depth - min) / (max - min)
    img = 255 * cmap(depth)[:, :, :3]  # H, W, C
    img[invalid] = 255                 # paint invalid (zero-depth) pixels white
    return img.astype('uint8')

This marks zero (invalid) values in white. You could apply the same idea to mark very large values.

Laihu08 commented 3 years ago

Thank you very much, this is exactly what I expected. But if I want to run this code for a specific sparse depth map, how can I do it? That is, the input is a sparse depth map (.png) and the output is a custom colorized depth map, just for visualization.

JUGGHM commented 3 years ago

I strongly suggest the diverse color mapping recipes here, and you could manually set the exception values (extremely large or small). I think manually assigning each interval its own color is strenuous.
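That said, if you just want to colorize a single depth PNG offline, a standalone sketch (assuming KITTI's convention of uint16 PNGs storing depth × 256; the file names are hypothetical):

import numpy as np
from PIL import Image

def colorize_file(in_path, out_path, min=1e-3, max=100):
    # KITTI depth maps are uint16 PNGs storing depth * 256
    depth = np.asarray(Image.open(in_path), dtype=np.float32) / 256.0
    img = depth_colorize(depth, min, max)  # reuses depth_colorize from utils.py above
    Image.fromarray(img).save(out_path)

colorize_file('example_sparse_depth.png', 'example_colorized.png')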

Laihu08 commented 3 years ago

OK, sure, I understand. I think the jet colormap is enough to show the depth values, over an interval of 0.1 m to 90 m. But I want to use this colorization to read specific depth maps from the dataset; do you have code for that?

JUGGHM commented 3 years ago

You could record the alphabetical rank of the corresponding files and add an "if" condition in the iterate function in main.py so that the iteration executes only for the frames you want to read.
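A toy sketch of that idea (the file names and ranks are made up):

wanted = {1, 3}  # alphabetical ranks of the frames you want to process
for i, filename in enumerate(sorted(['a.png', 'c.png', 'b.png', 'd.png'])):
    if i not in wanted:
        continue  # skip everything except the selected frames
    print('processing', filename)  # colorize / save only these frames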

Laihu08 commented 3 years ago

Thank you very much @JUGGHM, I will definitely look into that. I really want to know how to do this process for the issue mentioned here: https://github.com/JUGGHM/PENet_ICRA2021/issues/22#issue-946045661. I hope you can help me out!