facebookresearch / OrienterNet

Source code for the paper "OrienterNet: Visual Localization in 2D Public Maps with Neural Matching"

How to calculate depth map alpha for pv image? #23

Closed Zhenghao97 closed 1 year ago

Zhenghao97 commented 1 year ago

Hello, great job on OrienterNet, it is really awesome!

I am having trouble figuring out how to compute the depth map for the PV (perspective-view) image. Fig. 7 in your paper shows the depth planes alpha, and it looks like depth is learned well from pose supervision alone. I would like to re-visualize the depth maps for the PV image, but I cannot find the related code in your repo.

I guess the `sample_depth_scores` function outputs the related information, right? [screenshot]

Could you explain the depth computation in detail? I would appreciate it!

sarlinpe commented 1 year ago

From visualize_predictions_mgl.ipynb:

# Per-pixel scores over the discrete scale bins
scales_scores = pred['pixel_scales']
# Normalize into a log-probability volume over the last (scale) dimension
log_prob = torch.nn.functional.log_softmax(scales_scores, dim=-1)
# Expected scale-bin index per pixel
scales_exp = torch.sum(log_prob.exp() * torch.arange(scales_scores.shape[-1]), -1)
# Per-pixel uncertainty estimates
total_score = torch.logsumexp(scales_scores, -1)
max_score = log_prob.max(-1).values.exp()
plot_images([scales_exp, max_score, total_score], cmaps='jet')
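For readers without the notebook at hand, the post-processing above can be sketched on a dummy score volume; the tensor shape and values below are illustrative placeholders, not outputs of the actual model:

```python
import torch

# Dummy per-pixel scale scores: height x width x num_scale_bins
torch.manual_seed(0)
scales_scores = torch.randn(4, 5, 8)

# Log-probabilities over the scale bins (last dimension)
log_prob = torch.nn.functional.log_softmax(scales_scores, dim=-1)

# Expected scale-bin index per pixel: sum_i p_i * i
scales_exp = torch.sum(log_prob.exp() * torch.arange(scales_scores.shape[-1]), -1)

# Uncertainty proxies: total evidence and peak probability per pixel
total_score = torch.logsumexp(scales_scores, -1)
max_score = log_prob.max(-1).values.exp()

print(scales_exp.shape)  # torch.Size([4, 5])
```

Each of the three maps is one scalar per pixel, so they can be displayed as images exactly as the notebook does with `plot_images`.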

log_prob is the log-probability volume at scales uniformly distributed within the range defined by model.conf.scale_range. As explained in the paper, the actual depth values are obtained as:

scale_min, scale_max = model.conf.scale_range
# Bins are uniform in log2 space
scales = 2**torch.linspace(scale_min, scale_max, model.conf.num_scale_bins)
# Depth is inversely proportional to pixel scale: depth = f / scale
depths = camera.f / scales

scales_exp is the expected pixel-wise scale while {max,total}_score are estimates of pixel-wise uncertainties.
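Putting the two snippets together, the scale-to-depth mapping can be sketched with stand-in values; the bin count, log2 range, and focal length below are illustrative assumptions, not the repo's actual `model.conf` defaults:

```python
import torch

# Illustrative configuration (stand-ins for model.conf.scale_range,
# model.conf.num_scale_bins, and camera.f)
scale_min, scale_max = -1.0, 3.0   # log2 range of pixel scales
num_scale_bins = 8
focal = 256.0                      # focal length in pixels

# Scale bins are spaced uniformly in log2 space
scales = 2 ** torch.linspace(scale_min, scale_max, num_scale_bins)

# Each scale bin maps to a depth plane: depth = f / scale
depths = focal / scales

print(depths[0].item(), depths[-1].item())  # 512.0 32.0
```

With these values the nearest scale bin (2^-1 = 0.5) corresponds to a depth of 512 px-units and the farthest (2^3 = 8) to 32, so larger pixel scales mean closer geometry.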

Zhenghao97 commented 1 year ago

Ohh, I missed that code because I had skipped over this part of the notebook.

Thanks very much!