Open senecobis opened 1 day ago
Hi @senecobis thank you for your interest and the questions,
1>
That's legacy code in my codebase, and I didn't bother simplifying it because normalizing twice does no harm.
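Why double normalization is harmless can be checked directly: after the first shift the minimum timestamp is 0, so subtracting the minimum a second time is a no-op. A minimal sketch (the event batch here is illustrative, not the repo's data):

```python
import numpy as np

# Toy event batch: columns are (x, y, t, polarity); values are illustrative.
batch = np.array([
    [3.0, 5.0, 10.2,  1.0],
    [4.0, 1.0, 10.7, -1.0],
    [2.0, 8.0, 11.0,  1.0],
])

# First normalization (as in the main script): shift timestamps to start at 0.
batch[..., 2] -= np.min(batch[..., 2])
after_first = batch[..., 2].copy()

# Second normalization (inside the optimization): the minimum is already 0,
# so subtracting it again changes nothing.
batch[..., 2] -= np.min(batch[..., 2])

assert np.allclose(after_first, batch[..., 2])  # idempotent
```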
2>
I couldn't understand your question; could you explain a bit more?
3>
I would use the same weight, 1. If you check `multi_focal_normalized_gradient_magnitude` in detail, the values returned are around 4 (when you use `forward_iwe`, `backward_iwe`, and `middle_iwe`), and this is similar to `multi_focal_normalized_image_variance`. Since the numerical ranges are similar, I think it's fine to use the same weight for `multi_focal_normalized_image_variance`.
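One way to sanity-check a loss weight is to compare the numerical ranges of the two objectives on the same image of warped events (IWE). A rough sketch, assuming simple variance and mean-gradient-magnitude definitions (the repo's `multi_focal_*` losses aggregate over forward/backward/middle IWEs and may normalize differently, so this is only a shape of the comparison, not the actual implementation):

```python
import numpy as np

def image_variance(iwe: np.ndarray) -> float:
    # Contrast measured as the variance of IWE pixel intensities.
    return float(np.var(iwe))

def gradient_magnitude(iwe: np.ndarray) -> float:
    # Mean magnitude of the spatial gradient of the IWE.
    gy, gx = np.gradient(iwe)
    return float(np.mean(np.sqrt(gx**2 + gy**2)))

rng = np.random.default_rng(0)
iwe = rng.poisson(2.0, size=(64, 64)).astype(float)  # toy event-count image

var_val = image_variance(iwe)
grad_val = gradient_magnitude(iwe)
# If the two values come out in the same order of magnitude,
# reusing the same weight (here, 1) is a reasonable starting point.
print(var_val, grad_val)
```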
Thank you for your reply @shiba24. 2) Basically I don't understand this line: `self.motion_to_dense_flow(pyramidal_motion, t_scale) * t_scale`
Btw, do you have by chance, either in this code base or others, contrast maximization for rotation estimation instead of optical flow?
> do you have by chance either in this code-base or others also contrast maximization for rotation estimation? Instead of optical flow
Yes, I do have one, though not included in this public repo.
> `self.motion_to_dense_flow(pyramidal_motion, t_scale) * t_scale`
Please ignore `t_scale`. It's the time scale, and the reason I have it here is somewhat technical. For the tile-based flow estimation, the estimated flow has size `2*N_tile`, not `2*N_pixel`.
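To make the role of `t_scale` concrete: if the optimized flow is expressed per unit time, multiplying by `t_scale` converts it into a pixel displacement over the actual event window. A hypothetical illustration (numbers and names are mine, not the repo's):

```python
import numpy as np

# Timestamps of an event slice (seconds); t_scale is the window length.
t = np.array([0.10, 0.12, 0.15, 0.18])
t_scale = t.max() - t.min()               # 0.08 s window

flow_px_per_s = np.array([50.0, -25.0])   # optimized flow, pixels/second
displacement = flow_px_per_s * t_scale    # pixel displacement over the window
print(displacement)                       # roughly [4., -2.]
```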
Raw event coordinates are (x, y) in the full pixel resolution (of course), so I need to interpolate (upscale) the `2*N_tile` flow to a dense (`2*N_pixel`) flow. That's why I have `interpolate_dense_flow_from_patch_tensor` and `interpolate_dense_flow_from_patch_numpy`.
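The upscaling step can be pictured with a nearest-neighbor version (the actual `interpolate_dense_flow_from_patch_*` helpers interpolate more smoothly; the shapes below are illustrative):

```python
import numpy as np

# Tile-based flow: 2 channels (u, v) on an 8x8 grid of tiles.
n_tiles = 8
tile_flow = np.random.default_rng(1).normal(size=(2, n_tiles, n_tiles))

# Dense resolution: each tile covers a 16x16 pixel patch (128x128 sensor).
patch = 16
dense_flow = tile_flow.repeat(patch, axis=1).repeat(patch, axis=2)
assert dense_flow.shape == (2, n_tiles * patch, n_tiles * patch)

# Every raw event at pixel (x, y) can now look up a flow vector directly.
u, v = dense_flow[:, 40, 40]
```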
Does that answer your question?
Hi @shiba24, thanks for the great work. I have some questions about the implementation of contrast maximization for optical flow. Inside `self.objective_scipy(x, events, coarser_motion)`, you have:

```
t_scale = events[:, 2].max() - events[:, 2].min()
dense_flow = self.motion_to_dense_flow(pyramidal_motion, t_scale) * t_scale
loss = self.calculate_cost(
```
1) I don't understand why you normalize twice: once in the main script, `batch[..., 2] -= np.min(batch[..., 2])`, and then again inside the optimization.
2) I don't understand why you upscale the optical flow from the motion parameters to obtain `dense_flow`. Why do you use it as the warp for the actual cost? Can't you use the warp of the current event stack, given the current motion parameters, implemented in `warp.py` inside `class Warp(object):`?
3) You implemented multiple losses but only used the gradient-based one, which makes sense for optical flow. Here you used a loss weight of 1. Which value would you use for `multi_focal_normalized_image_variance` for the same estimation problem?