Yes, it is possible to do that, but with EVIMO2 there is a better way. The linked code snippet assumes the pixels were collected from a "far away" viewpoint. With EVIMO2, better results can be achieved by using the per-pixel depth maps in the event camera frame.
The most straightforward way would be something like this: back-project each event-camera pixel into 3D using its depth and the event camera intrinsics, transform the resulting points into the RGB camera frame with the calibrated extrinsics, project them into the RGB image, and sample the colors there (see the sketch below).
Much of this is implemented in evimo_flow.py. If you adapt this script to render RGB information into the event camera frame, we would be happy to accept a PR!
Because the warped points are not guaranteed to fall inside the RGB image, there will be pixels in the event camera frame without RGB information.
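A minimal sketch of that depth-based warping, assuming the intrinsics and extrinsics have already been loaded from the dataset calibration (all names are illustrative, lens distortion is ignored, and this is not part of evimo_flow.py):

```python
import numpy as np

def warp_rgb_to_event_frame(rgb, depth_event, K_event, K_rgb, T_rgb_from_event):
    """Sample an RGB color for every event-camera pixel that has depth.

    rgb:              RGB image from the frame camera, HxWx3
    depth_event:      per-pixel depth map in the event camera frame, HexWe (meters)
    K_event, K_rgb:   3x3 intrinsic matrices (distortion ignored for brevity)
    T_rgb_from_event: 4x4 transform from the event camera frame to the RGB camera frame
    """
    He, We = depth_event.shape
    u, v = np.meshgrid(np.arange(We), np.arange(He))
    valid = depth_event > 0

    # 1. Back-project event-camera pixels to 3D using the per-pixel depth
    z = depth_event[valid]
    x = (u[valid] - K_event[0, 2]) / K_event[0, 0] * z
    y = (v[valid] - K_event[1, 2]) / K_event[1, 1] * z
    pts_event = np.stack([x, y, z, np.ones_like(z)])  # 4xN homogeneous points

    # 2. Transform the points into the RGB camera frame
    pts_rgb = T_rgb_from_event @ pts_event

    # 3. Project onto the RGB image plane
    uv = K_rgb @ (pts_rgb[:3] / pts_rgb[2])
    u_rgb, v_rgb = np.round(uv[0]).astype(int), np.round(uv[1]).astype(int)

    # 4. Keep only points that land inside the RGB image and in front of it
    Hr, Wr = rgb.shape[:2]
    inside = (u_rgb >= 0) & (u_rgb < Wr) & (v_rgb >= 0) & (v_rgb < Hr) & (pts_rgb[2] > 0)

    # 5. Sample colors; event pixels without a valid projection stay empty
    out = np.zeros((He, We, 3), dtype=rgb.dtype)
    vi, ui = np.nonzero(valid)
    out[vi[inside], ui[inside]] = rgb[v_rgb[inside], u_rgb[inside]]
    return out
```

Nearest-neighbor sampling is used here for brevity; bilinear interpolation of the RGB image would give smoother results.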
Thanks for providing a workable solution; I will try it out and share it once it is complete.
First and foremost, thanks for open-sourcing the new dataset! The new dataset provides higher-resolution image and event data, but it was collected with two independent cameras (Samsung and Flea), which leads to misalignment between the image and event data. My question is: is it possible to approximate alignment using the calibrated matrices, similar to DSEC (ref https://github.com/uzh-rpg/DSEC/issues/25#issuecomment-956491275)? This would be of great help for applications that combine the two kinds of data.
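For reference, the DSEC-style approximation asked about here reduces to a single homography built from the calibration matrices under a far-away-scene (infinite depth) assumption, which is exactly the assumption the reply above points out; the names below are illustrative and this is only a sketch, not code from either repository:

```python
import numpy as np
import cv2

def far_field_homography(K_event, K_rgb, R_rgb_from_event):
    # Maps event-camera pixels to RGB pixels, ignoring translation
    # (only a good approximation when the scene is far relative to the baseline).
    return K_rgb @ R_rgb_from_event @ np.linalg.inv(K_event)

def align_rgb_approx(rgb, K_event, K_rgb, R_rgb_from_event, event_size):
    # Resample the RGB image into the event camera frame with that homography.
    # WARP_INVERSE_MAP makes warpPerspective look up rgb at H @ (event pixel).
    H = far_field_homography(K_event, K_rgb, R_rgb_from_event)
    We, He = event_size
    return cv2.warpPerspective(rgb, H, (We, He),
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```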