wzpscott opened this issue 1 year ago
Hello, thank you for your kind words and interest in our work.
I didn't include a specific optical flow estimation method in the code because I want to give users the flexibility to run the algorithm with optical flow from any available source. There are a couple of ways to generate optical flow:
1. https://github.com/TimoStoff/event_cnn_minimal
2. https://github.com/tub-rip/event_based_optical_flow
3. https://github.com/uzh-rpg/E-RAFT
But of course you are free to choose other methods.
This work was only evaluated on rotation sequences because in those scenarios I could generate high-quality optical flow that is close to the ground truth. In rotation sequences, one can estimate a highly accurate angular velocity via CMax or obtain it from IMU data (the IJRR dataset provides it). The optical flow can then be computed from the angular velocity (see equation 7 of this paper). Please note that the equation simplifies in the pure-rotation case: the linear velocity is 0, so it is unnecessary to know the depth.
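As a rough sketch of that simplification: under pure rotation, the motion field reduces to the rotational (depth-independent) term of the classic Longuet-Higgins equations. The snippet below is my own illustration, assuming a pinhole camera; the function and variable names are not from the repository:

```python
import numpy as np

def rotational_flow(omega, K, width, height):
    """Dense optical flow induced by pure camera rotation.

    omega: (3,) angular velocity [rad/s] in the camera frame.
    K: 3x3 pinhole intrinsics matrix.
    Returns flow in pixels/s, shape (height, width, 2).
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u_pix, v_pix = np.meshgrid(np.arange(width), np.arange(height))
    # normalized image coordinates
    x = (u_pix - cx) / fx
    y = (v_pix - cy) / fy
    wx, wy, wz = omega
    # Rotational part of the motion field: no linear velocity,
    # so no depth term appears anywhere.
    u = x * y * wx - (1 + x**2) * wy + y * wz
    v = (1 + y**2) * wx - x * y * wy - x * wz
    # scale normalized-plane flow back to pixel units
    return np.stack([u * fx, v * fy], axis=-1)
```

With the angular velocity taken from the IMU (or from CMax), calling this once per event window yields the flow maps the algorithm consumes.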
As for your last question, the algorithm can adapt to complex camera motions; one just needs a way to compute the optical flow. If the camera motion is complex, for example 6-DoF, one can still estimate the flow with deep learning methods, but the performance of the algorithm will depend on the quality of the optical flow those methods provide. You can find an example in the paper.
Please don't hesitate to ask if you have further questions.
Thanks for your detailed reply. May I ask which specific method you used to generate the optical flow behind the results in the paper (Tab. 1 and Tab. 3)? Did you use CMax-based methods or obtain it from IMU data? I understand that generating optical flow falls outside the scope of this paper; however, I think it would be very helpful to include code or instructions for it, since different methods do affect the quality of the generated optical flow and, as you stated, 'the performance of the algorithm will depend on the quality of the optical flow'.
Hello, the results in Table 1 and Table 3 were obtained using the optical flow estimated by CMax, and they are close to the results obtained using the IMU, as supported by this paper.
To address your request, I will create another branch and show how to compute optical flow from the angular velocity. I would stick to getting the angular velocity from the IMU, since the IJRR dataset provides it. If one insists on trying CMax (for example, this one), the code can be integrated on one's own. The instructions for estimating optical flow with deep learning methods are included in their own repositories, so there is little for me to add: one just needs to save the events and the optical flow when the deep learning model runs inference.
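That save step could look roughly like the sketch below; `model` and `event_batches` are hypothetical placeholders for whatever pretrained flow network and event loader one happens to use:

```python
import os
import numpy as np

def dump_events_and_flow(model, event_batches, out_dir="flow_out"):
    """Save each event window together with the flow predicted for it.

    model: any callable mapping an event array to an (H, W, 2) flow map
           (placeholder for a pretrained network's inference call).
    event_batches: iterable of (events, timestamp) pairs (placeholder loader).
    """
    os.makedirs(out_dir, exist_ok=True)
    for i, (events, t) in enumerate(event_batches):
        flow = model(events)  # (H, W, 2) flow prediction for this window
        # one compressed file per window, ready to feed to the algorithm later
        np.savez(os.path.join(out_dir, f"{i:06d}.npz"),
                 events=events, flow=flow, t=t)
```

Keeping events and flow in the same file per window avoids any alignment bookkeeping when the algorithm reads them back.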
Integrating the optical flow estimation code can take up to two weeks.
Thanks a lot for your reply.
Update: Hi, I have tried to estimate optical flow using the pretrained EvFlowNet, as in ssl-e2vid. However, the results are not very satisfactory. Here are some results I reproduced on the Dynamic Rotation sequence, using the default regularizer configs in main.py (Denoiser, L1, and L2 results attached as images). And here are my questions:
Thanks in advance.
Hello, I uploaded the code to compute optical flow from the angular velocity and wrote some instructions in the README. For your questions:
Hi, thanks for your outstanding work. I'd like to try your method on my own event dataset, which has no ground-truth optical flow. Could you release the code for generating optical flow? I also notice that this work is evaluated on the '_rotation' sequences of the IJRR dataset, while other works (e2vid, ecnn, ...) are evaluated on the '_6dof' sequences. I wonder how well this work can adapt to complex camera motions.