Dataset generation

Via Unreal Engine, load the mesh of the object you want to study and run the project: the camera navigates around the object, producing a sequence of its different views, which is then analyzed to find feature correspondences.
To capture a sequence with a real camera instead, check out this documentation to see how to pass data from the camera to the computer.
Loading the sequence
Load the camera settings ("_camera_settings")
Load the RGB frames ("00xxxx.png")
Load the event frames ("00xxxx._ec.png")
Load the depth frames ("00xxxx.depth.mm.16.png")
Read the camera pose and the object pose from the "00xxxx.json" files
Check out the code here
Check out Python's OpenCV (cv2), which is used to generate the video sequence from the individual images.
Output
On the left are the RGB frames, in the middle the event representation, and on the right the depth maps.