Closed: goutamyg closed this issue 4 weeks ago.
Hi, thank you for your interest in our framework.
For questions 1 and 3, you can change everything you need from the "SettingsActor" block (sorry, the name was reversed in the readme; I have fixed it now). You can find it either in the "World Outliner" window, or by clicking the rectangular block in the scene called "Settings". In the "Details" window of that block, expand everything in the Default section to see all implemented gestures and camera types. I have now attached a screenshot to the GitHub readme. You can also simply duplicate the components in the Gesture List to play the gestures in sequence instead of one by one.
As for creating your own gesture (question 2), this requires some familiarity with Unreal Engine. There is a class Blueprint named "BasicSplineGesture" from which all the implemented gestures inherit. You would need to create your own class Blueprint that inherits from the basic one, but with a different variation of the hand path, also called a spline (for an example, see the "RotateGesture" Blueprint).
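If it helps to see the idea outside the engine: a spline-driven gesture essentially boils down to a list of hand-path control points that get interpolated into a smooth curve at playback time. The sketch below is a minimal Python illustration of that concept; the function names, the Catmull-Rom interpolation scheme, and the example control points are all assumptions for illustration — the framework itself does this inside Blueprints, not in Python.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Interpolate one Catmull-Rom segment between p1 and p2, t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def sample_gesture_path(control_points, samples_per_segment=10):
    """Sample a smooth hand path through the interior control points.

    The first and last points act as tangent handles, mirroring how a
    spline component needs context points at each end of the curve.
    """
    path = []
    for i in range(len(control_points) - 3):
        p0, p1, p2, p3 = control_points[i:i + 4]
        for s in range(samples_per_segment):
            path.append(catmull_rom(p0, p1, p2, p3, s / samples_per_segment))
    path.append(control_points[-2])  # end exactly on the last interior point
    return path

# A wave-like gesture: change these control points (e.g. to a circle) and
# you get a different gesture, while the sampling logic stays the same --
# analogous to overriding the spline in a child of "BasicSplineGesture".
wave = [(0, 0, 0), (0, 0, 0), (10, 5, 0), (20, 0, 0), (30, 5, 0), (30, 5, 0)]
path = sample_gesture_path(wave)
print(len(path), path[0], path[-1])
```

The design point is the same one the Blueprint hierarchy makes: the base class owns the sampling/playback machinery, and each concrete gesture only supplies a different path.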
Please let me know how it goes and whether you need further information.
Hi, thank you for your quick response! I can now change the gestures under SettingsActor. I have a few more questions, if you don't mind.
1. How is the depth camera's output generated/simulated in the captured depth data? Is it relative or absolute? I found all the camera variants as asset files under Contents/CameraTypes/. I was wondering if we can generate depth maps mimicking a time-of-flight sensor (e.g., Kinect) to obtain absolute depth (in cm or meters).
2. Can you elaborate on installing the open-source UEVideoRecorder plugin, with the required prerequisites, and adding it to the Plugins folder? I suppose that with the plugin installed, I can save the videos by enabling the Recording option and setting the appropriate Video File Path, as shown below.
3. How can I control the sequence of videos when I hit the Play button? I am guessing it is controlled by Mannequin_ControlRig_Take1, but I could be wrong, as I am new to Unreal Engine. Also, is it possible to save the RGB, depth, and IR videos into separate folders using the UEVideoRecorder plugin?

I appreciate your help.
Hi, for the depth camera (question 1): it is simulated using a flip-book animation of a grayscale noise texture. The noise in the depth data is modeled as a function of distance, as in "Chris Sweeney, Greg Izatt, and Russ Tedrake. 2019. A supervised approach to predicting noise in depth images. In 2019 International Conference on Robotics and Automation (ICRA). IEEE, 796–802." You can find the details in the "Camera Selection" paragraph of the Method section of the paper. The current configuration already mimics a time-of-flight sensor; however, you need to calibrate the virtual environment's coordinate system against a real environment's coordinates, or against the scale of your automotive environment.
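To make the distance-dependent part concrete, here is a small sketch of what "noise modeled by distance" means in practice: the standard deviation of the simulated sensor error grows with the true depth. Everything below — the quadratic form, the coefficients `a` and `b`, and the function names — is an illustrative assumption, not the framework's actual implementation or the fitted values from the Sweeney et al. paper.

```python
import random

def depth_noise_sigma(depth_m, a=0.0012, b=0.0019):
    """Noise standard deviation (meters) as a quadratic function of true depth.

    Farther surfaces get noisier readings, which is the qualitative behavior
    of ToF-style sensors such as the Kinect.
    """
    return a + b * depth_m * depth_m

def simulate_depth_reading(true_depth_m, rng):
    """Return one noisy absolute depth reading (meters)."""
    return true_depth_m + rng.gauss(0.0, depth_noise_sigma(true_depth_m))

# Example: a surface 2 m away yields readings scattered tightly around 2.0,
# while a 6 m surface would scatter noticeably more.
rng = random.Random(0)
print([round(simulate_depth_reading(2.0, rng), 4) for _ in range(3)])
```

If you calibrate the virtual coordinate system to your real automotive environment, the same idea applies after rescaling: fit `a` and `b` against measurements from your target sensor rather than using placeholder values.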
As for saving recordings (questions 2 and 3): UEVideoRecorder is an external plugin that is a bit outdated; you can follow the GitHub repo https://github.com/ash3D/UEVideoRecorder for details on how to set it up. You can also download the version of the framework that does not use this plugin and capture the output with any screen-recording software, which will give the same result.
Hi,
Thank you for sharing the framework related to your paper. I have a few questions regarding its usage: