NVlabs / DREAM

DREAM: Deep Robot-to-Camera Extrinsics for Articulated Manipulators (ICRA 2020)

How to annotate keypoints in NDDS in the format specified in DREAM? #27

Open Shashank-Prakash9 opened 1 year ago

Shashank-Prakash9 commented 1 year ago

I have installed NDDS, but there seems to be no way of marking the keypoints. Any help on this matter will be greatly appreciated.

tabula-rosa commented 1 year ago

Hi Shashank, thank you for your interest in DREAM!

For generating the DREAM synthetic datasets, we used the NDDS plug-in for Unreal Engine to export the keypoint information. Unfortunately, I no longer have access to this simulator, and it was not open-sourced for this project, so I am afraid that I can't provide support specifically for NDDS.

I would suggest posting your issue to the NDDS repository (https://github.com/NVIDIA/Dataset_Synthesizer) as they may be better equipped to provide support.

If there are questions about DREAM-specific usage, I might be able to help, but for NDDS more generally, I'm sorry I can't provide further support!

TontonTremblay commented 1 year ago

Yeah, we could not make the NDDS + robots setup public. I would suggest using nvisii + pybullet; I used it to generate the Watch It Move data. I could probably push a script in the next couple of weeks that could generate the right data, but I cannot guarantee it.
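The rough idea looks something like this (an untested sketch, not the DREAM pipeline; the URDF path and joint values are placeholders):

```python
import pybullet as p

# Headless physics client; the URDF path is a placeholder for your robot.
p.connect(p.DIRECT)
robot_id = p.loadURDF("panda.urdf", useFixedBase=True)

# Put the robot into an arbitrary joint configuration.
for j in range(p.getNumJoints(robot_id)):
    p.resetJointState(robot_id, j, targetValue=0.3)

# World-space pose of each link's URDF frame; these 3D points are the
# keypoints you would later project into the image.
for j in range(p.getNumJoints(robot_id)):
    state = p.getLinkState(robot_id, j, computeForwardKinematics=True)
    link_world_pos, link_world_orn = state[4], state[5]
    print(j, link_world_pos)
```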

Shashank-Prakash9 commented 1 year ago

Thank you so much, Tabitha and Jonathan; I appreciate your responses. Jonathan, I will definitely explore nvisii and pybullet for the data generation process. My only further question: was the keypoint annotation a feature your team added to NDDS, or was it already available? I already have a robot model loaded but haven't been able to annotate keypoints.

TontonTremblay commented 1 year ago

It was an added feature for NDDS. Back then we had UE4 experts helping us build the features we needed; I am afraid my own skills are too limited to add such a feature. But the idea is that you label a specific point on an asset and export it in image space. You can easily do that with nvisii and the script I wrote.

https://github.com/NVlabs/Deep_Object_Pose/tree/master/scripts/nvisii_data_gen will get you about 80% of the way there.
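For the "export in image space" part, the core idea is just pushing a 3D world point through the camera's world-to-local and projection matrices, roughly like this (untested sketch; `camera_entity`, the image size, and the helper name are placeholders, not functions from the repo):

```python
import nvisii

def project_point_to_image(camera_entity, point_world, width, height):
    """Project a 3D world point to (u, v) pixel coordinates; None if not projectable."""
    # World -> camera coordinates.
    cam_matrix = camera_entity.get_transform().get_world_to_local_matrix()
    p_cam = cam_matrix * nvisii.vec4(point_world[0], point_world[1], point_world[2], 1.0)

    # Camera -> clip space with the camera's projection matrix.
    p_clip = camera_entity.get_camera().get_projection() * p_cam
    if p_clip.w == 0.0:
        return None

    # Perspective divide to normalized device coordinates, then to pixels (flip y).
    ndc_x = p_clip.x / p_clip.w
    ndc_y = p_clip.y / p_clip.w
    u = (ndc_x * 0.5 + 0.5) * width
    v = (1.0 - (ndc_y * 0.5 + 0.5)) * height
    return (u, v)
```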

Things to update:

  1. Load the robot instead of the normal objects: https://github.com/NVlabs/Deep_Object_Pose/blob/master/scripts/nvisii_data_gen/single_video_pybullet.py#L410
  2. Update the robot pose + nvisii object state: https://github.com/owl-project/NVISII/blob/master/examples/24.urdf.py is a skeleton for how to achieve this (see also the sketch after this list).
  3. Export the joint-position keypoints in image space: https://github.com/NVlabs/Deep_Object_Pose/blob/43b685062e79caae921438a220a133895931261c/scripts/nvisii_data_gen/utils.py#L998 does this for the cuboid around an object, but the way the cuboid is created could simply be hacked to add a keypoint at (0, 0, 0) of the joint in its local frame and then export that single child.
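A rough sketch of how steps 2 and 3 could be wired together, assuming one nvisii entity per robot link named `link_0`, `link_1`, ... and the `project_point_to_image` helper sketched above (the entity names, the helper, and the JSON fields are placeholder assumptions, not the exact DREAM export format):

```python
import json
import random
import pybullet as p
import nvisii

def sync_and_export_keypoints(robot_id, camera_entity, width, height, out_path):
    keypoints = []
    for link_index in range(p.getNumJoints(robot_id)):
        # Step 2: copy the pybullet link pose onto the matching nvisii entity
        # (ignores any visual-mesh offset the URDF might define).
        pos, orn = p.getLinkState(robot_id, link_index,
                                  computeForwardKinematics=True)[4:6]
        entity = nvisii.entity.get(f"link_{link_index}")
        if entity is not None:
            entity.get_transform().set_position(nvisii.vec3(pos[0], pos[1], pos[2]))
            # pybullet quaternions are (x, y, z, w); nvisii expects (w, x, y, z).
            entity.get_transform().set_rotation(nvisii.quat(orn[3], orn[0], orn[1], orn[2]))

        # Step 3: the joint origin is (0, 0, 0) in the link frame, i.e. `pos`
        # in world coordinates; project it into the image.
        uv = project_point_to_image(camera_entity, pos, width, height)
        keypoints.append({
            "name": f"link_{link_index}",
            "location_world": list(pos),
            "projected_location": list(uv) if uv is not None else None,
        })

    with open(out_path, "w") as f:
        json.dump({"keypoints": keypoints}, f, indent=2)

# Hypothetical per-frame loop: articulate, sync + export keypoints, render.
# for frame in range(100):
#     for j in range(p.getNumJoints(robot_id)):
#         p.resetJointState(robot_id, j, targetValue=random.uniform(-1.0, 1.0))
#     sync_and_export_keypoints(robot_id, camera_entity, 640, 480, f"{frame:06d}.json")
#     nvisii.render_to_file(width=640, height=480, samples_per_pixel=64,
#                           file_path=f"{frame:06d}.png")
```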

These steps should be somewhat easy to hack. Sorry for the vague directions; this assumes you have quite a bit of 3D knowledge, and if that is not the case, it might be a lot for someone to debug alone. So if you decide to start in that direction, I would be happy to answer any questions.