Open ArghyaChatterjee opened 1 year ago
I am so sorry, I do not have access to the data anymore. We were not really precise with our data back then.
On Thu, Apr 6, 2023 at 9:55 AM Arghya Chatterjee wrote:
Hi,
I am trying to replicate the actual results shown in the paper. For that, I believe you generated a dataset with different backgrounds, both photorealistic and non-photorealistic. Could I get access to the training data that you used?
Thanks, Arghya
Ah, ok. Thanks for letting me know. Also, how did you split the photorealistic (UE4-rendered) and non-photorealistic (NViSII-generated) data during training? Say, in a 100k dataset for a single object, how many images were photorealistic and how many were non-photorealistic?
Part of the dataset is available online: it is the FAT dataset.
From what I remember, it was 60k from FAT (selected randomly) and 60k from domain randomization, all rendered with UE4.
For the HOPE objects I used 60k images from the NViSII script in this repo, not the FAT dataset.
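The 60k + 60k mix described above could be assembled with a short script like the following sketch. The function name, directory layout, and arguments are hypothetical; only the equal-sized random split comes from the comment:

```python
import random

def build_training_list(photoreal_paths, dr_paths, n_per_source=60_000, seed=0):
    """Sketch: randomly sample an equal number of photorealistic (e.g. FAT)
    and domain-randomized image paths, then shuffle them together.
    The 60k/60k split follows the comment above; everything else is assumed."""
    rng = random.Random(seed)
    mix = rng.sample(photoreal_paths, n_per_source) + rng.sample(dr_paths, n_per_source)
    rng.shuffle(mix)
    return mix
```

The shuffled list could then feed any standard dataset/dataloader so batches contain both image types.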
Hey @TontonTremblay, and for the YCB objects? For the mustard bottle I'm using 75,000 BlenderProc images with mediocre results. Should combining them with FAT give better results?
What is the difference between the left and right versions of the images? They look exactly the same to me:
Thanks, Joan
For the YCB objects it was 60k from FAT (selected randomly) and 60k from domain randomization, all rendered with UE4 (NDDS).
FAT was also built for stereo cameras (two RGBs); the cameras are placed 8 cm apart with parallel optical axes. You can use only the left images, only the right images, or mix them.
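If you only want one view per stereo pair, filtering by filename should be enough. A minimal sketch; the `.left.jpg` / `.right.jpg` suffixes are my assumption about the FAT naming convention, so adjust the pattern to match the actual files:

```python
from pathlib import Path

def left_images(dataset_root):
    """Sketch: keep only the left-camera RGB images from a FAT-style layout.
    NOTE: the '*.left.jpg' suffix is an assumed naming convention."""
    return sorted(Path(dataset_root).rglob("*.left.jpg"))
```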