monajalal opened this issue 1 year ago
To answer the first part of your question: no, it does not matter how you got your .pth file (train or train2, NDDS or NViSII). All .pth files are compatible with everything.
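As a quick sanity check (my addition, not part of the reply above), a .pth checkpoint produced by either training script can be inspected the same way with torch.load; the path below is a placeholder:

```python
import torch

# placeholder path; substitute your own checkpoint from train.py or train2
ckpt = torch.load("path/to/weights.pth", map_location="cpu")
print(type(ckpt))          # usually a state_dict (dict of parameter tensors)
if isinstance(ckpt, dict):
    print(list(ckpt)[:5])  # first few parameter names
```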
On 09.12.2022 at 22:43, Mona Jalal wrote:
Do you also have an inference.py for dope/scripts/train.py, where we train an item on an NDDS-generated dataset?
I do see that you have an inference.py at dope/scripts/train2/inference.py, which is for NViSII.
When I initially fed my trained .pth weights to it, it gave me an error; could it be that it is designed for NViSII data?
It is rather weird, though: using the inference.py inside train2/, I do see the generated belief maps, which make quite a lot of sense; however, it doesn't produce the JSON files for the test images.
```
len(all_peaks[-1]): 2
nb_object: 0  all_peaks[-1][nb_object][2]: 0.00017739832
nb_object: 1  all_peaks[-1][nb_object][2]: 0.017452825
```
```
config_inference/config_pose.yaml
117:    thresh_points: 0.1

inference/detector.py
666:    if all_peaks[-1][nb_object][2] > config.thresh_points:
694:    if candidate[2] < config.thresh_points:
```
Basically, here all_peaks[-1][nb_object][2] is smaller than 0.1, which is thresh_points, so both peaks are discarded by those checks.
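To make the failure mode concrete, here is a minimal sketch (my own illustration, not the actual detector.py code) of the effect of that check on the two confidences printed above:

```python
# Minimal sketch of the thresh_points filter pointed at by the grep above.
thresh_points = 0.1  # value from config_inference/config_pose.yaml, line 117

# the two values printed as all_peaks[-1][nb_object][2] in the debug output
peak_confidences = [0.00017739832, 0.017452825]

kept = [c for c in peak_confidences if c > thresh_points]
print(f"{len(kept)} of {len(peak_confidences)} peaks pass thresh_points")
# prints: 0 of 2 peaks pass thresh_points -> no detections survive the filter
```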
So to sum up, the belief-map peaks never clear thresh_points, so no detections are kept and no JSON files are written for the test images.
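As a debugging experiment (my suggestion, not something stated in the thread), one could temporarily lower thresh_points in config_inference/config_pose.yaml and re-run inference to see whether those weak peaks then turn into detections and a JSON file; a sketch using PyYAML, with 0.01 as an arbitrary lower value:

```python
# Hedged debugging sketch: temporarily lower thresh_points, then re-run inference.
# File path and key name come from the grep above; 0.01 is an arbitrary choice.
# Note: yaml.safe_dump drops comments and may reorder keys, so editing the
# file by hand works just as well.
import yaml  # PyYAML

cfg_path = "config_inference/config_pose.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

print("current thresh_points:", cfg["thresh_points"])  # 0.1 in this report
cfg["thresh_points"] = 0.01  # low enough that the 0.017 peak would be kept

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```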