patsyuk03 opened 6 months ago
Wow, this is such a beautiful example of training on a symmetrical object. You can fix this by retraining after using the script to annotate the symmetries.
Thank you for the quick answer.
As I understand it, this is similar to your example of the hex screw object with rotational symmetry. However, because of the hexagon in the center, my object is not fully rotationally symmetric, although I can see a centerline across which it can be mirrored.
What would be the right way to define the symmetry in this case? Will the model be able to distinguish such small offsets of the hexagon corners, or is the only option to ignore them and define the object as rotationally symmetric?
There is an axis for each hexagon corner.
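For reference, an n-fold discrete rotational symmetry is just the set of n rotation matrices about the symmetry axis. The sketch below is mine, not the DOPE annotation script's format (the function name and matrix layout are illustrative assumptions); it only shows what a 6-fold symmetry set for the hexagon would look like numerically:

```python
import math

def z_rotation_symmetries(n_fold):
    """Rotation matrices (3x3 row-major lists) of an n-fold discrete
    rotational symmetry about the z axis: theta_k = 2*pi*k / n_fold.
    NOTE: illustrative helper, not part of the DOPE repo."""
    mats = []
    for k in range(n_fold):
        t = 2.0 * math.pi * k / n_fold
        c, s = math.cos(t), math.sin(t)
        mats.append([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
    return mats

# A hexagonal feature has 6-fold symmetry: identity plus rotations
# of 60, 120, 180, 240 and 300 degrees about the axis.
hex_sym = z_rotation_symmetries(6)
```

If the outer gear teeth and the inner hexagon repeat at different angles, only the rotations shared by both features are true symmetries of the whole object.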
Hello. I have generated data using this command:
python single_video_pybullet.py --nb_frames 10000 --scale 0.001 --path_single_obj ~/Deep_Object_Pose/scripts/nvisii_data_gen/models/Gear/google_16k/gear.obj --nb_distractors 0 --nb_object 10 --outf gear1/
And trained the model for 60 epochs on 9,800 generated images:
python -m torch.distributed.launch --nproc_per_node=1 train.py --network dope --epochs 60 --batchsize 2 --outf tmp_gear1/ --data ../nvisii_data_gen/output/gear1/
When I run inference on the remaining 200 generated images, the belief maps seem good, but no objects are detected.
Here is the inference config:
Is there something that I can do to fix this?
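A common culprit when belief maps look good but nothing is detected is a peak-extraction threshold set above the actual peak values in the maps. The sketch below is a simplified, hypothetical peak extractor (not DOPE's actual implementation; the function and parameter names are mine) that shows how a too-high threshold silently drops an otherwise clear keypoint:

```python
def extract_peaks(belief, thresh=0.1):
    """Return (row, col) positions of strict local maxima in a 2D
    belief map (list of lists) whose value exceeds `thresh`.
    NOTE: illustrative sketch, not DOPE's detection code."""
    h, w = len(belief), len(belief[0])
    peaks = []
    for y in range(h):
        for x in range(w):
            v = belief[y][x]
            if v <= thresh:
                continue  # below threshold: never considered
            neighbours = [belief[ny][nx]
                          for ny in range(max(0, y - 1), min(h, y + 2))
                          for nx in range(max(0, x - 1), min(w, x + 2))
                          if (ny, nx) != (y, x)]
            if all(v > n for n in neighbours):
                peaks.append((y, x))
    return peaks

# Synthetic 5x5 belief map with a single keypoint peaking at 0.4
belief = [[0.0] * 5 for _ in range(5)]
belief[2][2] = 0.4
print(extract_peaks(belief, thresh=0.1))  # [(2, 2)] -- peak found
print(extract_peaks(belief, thresh=0.5))  # [] -- threshold above the peak
```

So it may be worth checking the detection/point thresholds in your inference config and lowering them to see whether detections appear.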