Welcome to the world of pose estimation of symmetrical objects :P You can see that the training works; it's just that to the neural network any corner looks like any other corner, since the corners do not have IDs. @mintar has uploaded a script somewhere to deal with symmetrical objects; you should look through his answers in the issues to find it. Otherwise, what you did is correct.
In the nvisii rendering script (if you are using that one) you can play with the boundaries of the cone used to place the objects in the view, and with the scale of the object. https://github.com/NVlabs/Deep_Object_Pose/blob/master/scripts/nvisii_data_gen/single_video_pybullet.py#L427-L472
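The placement logic there boils down to sampling a random point inside a cone in front of the camera, so narrowing that cone (and/or adjusting the distance range) keeps objects closer to the image center. A minimal sketch of the idea, not the script's actual code; the function and parameter names here are mine:

```python
import numpy as np

def sample_position_in_cone(half_angle_deg=15.0, min_dist=0.5, max_dist=1.5):
    """Sample a point inside a cone pointing down the camera's -z axis.

    A smaller half_angle_deg keeps objects closer to the image center;
    min_dist/max_dist control how far from the camera they can land.
    """
    half_angle = np.deg2rad(half_angle_deg)
    # Uniform direction on the spherical cap of the cone
    theta = np.arccos(np.random.uniform(np.cos(half_angle), 1.0))
    phi = np.random.uniform(0.0, 2.0 * np.pi)
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          -np.cos(theta)])  # camera looks down -z
    dist = np.random.uniform(min_dist, max_dist)
    return direction * dist

# e.g. place each cube at sample_position_in_cone(half_angle_deg=10)
# instead of a wider cone, so fewer cubes land at the image borders
```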
Hello, I am trying to train this network to detect simple colored cubes for a school project. I used the generation script to generate 200 images with a large number of red cubes, using this model created online in Vectary, so it should be in mm units. Dataset link: dataset200. I trained on this dataset for 100 epochs and got pretty promising belief-map results already after 20 epochs.
When I ran the inference script on data from the training dataset (folder 000) with this command
python3 inference.py --showbelief --data ./000/ --outf ./inference_200_out
with no changes to the camera config and this config_pose, I got almost no cubes detected. Here are the inference script results (e.g. 1 cube detected in image 5, no cubes in the first 4 images). When I changed sigma to 0 in config_pose.yaml, I got some cubes detected, but not correctly: low_sigma_results. There are a few possible problems I thought of.

To rule out the first two points, I wanted to train on a single image with only 1 cube (I also tested 2 and 4 cubes). However, when I started the training, it took really long (20 epochs took around 4 minutes, so training for 2000 epochs would take 6+ hours), and there were no results in the belief maps even after 100 epochs. Is it normal to train this long on a single image, and do I have to wait this long to see any results? I am running this on a Google Cloud instance with an Nvidia Tesla T4 GPU.
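For reference, the detection-related fields I was adjusting in config_pose.yaml look roughly like this (the values shown are the repo defaults as far as I can tell, and the comments are my understanding of what each field does, so they may not be exact; sigma is the one I set to 0 in the second test):

```yaml
# config_pose.yaml (excerpt) -- detection parameters I experimented with
thresh_angle: 0.5    # max angle mismatch allowed when matching affinity fields to a centroid
thresh_map: 0.01     # min belief-map value for a peak to count as a keypoint
sigma: 3             # Gaussian smoothing applied to the belief maps before peak finding
thresh_points: 0.1   # min keypoint confidence used for the PnP pose fit
```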
Could you please give me some tips on how I should proceed with my training to get some results? Or do you see any obvious bugs? I thought a cube would be a pretty easy object to detect when using only generated data (which I am also using for testing at the moment).
I am also facing the problem that when there are too few cubes in the generation process, they are often out of the frame or at its borders. Is there a way to make them more centered, so that more whole cubes end up in the frame?
Thank you very much,
Angelika