Closed mahmoudEltaher closed 3 years ago
Hi @mahmoudEltaher,
Has the training converged? What are the values of the point-to-primitive and the primitive-to-point losses? Are you overfitting on a single sample, or are you training on the entire chair category? Do you mind sharing the input as well as your generated mesh? From the attached image it is a bit hard to figure out what the problem is.
Best, Despoina
Hi, thanks. I train on the entire chair category from the ShapeNet database, from the following link (shapenet.cs.stanford.edu/shapenet/obj-zip/03001627.zip). What do you mean by generated mesh? Kindly find the values for the loss.
Hi @mahmoudEltaher,
From the train statistics that you shared it seems that the pcl-to-primitive loss is higher than the primitive-to-pcl loss. Therefore, you should set the pointcloud-to-primitive loss weight to 0.8 and the primitive-to-pointcloud loss weight to 1.1. We have observed that for some object classes weighing these two terms unequally improves performance.
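The suggestion above amounts to combining the two directional terms as a weighted sum. A minimal sketch of that idea (an illustration with the values suggested above, not the repository's exact code):

```python
def weighted_reconstruction_loss(pcl_to_prim, prim_to_pcl,
                                 pcl_to_prim_weight=0.8,
                                 prim_to_pcl_weight=1.1):
    # Weighted sum of the two directional loss terms; unequal weights
    # let the (typically larger) pointcloud-to-primitive term be
    # down-weighted relative to the primitive-to-pointcloud term.
    return (pcl_to_prim_weight * pcl_to_prim
            + prim_to_pcl_weight * prim_to_pcl)

# e.g. pcl-to-prim = 0.05, prim-to-pcl = 0.02
loss = weighted_reconstruction_loss(0.05, 0.02)  # 0.8*0.05 + 1.1*0.02
```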
Furthermore, did you try testing the code without the --train_with_bernoulli argument? Does this work? If you omit the --train_with_bernoulli argument, you should be able to see results similar to the ones in our paper, only with overlapping primitives. That would be the first thing to try to make sure that the code is running properly.
If this works, you can next try enabling the --train_with_bernoulli argument together with the sparsity regularizer (https://github.com/paschalidoud/superquadric_parsing/blob/19e365f012fb34c5997d2d5c28a5c121228d8063/scripts/arguments.py#L138), and you should see results similar to the ones reported in the paper (just with some overlapping primitives).
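As a rough illustration of what a sparsity regularizer on the Bernoulli existence probabilities does (a sketch only, not the exact term defined in arguments.py): penalizing the expected number of active primitives pushes the probabilities of unneeded primitives toward zero.

```python
def sparsity_regularizer(probs, weight=1e-3):
    # Illustrative sketch only: penalize the expected number of active
    # primitives (the sum of the per-primitive Bernoulli existence
    # probabilities). If this term is always exactly zero during
    # training, the regularizer is likely not being applied at all.
    return weight * sum(probs)

penalty = sparsity_regularizer([1.0, 1.0, 0.5, 0.0], weight=0.1)
```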
Finally, finetuning a pre-trained model with --probs_only
should give you more parsimonious representations, as the ones reported in the paper.
When you run the forward_pass.py script, you can also set the --save_prediction_as_mesh flag to save the predicted mesh as a .ply file. You can send me the output file to visualize the predicted mesh, so that I can better figure out what the issue is. From your screenshot it is a bit tricky for me to guess where the issue is coming from :-)
Best, Despoina
Kindly find the result for training. I train the model by running the command: python scripts/train_network.py ../../../ShapeNetCore.v2/03001627/ tmp/ --use_deformations --use_sq --lr 1e-4 --n_primitives 20 --architecture tulsiani --dataset_type shapenet_v2 --use_chamfer --run_on_gpu --batch_size 16
Then I test the model: python scripts/forward_pass.py demo/03001627/ /tmp/ --model_tag "1a6f615e8b1b5ae4dbbc9440457e303e" --n_primitives 20 --weight_file config/model_149 --use_sq --dataset_type shapenet_v2 --use_deformations --save_prediction_as_mesh
I changed the weights: "pcl_to_prim_weight": args.get("pcl_to_prim_loss_weight", 0.8), "prim_to_pcl_weight": args.get("prim_to_pcl_loss_weight", 1.1)
I train for 150 epochs.
Kindly find the result for training (I enabled --train_with_bernoulli and --regularizer_type sparsity_regularizer): result_with_deformation_with_bernoulli_change_weights.zip
I train the model by running the command: python scripts/train_network.py ../../../ShapeNetCore.v2/03001627/ tmp/ --use_deformations --regularizer_type sparsity_regularizer --use_sq --lr 1e-4 --n_primitives 20 --architecture tulsiani --train_with_bernoulli --dataset_type shapenet_v2 --use_chamfer --run_on_gpu --batch_size 16
Then I test the model: python scripts/forward_pass.py demo/03001627/ tmp/ --model_tag "1a6f615e8b1b5ae4dbbc9440457e303e" --n_primitives 20 --weight_file config/model_149 --train_with_bernoulli --use_sq --dataset_type shapenet_v2 --use_deformations
I changed the weights: "pcl_to_prim_weight": args.get("pcl_to_prim_loss_weight", 0.8), "prim_to_pcl_weight": args.get("prim_to_pcl_loss_weight", 1.1)
I train for 150 epochs.
hi @paschalidoud
This is a kind reminder. I wonder why I cannot get the same results as the paper, even though I follow all the steps.
Hi again,
I checked the results that you posted on the 29th of March, and I think they are not very far from the ones reported in the paper. In general, since the method is unsupervised, there is some variance in the predicted primitives. Moreover, due to the Chamfer loss, the network can end up in local minima, which is probably also the case for your experiment. This is a common issue with the Chamfer loss and the main reason why, in follow-up works, we train our model with an occupancy loss. If you retrain the network a couple of times, do you consistently get such abstractions? Moreover, you are only sharing a single object reconstruction with me. How does it perform on the other test samples? If you run the forward pass on a model from the training set, how does it look?
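For context, here is a minimal NumPy sketch of the symmetric Chamfer distance being discussed (an illustration, not the repository's implementation). Its hard nearest-neighbour matching is what makes the loss non-convex and prone to the local minima mentioned above.

```python
import numpy as np

def chamfer_distance(a, b):
    # a: (N, 3), b: (M, 3) point sets.
    # Pairwise squared distances, then nearest-neighbour matching in
    # both directions; the hard "min" is what creates local minima.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(pts, pts))  # identical sets -> 0.0
```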
I am always happy to answer concrete questions, however it is not possible to run the code on your behalf. So please spend slightly more time when you open an issue and ask a question because I obviously cannot help you if you don't give me enough details.
Best, Despoina
Hi @paschalidoud
Now I can get a good result when I run the model without --train_with_bernoulli. But once I train with --train_with_bernoulli, the result is not good, and I also notice that the sparsity regularizer is always zero, so it has no effect on the training. What is the effect of this regularizer on the training? I wonder if I am missing something when I apply the model with --train_with_bernoulli.
Waiting for your kind advice.
I try to train the model by Running a command : python scripts/train_network.py ../../../03001627/ tmp/ --use_deformations --use_sq --lr 1e-4 --n_primitives 20 --architecture tulsiani --train_with_bernoulli --dataset_type shapenet_v2 --use_chamfer --run_on_gpu --batch_size 16
then I test the model python scripts/forward_pass.py demo/03001627/ /tmp/ --model_tag "1de49c5853d04e863c8d0fdfb1cc2535" --n_primitives 20 --weight_file config/model_149 --train_with_bernoulli --use_deformations --use_sq --dataset_type shapenet_v2
The result is:
0   0.9999999
1   0.9999993
2   0.9999989
3   0.99999845
4   4.6113662e-08
5   0.9999995
6   4.2335728e-08
7   4.6233605e-08
8   4.961429e-08
9   0.9999982
10  0.99999964
11  4.5955698e-08
12  4.1351942e-08
13  4.348484e-08
14  4.1547935e-08
15  4.900625e-08
16  0.9999994
17  0.9999989
Using 9 primitives out of 18
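The printed values are the per-primitive existence probabilities: nine are essentially 1 and nine are essentially 0. A minimal sketch (a hypothetical helper, not part of the repository) showing how thresholding them at 0.5 yields the reported count:

```python
def count_active_primitives(probs, threshold=0.5):
    # Keep only primitives whose Bernoulli existence probability
    # exceeds the threshold (hypothetical helper, not from the repo).
    return [i for i, p in enumerate(probs) if p > threshold]

# Probabilities from the forward pass above (rounded):
probs = [0.9999999, 0.9999993, 0.9999989, 0.99999845, 4.6e-08,
         0.9999995, 4.2e-08, 4.6e-08, 5.0e-08, 0.9999982,
         0.99999964, 4.6e-08, 4.1e-08, 4.3e-08, 4.2e-08,
         4.9e-08, 0.9999994, 0.9999989]
active = count_active_primitives(probs)
print(f"Using {len(active)} primitives out of {len(probs)}")
```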
and the resulting image is as follows: