**Open** · monajalal opened this issue 1 year ago
The SPAM is quite hard to train. I have had training sessions that did not produce anything. Are the weights we provided not working for you?
Thanks for your response. My goal is to do end-to-end training for the meat object, make sure everything works, and then move on to my single object of interest.
I chose this data since I wanted to be able to visualize the inference results.
When I chose the Shiny Meat .pth trained model, I was not able to visualize it using the nvdu pose visualizer, since that dataset was created with NViSii, not NDDS.
Would you please be able to share or open-source your pose visualizer code for the inference results on Shiny Meat or any other NViSii format dataset?
> Thanks for your response. My goal is to do an end-to-end training for meat object, making sure everything works, and then move on to my single object of interest.
I would try another, easier object; this one is not easy. I think I shared the NViSii data.
> Would you please be able to share or open-source your pose visualizer code for the inference results on Shiny Meat or any other NViSii format dataset?
I believe @mintar already mentioned how to do this. Right now I do not have plans to release updates to nvdu, nor to the data output by the nvisii code. It should be pretty easy to do, though, since all the information you need to create `_camera_settings` and `_object_settings` is in the JSON files the nvisii script outputs.
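Following that suggestion, a minimal sketch of such a converter. It assumes the NViSii per-frame JSON stores pinhole intrinsics under a `camera_data` key with `fx`, `fy`, `cx`, `cy` fields; the exact field names vary between exporter versions, so treat them as placeholders and check your own output:

```python
import json

def nvisii_to_ndds_camera_settings(nvisii_json, width, height):
    """Build an NDDS-style _camera_settings.json dict from NViSii output.

    `nvisii_json` is a parsed per-frame JSON dict; the `camera_data`
    field names below are assumptions -- verify against your exporter.
    """
    cam = nvisii_json["camera_data"]
    return {
        "camera_settings": [
            {
                "name": "Viewpoint",
                "intrinsic_settings": {
                    "resX": width,
                    "resY": height,
                    "fx": cam["fx"],
                    "fy": cam["fy"],
                    "cx": cam["cx"],
                    "cy": cam["cy"],
                },
                "captured_image_size": {"width": width, "height": height},
            }
        ]
    }

# Example: build the dict nvdu expects next to the frames
frame = {"camera_data": {"fx": 768.2, "fy": 768.2, "cx": 320.0, "cy": 240.0}}
settings = nvisii_to_ndds_camera_settings(frame, 640, 480)
# json.dump(settings, open("_camera_settings.json", "w"), indent=2)
```

`_object_settings.json` can be assembled the same way from the per-object entries in the same JSON files.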
Just going back to this: is there any way to reproduce some of the results of your original DOPE paper? For example, would training on the Cheez-It cracker box using scripts/train.py work? @TontonTremblay
Using the weights you should be able to reproduce the numbers, but I am sorry I do not have the training data anymore.
Out of curiosity, didn't you use the entire DOPE dataset for this? If I train on the entire FAT dataset for the cracker box, do you expect any meaningful results?
Yeah, it should work fine. The paper reports some results using FAT; I don't remember whether it was all of FAT or only the FAT scenes where the cracker box appears.
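If only the cracker-box scenes are wanted, a small sketch for filtering a FAT-style directory by checking each frame's annotation JSON for the object class. The `objects`/`class` field names follow the NDDS annotation layout; adjust them to your copy of the dataset:

```python
import json
from pathlib import Path

def frames_containing(root, class_name):
    """Yield annotation JSON paths whose "objects" list mentions class_name.

    Assumes NDDS-style per-frame files like 000000.left.json whose
    "objects" entries carry a "class" key.
    """
    for ann in sorted(Path(root).rglob("*.json")):
        if ann.name.startswith("_"):  # skip _camera_settings.json etc.
            continue
        try:
            data = json.loads(ann.read_text())
        except json.JSONDecodeError:
            continue
        objs = data.get("objects", [])
        if any(o.get("class", "").startswith(class_name) for o in objs):
            yield ann
```

The matching frame images share the annotation's stem, so the resulting list can be turned into a training subset directly.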
My intention is to show a demo of full training from scratch and then running inference for a single meat object. I trained the single meat object from scratch using dope/scripts/train.py (with a modification to account for DistributedDataParallel; code below).
I used the single meat object (6,000 images) from the FAT dataset from your DOPE paper. I ran inference on one of the training images (you would expect overfitting and good results, since the network has already seen it); however, the heatmap is very bad.
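One way to sanity-check belief maps like the ones attached below is to read off the peak of each channel and scale it to image coordinates. This is a minimal NumPy sketch, not DOPE's actual decoder (which also does local-maximum search, thresholding, and PnP):

```python
import numpy as np

def belief_peaks(belief, image_w, image_h):
    """Return one (x, y, score) per belief-map channel.

    `belief` is a (C, h, w) array, e.g. DOPE's 9 channels (8 cuboid
    corners + centroid) at network output resolution. Peaks are mapped
    to image coordinates with a plain argmax; DOPE's real decoder
    refines peaks and rejects weak ones.
    """
    c, h, w = belief.shape
    peaks = []
    for ch in range(c):
        idx = int(np.argmax(belief[ch]))
        py, px = divmod(idx, w)
        peaks.append((px * image_w / w, py * image_h / h,
                      float(belief[ch, py, px])))
    return peaks
```

If the peak scores on a training image are low or the peaks land far from the object, that points at the training setup rather than the decoding step.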
I used both the original camera_info.yaml and config_pose in
~/research/dope/scripts/train2/config_inference
as well as the ones I modified. Could you please let me know how I can use your dataset or .pth to get a working inference on the meat object in NDDS format, or provide the visualization code for the HOPE dataset, which is in NViSii format, for meat?
Here's the command I used for DDP training:
The training data is:
Here's the config pose for inference:
Here's the camera info used for inference:
Here's the original train.py in dope/scripts, modified to account for DDP (I am not 100% sure about its correctness):

```
python3 train.py --data path/to/FAT --object soup --outf soup --gpuids 0 1 2 3 4 5 6 7
```
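For reference, the DDP change mentioned above usually comes down to the pattern below. This is a generic PyTorch sketch under the assumption that train.py builds a `net` and a `DataLoader`; it is not the poster's actual diff. The `parse_gpu_ids` helper is a hypothetical stand-in for how the `--gpuids` flag from the command above might be parsed:

```python
def parse_gpu_ids(argv):
    """Collect the integer ids following --gpuids, e.g.
    ['--gpuids', '0', '1'] -> [0, 1]."""
    ids, grab = [], False
    for tok in argv:
        if tok == "--gpuids":
            grab = True
        elif grab and not tok.startswith("--") and tok.isdigit():
            ids.append(int(tok))
        elif grab:
            break
    return ids

def setup_ddp(net, local_rank):
    """Wrap a model for DistributedDataParallel (torch is imported
    lazily so the parsing helper stays usable without a GPU stack)."""
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group("nccl")      # one process per GPU
    torch.cuda.set_device(local_rank)
    net = net.cuda(local_rank)
    # The DataLoader must also use a DistributedSampler so each rank
    # sees a distinct shard of the dataset.
    return DDP(net, device_ids=[local_rank])
```

A common source of bad heatmaps with DDP is forgetting the `DistributedSampler` (every rank then trains on the full dataset) or averaging the loss incorrectly across ranks, so those are worth double-checking in the modified train.py.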
left belief map:
right belief map:
original left image:
original right image:
Thanks a lot for any input.