DLR-RM / AugmentedAutoencoder

Official Code: Implicit 3D Orientation Learning for 6D Object Detection from RGB Images
MIT License

test on a custom model is bad #34

Closed flowtcw closed 5 years ago

flowtcw commented 5 years ago

I think I have generated the right training images (screenshot: sample batch, 23 06 2019).

But when I test the model, the predictions are very bad:

python aae_image.py exp_group/my_autoencoder -f /home/leviathan/PycharmProjects/test_img/size743/

(screenshots: resized img, pred_view, 23 06 2019)

The original CAD model has no vertex colors, and when I changed the model type from 'reconst' to 'cad' I got a Segmentation Fault (core dumped), just like #10 and #9. So I added vertex colors of 255 and then the code ran.

Here is my PLY file, in both the uncolored and colored versions: ply.zip

I don't know how to solve this, please help me. Thank you very much!
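For reference, adding a constant vertex color to a PLY file can be sketched like this. This is a minimal illustrative ASCII PLY writer, not the repo's own tooling, and the tiny triangle mesh is a stand-in for a real CAD model:

```python
# Sketch: write a tiny ASCII PLY whose vertices all carry a constant
# color (255, 255, 255), mimicking the "add vertex colors of 255" fix.
def write_colored_ply(path, vertices, faces, color=(255, 255, 255)):
    r, g, b = color
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(vertices))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("element face %d\n" % len(faces))
        f.write("property list uchar int vertex_indices\n")
        f.write("end_header\n")
        for x, y, z in vertices:
            f.write("%f %f %f %d %d %d\n" % (x, y, z, r, g, b))
        for face in faces:
            f.write("%d %s\n" % (len(face), " ".join(str(i) for i in face)))

# Example: a single white triangle
write_colored_ply("triangle.ply",
                  [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                  [(0, 1, 2)])
```

For real models, a mesh library (e.g. MeshLab or the `plyfile` package) is the more practical route; the point is only that every vertex needs red/green/blue properties for the phong renderer.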

KPomagier commented 5 years ago

@flowtcw I am trying to set up my environment for this model, but I have problems with THCudaCheck etc. Could you tell me how you successfully set it up? A Dockerfile would be even better :)

flowtcw commented 5 years ago

> @flowtcw I am trying to set up my environment for this model, but I have problems with THCudaCheck etc. Could you tell me how you successfully set it up? A Dockerfile would be even better :)

My CUDA version is 9.0; you should check yours. I found that with CUDA 10 the code may have bugs. Tell me what other errors you run into and I will try my best to help you.
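A small sketch for checking the CUDA toolkit version programmatically. The parsing helper and the hard-coded sample string are illustrative only (the sample lets it run even on a machine without `nvcc` on PATH):

```python
import re

def cuda_version_from_nvcc(text):
    """Parse the release number out of `nvcc --version` output."""
    m = re.search(r"release (\d+)\.(\d+)", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

# Example output line from a CUDA 9.0 install:
sample = "Cuda compilation tools, release 9.0, V9.0.176"
version = cuda_version_from_nvcc(sample)
print(version)  # (9, 0)
if version and version[0] >= 10:
    print("Warning: this code base is reported to misbehave under CUDA 10.")
```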

KPomagier commented 5 years ago

Thanks for the information about CUDA. I will switch to a graphics card that supports CUDA 9. Right now the biggest issue is that I tried to train the model in Docker but got the same problem as here: https://github.com/DLR-RM/AugmentedAutoencoder/issues/35

```
Traceback (most recent call last):                          | 0 / 20000 ETA: --:--:--
  File "/usr/local/bin/ae_train", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/auto_pose/ae/ae_train.py", line 90, in main
    dataset.get_training_images(dataset_path, args)
  File "/usr/local/lib/python2.7/dist-packages/auto_pose/ae/dataset.py", line 93, in get_training_images
    self.render_training_images()
  File "/usr/local/lib/python2.7/dist-packages/auto_pose/ae/dataset.py", line 245, in render_training_images
    bgr_x, depth_x = self.renderer.render(
  File "/usr/local/lib/python2.7/dist-packages/auto_pose/ae/utils.py", line 15, in decorator
    setattr(self, attribute, function(self))
  File "/usr/local/lib/python2.7/dist-packages/auto_pose/ae/dataset.py", line 75, in renderer
    float(self._kw['vertex_scale'])
  File "/usr/local/lib/python2.7/dist-packages/auto_pose/meshrenderer/meshrenderer_phong.py", line 18, in __init__
    self._context = gu.OffscreenContext()
  File "/usr/local/lib/python2.7/dist-packages/auto_pose/meshrenderer/gl_utils/glfw_offscreen_context.py", line 12, in __init__
    assert glfw.Init(), 'Glfw Init failed!'
AssertionError: Glfw Init failed
```

Do you know if it is possible to train this model in Docker, and how to deal with that OpenGL issue? The Docker container runs on another computer (a server) that I connect to via terminal.
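The `Glfw Init failed` assertion usually means there is no display the GLFW context can attach to, which is the normal state on a remote server or a plain Docker container. A quick sanity check (a sketch; `has_display` is a hypothetical helper, not part of the repo):

```python
import os

def has_display():
    """GLFW needs a reachable X/Wayland display; headless servers and
    plain Docker containers typically have neither variable set."""
    return bool(os.environ.get("DISPLAY") or os.environ.get("WAYLAND_DISPLAY"))

if not has_display():
    print("No display found: glfw.Init() will likely fail; "
          "use an EGL context or a virtual display such as xvfb-run.")
```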

MartinSmeyer commented 5 years ago

I have not tried it in Docker. Concerning your model predictions: you need to feed in images that frame the object much more closely, or train a detector. Insert an image that is similar to the predicted renderings. The top part also looks quite different without texture, so assigning vertex colors would help.
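As a sketch of what "frame the object much more closely" means in practice: given a 2D bounding box (hypothetical here, e.g. produced by a detector), a padded square crop with NumPy could look like this. The helper and its parameters are illustrative, not the repo's own cropping code:

```python
import numpy as np

def square_crop(img, bbox, pad=0.2):
    """Crop a padded square region around bbox = (x, y, w, h),
    roughly what a 2D detector would feed into the autoencoder."""
    x, y, w, h = bbox
    size = int(max(w, h) * (1 + pad))          # square side with padding
    cx, cy = x + w // 2, y + h // 2            # box center
    x0 = max(cx - size // 2, 0)
    y0 = max(cy - size // 2, 0)
    return img[y0:y0 + size, x0:x0 + size]

# Example: crop a 100x80 detection out of a 640x480 frame
img = np.zeros((480, 640, 3), dtype=np.uint8)
crop = square_crop(img, (300, 200, 100, 80))
print(crop.shape)  # (120, 120, 3)
```

The resulting patch would then be resized to the encoder's input resolution before inference.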

MartinSmeyer commented 5 years ago

Update @KPomagier: Whether or not headless rendering works depends on the OpenGL context. The previously used GLFW still does not support headless rendering. EGL does, but it does not work out of the box. However, @wangg12 pointed out that with a small change to PyOpenGL we can make EGL contexts work. The code is now updated, and you can train without a connected display by using the EGL context. It should also make it possible to run in a Docker image.

MartinSmeyer commented 5 years ago

Before running ae_train, do:

export PYOPENGL_PLATFORM='egl'
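Equivalently, from inside a Python script the variable must be set before anything imports PyOpenGL, since PyOpenGL picks its platform (GLX vs. EGL) at import time. A minimal sketch:

```python
import os

# Must run before the first `import OpenGL` anywhere in the process;
# otherwise PyOpenGL has already selected the default (GLX) platform.
os.environ["PYOPENGL_PLATFORM"] = "egl"

# ...only now import modules that pull in PyOpenGL, e.g.
# from auto_pose.ae import ae_train
print(os.environ["PYOPENGL_PLATFORM"])  # egl
```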