changhaonan / A3VLM

Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model`

Code Big Error!!! #2

Closed: 3202336152 closed this issue 21 hours ago

3202336152 commented 3 days ago

The code in the render_robot_pyrender.py file may have some minor problems, but it can be executed normally. partnet_label.py, however, has many errors. First, the import `from handal_label import farthest_point_sample` refers to a package that does not exist on PyPI at all, and you did not provide a concrete implementation, although related code can be found online. Second, the data generated by render_robot_pyrender.py does not include the annotations_3d.json file, which suggests this script also has problems. I hope the code you provide can be more rigorous.
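For anyone hitting the same missing import: a minimal NumPy sketch of farthest point sampling that could stand in for the absent `handal_label` helper (the signature is an assumption based on common implementations, not the repo's actual code):

```python
import numpy as np

def farthest_point_sample(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Iteratively pick the point farthest from the points chosen so far.

    points: (N, 3) array; returns the indices of the n_samples chosen points.
    Assumes n_samples <= N.
    """
    n = points.shape[0]
    indices = np.zeros(n_samples, dtype=np.int64)
    # Squared distance from every point to its nearest selected point so far.
    dist = np.full(n, np.inf)
    farthest = 0  # start from an arbitrary (here: the first) point
    for i in range(n_samples):
        indices[i] = farthest
        diff = points - points[farthest]
        dist = np.minimum(dist, np.einsum("ij,ij->i", diff, diff))
        farthest = int(np.argmax(dist))
    return indices
```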

changhaonan commented 3 days ago

I fully understand it can be frustrating to spend a lot of time only to find the code not working. This repo is still at an early open-source stage, which means there may be bugs and missing documentation. The missing annotations_3d.json is because I omitted one step in the documentation; I have added it to the README. I just fully tested all three steps of the generation pipeline on my own server. You can also perform a quick sanity check using the parameters mentioned in the README. This repo is being actively maintained, so feel free to ask more questions if any come up.

3202336152 commented 1 day ago

Thanks for your reply. I corrected a few minor errors; I am not sure whether my fixes are right, but you can refer to them.

  1. On line 601 of point_render.py, set export_ply=True so that the pointclouds and npy_8192 files are written. Otherwise, the Generate labels step reports an error because the pointclouds folder it reads is empty.
  2. Lines 492-495 of partnet_label.py read:

```python
if "sd" in pcd_folder:
    depth_folder = pcd_folder.replace("pointclouds_sd", "real_depth_images")
else:
    depth_folder = pcd_folder.replace("pointclouds", "real_depth_images")
```

     Only line 495 (the else branch) should be kept, because the folder name does not have the _sd suffix (see the path example below).
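A quick illustration of why the substring test can misfire: "sd" can occur anywhere in the path, not just in a _sd suffix (the path below is hypothetical):

```python
pcd_folder = "/mnt/sdb1/A3VLM/pointclouds"  # hypothetical path containing "sd"

# Matches because of "sdb1", so the "pointclouds_sd" replacement runs, changes
# nothing, and depth_folder ends up pointing back at the point-cloud folder.
print("sd" in pcd_folder)  # True

# Keeping only the unconditional replacement avoids the mismatch:
depth_folder = pcd_folder.replace("pointclouds", "real_depth_images")
print(depth_folder)  # /mnt/sdb1/A3VLM/real_depth_images
```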

After completing the fixes above, errors appear when executing the Generate labels step. I don't know whether this is normal:

```
  0%|          | 2/2031 [01:28<24:22:37, 43.25s/it]ERROR:__main__:Error on 103971 processing failed with exception: 'str' object has no attribute 'get'
  5%|▌         | 106/2031 [13:06<24:55:50, 46.62s/it]ERROR:__main__:Timeout: 13086 processing exceeded time limit.
  5%|▌         | 107/2031 [14:06<26:44:01, 50.02s/it]ERROR:__main__:Timeout: 7619 processing exceeded time limit.
```

Looking forward to your reply.
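For context on the Timeout lines in the log above: they typically come from a per-object time limit wrapped around the labeling worker. A hedged sketch of that pattern (the worker name and the 60-second limit are assumptions, not the repo's actual code):

```python
import logging
from multiprocessing import Pool, TimeoutError

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger(__name__)

def process_object(obj_id: str) -> None:
    ...  # hypothetical per-object labeling work

if __name__ == "__main__":
    with Pool(processes=1) as pool:
        async_result = pool.apply_async(process_object, ("103971",))
        try:
            async_result.get(timeout=60)  # the time limit is an assumption
        except TimeoutError:
            logger.error("Timeout: 103971 processing exceeded time limit.")
```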

changhaonan commented 1 day ago

There are some historical issues in this version of the code, so I recommend using the current default parameter combination, which is what I have tested. export_ply is set to False because we found point-based MLLMs to be very weak. The option exists because we explored that direction, but we deprecated the support in the final version. You can set it to True if you want to use the point cloud, but it takes longer to generate the data.

sd is short for Stable Diffusion. You can use ControlNet-SD1.5 to augment the generated images; that is when this branch becomes useful. We provide the code for the Stable Diffusion augmentation, but it is optional.
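For orientation, a minimal sketch of what depth-conditioned ControlNet-SD1.5 augmentation can look like with the diffusers library (the model IDs, file paths, and prompt are assumptions; the repo ships its own augmentation script):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Depth-conditioned ControlNet on top of SD 1.5; these model IDs are common
# choices, not necessarily what the repo's script uses.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Hypothetical input path; condition the generation on a rendered depth map.
depth_map = Image.open("real_depth_images/000000.png").convert("RGB")
augmented = pipe(
    "a photo of an articulated cabinet in a room",  # hypothetical prompt
    image=depth_map,
    num_inference_steps=20,
).images[0]
augmented.save("augmented_000000.png")
```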

Actually, I am not sure whether that is normal. Can you zip the output folder of 103971 so I can check it?

3202336152 commented 1 day ago

If export_ply=False, then running the partnet_label.py code results in an error, because the pointclouds folder and the npy_8192 folder are empty:

```python
pcd_file = os.path.join(pcd_folder, f"{image_idx:06d}.ply")
npy_folder = pcd_folder.replace("pointclouds", "npy_8192")
```

changhaonan commented 1 day ago

It should be able to get around that, since I did test everything on my own server. If you look at the following code, when the pcd_file doesn't exist, it generates an empty point cloud: `pcd = np.zeros((8192, 3), dtype=np.float32)`. As I said, the pcd is not really used in A3VLM, so it doesn't matter.
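Putting the two comments above together, the loading path with its fallback might look roughly like this (the helper name and the .npy filename pattern are assumptions; only the zero-fill fallback is quoted from the repo):

```python
import os
import numpy as np

def load_pcd(pcd_folder: str, image_idx: int) -> np.ndarray:
    """Hypothetical helper illustrating the fallback described above."""
    npy_folder = pcd_folder.replace("pointclouds", "npy_8192")
    npy_file = os.path.join(npy_folder, f"{image_idx:06d}.npy")  # pattern assumed
    if os.path.exists(npy_file):
        # Pre-sampled 8192-point cloud, written only when export_ply=True.
        return np.load(npy_file)
    # Fall back to an empty cloud; pcd is not actually consumed by A3VLM.
    return np.zeros((8192, 3), dtype=np.float32)
```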

3202336152 commented 1 day ago

Thank you for your reply. I think I now know all the solutions for the data generation. If I have other questions, I hope I can continue to ask you.