GAP-LAB-CUHK-SZ / InstPIFu

Repository for "Towards High-Fidelity Single-view Holistic Reconstruction of Indoor Scenes", ECCV 2022

How to get the prepared data #13

Open Anonymous789s opened 1 year ago

Anonymous789s commented 1 year ago

Hi! I have great regard for your excellent work! Recently, I've been trying to get test results with your code on 3D-FUTURE. I noticed that you also tested on the 3D-FUTURE dataset in your paper, so I wonder how to get processed data like that in the prepare_data you offered. Did you also contact the 3D-FRONT team to render the 3D-FUTURE dataset? If I try to contact them, what else should I prepare in advance, apart from the dataset? Thanks a lot!!!

HaolinLiu97 commented 1 year ago

Hi, the 3D-FUTURE dataset is actually the set of CAD models used in the 3D-FRONT dataset. The color images in the prepared data were rendered by the 3D-FRONT team; they have a physics-based rendering engine that produces high-quality images. The other prepared data are more relevant to object detection and can be produced with an approach similar to TOTAL3D and IM3D. But I will release my own preprocessing scripts in a few days, since many people are interested in them.

HaolinLiu97 commented 1 year ago

P.S. The 3D-FUTURE dataset can be downloaded from their website.

Anonymous789s commented 1 year ago

Thanks a lot for your reply and for sharing!!!

Anonymous789s commented 1 year ago

Sorry to bother you again. I've noticed that the 3D-FUTURE dataset already includes rendered images (PNGs like this: https://kdocs.cn/l/cjLhUfzlPRjG) and their id maps (https://kdocs.cn/l/chO88VKz1nP3, where different objects in a scene are colored differently), but I'm stuck generating the prepared data in your format. If possible, could you please explain the detailed steps to get files like those you offered in your project? For example, what do the "pkl" files (in the prepare_data folder) contain? Did you obtain them from the rendered images? And how do you get the "inside_points.obj" and "outside_points.obj" in the occ folder, and how do you get the mask? I'm trying hard to build similar prepared data from the 3D-FUTURE dataset, and I wonder whether I can get data in your format directly via an object-detection network, or whether I should write format-conversion code myself. Thanks a lot!!!
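In case it helps others reading this thread: one plausible way to derive per-object binary masks from such an id map is to split it by color. This is only a sketch, and it assumes each object in the id map is painted with a single unique RGB value (an assumption about the 3D-FUTURE id maps, not something confirmed in this thread); the function name is hypothetical.

```python
import numpy as np

def masks_from_idmap(idmap: np.ndarray) -> dict:
    """Split an id map (H x W x 3, one unique color per object) into
    per-object binary masks, keyed by RGB tuple. Assumes exact colors
    with no anti-aliasing at object boundaries."""
    colors = np.unique(idmap.reshape(-1, 3), axis=0)
    masks = {}
    for color in colors:
        # pixels matching this object's color in all three channels
        mask = np.all(idmap == color, axis=-1)
        masks[tuple(int(c) for c in color)] = mask
    return masks
```

The background would show up as one more "object" (typically the all-zero color) and would need to be filtered out separately.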

HaolinLiu97 commented 1 year ago

I am currently preparing the code, and it should be released soon. However, my code will generate the prepared data without the images and depth (since the original images are very large), but you can cross-reference the prepared data that I already released.

Anonymous789s commented 1 year ago

OK! Thanks a lot for your help!

Anonymous789s commented 1 year ago

Thanks for releasing the data-preparation scripts!🌹 And sorry to bother you again, but I'm a little confused by the script 'preprocess_layout.py'. In this script, "3dfront 2d data" (which includes desc.json and depth.png) is used as the 'data_root' parameter, but I didn't find those files in the 3D-FRONT dataset. Did you get them from the 3D-FRONT team, or did you process the raw 3D-FRONT dataset to get them? Thanks!!!

HaolinLiu97 commented 1 year ago

@Anonymous789s Hi, the 3d-front-object.zip contains only the desc.json files. The original high-resolution data, prepared by the 3D-FRONT team, is very large, up to several hundred GB. However, the prepared data we already released contains depth images at a decent resolution. Therefore, you can extract the depth images from it and put each one under the same directory as the corresponding desc.json, matched by image id; this forms the 3dfront_2d_data required by preprocess_layout.py. Later, I will consider slightly reducing the image resolution and releasing the original dataset.
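A rough sketch of that copying step follows. The directory layout (one folder per image id, containing depth.png on one side and desc.json on the other) and the function name are assumptions for illustration, not the repository's confirmed structure.

```python
import shutil
from pathlib import Path

def assemble_2d_data(prepared_dir: Path, desc_dir: Path) -> int:
    """Copy each depth.png from the released prepared data into the
    folder holding the matching desc.json, matched by image id.
    Returns the number of depth images copied."""
    copied = 0
    for depth_path in prepared_dir.glob("*/depth.png"):
        image_id = depth_path.parent.name
        desc_path = desc_dir / image_id / "desc.json"
        if not desc_path.exists():
            continue  # no matching desc.json for this image id
        shutil.copyfile(depth_path, desc_path.parent / "depth.png")
        copied += 1
    return copied
```

After running this, each per-image folder under desc_dir would hold both desc.json and depth.png, which is the shape 'preprocess_layout.py' reportedly expects for its data_root.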

Anonymous789s commented 1 year ago

Thanks a lot for your patient reply!🌹 But I find it really hard to generate the same-format data (especially the pkl files in the 'prepare_data' folder) from the raw 3D-FUTURE dataset step by step, because I lack much of the relevant data about the 3D-FUTURE dataset.😭 So I wonder whether the testing command is the same for different datasets. And if it is, would you consider uploading the testing data later for other datasets like 3D-FUTURE or Pix3D? (I just want to test on different datasets with the pretrained weights to get rough results, without training.) Thanks again!!!

HaolinLiu97 commented 1 year ago

@Anonymous789s I will consider releasing the 3D-FRONT images in a few days, which correspond to the 3dfront_2d_data, so that it will be easier to generate the processed data. For the Pix3D dataset, the preprocessing is slightly different, and the training code is also slightly different. I will release them ASAP, since I am currently focusing on ICCV.

Anonymous789s commented 1 year ago

Okay, thank you very much!!! P.S. It seems that the link for '3d-front-object.zip' in the README.md is the same as the one for '3d-front-layout.zip', and '3d-front-object.zip' is not in the OneDrive shared folder. You can check this when releasing the 3D-FRONT images later, in case you forgot to upload it.🌹

sirine90 commented 1 year ago

Thank you for releasing the data-preparation code. It would be very helpful if you could provide a link for '3d-front-object.zip'.

Thanks!

ZackChenUuu commented 1 year ago

@UncleMEDM Hi, nice work! Does the desc.json include the information for the decent-resolution images? Will you consider releasing scripts for generating desc.json and the depth images from the raw images? Thx!