Closed: mahsa1363 closed this issue 3 years ago.
Hi, thanks for using our code. The point clouds, as well as the extracted 3D features, take up too much space, so it is difficult for us to share them on Google Drive; instead, we provide the extraction scripts so that users can generate them themselves. Is it possible for you to use an online GPU service such as Google Cloud Platform? We are also working on a lite version of DJ-RN that requires fewer resources, but it may take a while to get it working.
Hi, thank you very much for your code. I have already used Google Colab online, and it was very slow. Of course, I did not use it for this purpose, but I will definitely try it. Would it be possible for you to put some of these point clouds on Google Drive, for example one tenth of them?
With your help, I was able to move forward to step 4 of the 'Data Generation' phase. For me, step 4 of 'Data Generation' (Run SMPLify-X on the dataset with the filtered pose) is very time consuming. If I had the .pkl files and mesh files of the SMPLify-X results, I could perform step 5 (Assign the SMPLify-X results to the training and testing data) myself. The meshes and .pkl files are smaller in size. Is it possible for you to put these on Google Drive?
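In case it helps anyone following this thread, here is a minimal sketch (not from the DJ-RN repo) of how one might inspect a SMPLify-X result .pkl before moving on to step 5. The path and the parameter names (betas, global_orient, body_pose) are assumptions based on typical SMPLify-X output, so adjust them to the files you actually have.

```python
import pickle

result_path = 'results/HICO_train2015_00000001/000.pkl'  # hypothetical path
with open(result_path, 'rb') as f:
    params = pickle.load(f, encoding='latin1')

# Print each parameter name and its array shape (or type) to verify the fit.
for key, value in params.items():
    shape = getattr(value, 'shape', None)
    print(key, shape if shape is not None else type(value))
```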
Sorry for the late reply! The required files take about 100 GB of space on our local server, which is still too large for us to upload to Google Drive.
Sorry to bother you. Do even the .pkl and mesh files take up a lot of space? For me, the .pkl and mesh files for each image take up about 1 MB, which would be about 27 GB for the 27,000 training images. Would it be possible to send a smaller subset of them? If possible, send only the mesh and .pkl files, which are smaller.
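For reference, here is a rough sketch (not part of the repo) of how one could estimate the total size of the .pkl and mesh files before deciding what to upload; the directory name and the .obj extension for meshes are assumptions.

```python
import os

root = 'smplifyx_results'  # hypothetical results directory
total_bytes = 0
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        if name.endswith(('.pkl', '.obj')):
            total_bytes += os.path.getsize(os.path.join(dirpath, name))

print('Estimated upload size: %.1f GB' % (total_bytes / 1024 ** 3))
```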
We did not include the image files in that calculation. Since different images contain different numbers of samples, many images actually take more than 1 MB, so the total is still too large for us to upload to Google Drive.
You are absolutely right. In any case, I do not have 100 GB of free space on my computer to run it myself. Excuse me, I have a question: for the "3D Human-Object Interaction Volume Generation and Visualization" step, do I only need the files vertex_path_GT.txt, vertex_path_Neg.txt, and vertex_path_Test.txt? That is, are all of the above Data Generation steps done just to produce files that contain all the point clouds of the images, which are then used in the "3D Human-Object Interaction Volume Generation and Visualization" step? If so, would it be possible to send me vertex_path_GT.txt, vertex_path_Neg.txt, and vertex_path_Test.txt? If it is possible for you, please help me.
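If it helps, here is a minimal sketch of how one might read such a file, assuming each vertex_path_*.txt is simply a plain-text list of paths to per-sample point clouds stored as .npy files; both of these assumptions are mine and are not confirmed by the repository.

```python
import numpy as np

# Assumed format: one path per line pointing to a saved point cloud (.npy).
with open('vertex_path_GT.txt') as f:
    vertex_paths = [line.strip() for line in f if line.strip()]

print('Number of samples listed:', len(vertex_paths))
if vertex_paths:
    points = np.load(vertex_paths[0])  # load the first point cloud as a check
    print('First point cloud shape:', points.shape)
```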
@mahsa1363 can you please share vertex_path_GT.txt with me?
@Foruck how can we exactly reproduce the results in your CVPR paper without having access to the vertex_path_GT.txt, vertex_path_Neg.txt, and vertex_path_Test.txt files?
Thanks for your nice code for Detailed 2D-3D Joint Representation for Human-Object Interaction. I do not have a GPU, I cannot generate the point clouds on my PC, and it is not possible for me to run step 4 of the "Data Generation" phase. My resources are limited, and I need the point clouds of humans and objects. Were you able to generate the point clouds with limited space?