Closed vineethbabu closed 5 years ago
Same doubt here. Please, can you briefly describe how to create frame_data.pkl?
You will need to understand and connect the projects yourself.
a) If you need texture coordinates, write vt and ft. b) Please check the sample data. You will need to write the pickle file yourself.
Write vt and ft from where? If you open source, then please be prepared to answer genuine queries. The same question has been asked by at least 2-3 people.
Open sourcing my work does not imply an obligation to explain it, especially not answering questions that can be solved with a Google search, by reading the paper, or by looking into the code. On the contrary, I am under no obligation to open source my code.
vt and ft are provided in the assets folder.
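For anyone stuck on this step: below is a minimal sketch of how vt (texture coordinates) and ft (per-face texture indices) could be appended to an OBJ that currently contains only v and f lines. The helper name `append_texture_to_obj` is illustrative, not part of the repo; it assumes vt is a list of (u, v) pairs and ft a list of 0-based index triples, e.g. as loaded from the assets folder.

```python
def append_texture_to_obj(obj_path, vt, ft):
    """Illustrative helper (not from the repo): add texture data to an OBJ.

    vt: sequence of (u, v) texture coordinates.
    ft: sequence of 0-based texture-index triples, one per face.
    Rewrites 'f a b c' lines as 'f a/ta b/tb c/tc' and inserts 'vt' lines
    before the first face.
    """
    with open(obj_path) as f:
        lines = f.readlines()

    out = []
    face_idx = 0
    for line in lines:
        if line.startswith('f '):
            verts = line.split()[1:4]
            # OBJ indices are 1-based, so shift the 0-based ft entries
            tex = [int(i) + 1 for i in ft[face_idx]]
            out.append('f {}/{} {}/{} {}/{}\n'.format(
                verts[0], tex[0], verts[1], tex[1], verts[2], tex[2]))
            face_idx += 1
        else:
            out.append(line)

    # Insert all 'vt' lines just before the first face line
    vt_lines = ['vt {:f} {:f}\n'.format(u, v) for u, v in vt]
    first_face = next(i for i, l in enumerate(out) if l.startswith('f '))
    out[first_face:first_face] = vt_lines

    with open(obj_path, 'w') as f:
        f.writelines(out)
```

This is only a sketch of the OBJ conventions (v/vt face syntax); check the sample data in the assets folder for the exact layout the texture code expects.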
Your responses generally range from rude to not useful. If you are under no obligation, then why do it? Especially when the open dev community can't make any meaningful use of your work? If the solution was a simple Google search, why do you think so many people have this issue? And if you know the answer, why can't you at least guide people to the solution rather than give vague responses?
Look through the questions I get asked. 90% can be answered by one of the three ways I described.
Being a good member of the open source community does also mean to debug yourself, contribute to the project, and ask specific questions. Think about it.
Hello again,
I have generated the frame_data.pkl by using the vertices that octopus.py returns from the function predict(self, segmentations, joints_2d). The result is not correct:
and does not match with the sample:
Please, can you tell me what I am doing wrong? The vertices I am using do not match the ones shown in the sample, and I am wondering if any offset must be added to them...
Thank you,
Optimize longer. Your mesh is not well aligned with the images.
Hello,
thank you for your answer. I used the default optimization steps defined in Octopus:
opt_steps_pose=5 opt_steps_shape=15
Please, can you tell me which values did you use to obtain the vertices used in the sample to generate frame_data.pkl?
Just to replicate the results and see what I have to consider for the next reconstructions.
Thank you again for your help,
I can't recall the exact numbers. You will need to experiment yourself.
Ok, I proceeded with more optimization, and the texture is better now:
Then I suppose that the process I followed is correct.
Thank you,
@jgallegov Nice! Looks a lot better.
The default is 10 for pose and 15 for shape. What settings did you use this time?
Hello,
yes, the results are far better now. The parameters I used:
opt_steps_pose=20 opt_steps_shape=30
More experimentation is needed, but I hope these parameters will help.
Thanks,
Great! Will try that out.
Thanks! @jgallegov
Hello, could you tell me how to get the "vertices" from Octopus? Thank you! @jgallegov
Hello,
I have generated the frame_data.pkl by using the vertices that octopus.py returns from the function predict(self, segmentations, joints_2d). The way to do it is:
In infer_single.py, go to line 64: write_mesh('{}/{}.obj'.format(out_dir, name), pred['vertices'][0], pred['faces'])
After this line, add the following (with import pickle at the top of the file):
width = 1080
height = 1080
camera_c = [540.0, 540.0]
camera_f = [1080, 1080]
vertices = pred['vertices']
data_to_save = {'width': width, 'height': height, 'camera_c': camera_c, 'camera_f': camera_f, 'vertices': vertices}
with open('frame_data.pkl', 'wb') as pickle_out:
    pickle.dump(data_to_save, pickle_out)
print('Done.')
Run the code again, and you will obtain frame_data.pkl. That's all. I hope it helps.
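As a quick sanity check, a sketch of loading the pickle back and verifying it contains the keys written by the snippet above (the helper name `verify_frame_data` is illustrative, not from the repo):

```python
import pickle

def verify_frame_data(path='frame_data.pkl'):
    """Illustrative check: load a frame_data pickle and assert the
    keys written by the snippet above are all present."""
    with open(path, 'rb') as f:
        frame_data = pickle.load(f)
    for key in ('width', 'height', 'camera_c', 'camera_f', 'vertices'):
        assert key in frame_data, 'missing key: {}'.format(key)
    return frame_data
```

If this passes, the file at least has the structure shown above; whether the vertices align with the images still depends on the optimization settings discussed earlier in this thread.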
Thank you again to @thmoa and his group for their help and talent.
Jaume
Hello, I did as you told and It worked, Thank you very much! @jgallegov
When you take your own videos, what are the camera parameter values?
I am using Octopus, and in the last line of infer_single.py we get pred, which is then written out as an OBJ. But we are only writing faces and vertices.
a) Is there something else, like vt and other information, that we need to write into the OBJ? Should we use the vt values from the basic model npy file for this step?
b) How do we create the frame_data.pkl so that it matches the OBJ that comes out of Octopus?
Neither of these parts is very clear. Otherwise, your code for texture and Octopus is deployed and running for the default data only.