Closed: pengsida closed this issue 4 years ago
Thanks for your response! For the depth image, I need to transform it into a 256³ voxel grid, right?
Could you tell me the concrete inference script?
Is it similar to `python generate.py -std_dev 0.1 0.01 -res 32 -m SVR -checkpoint 10 -batch_points 400000`?
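(For the depth-to-voxel step, a minimal back-projection-and-binning sketch — the helper name, the normalization scheme, and the assumption of known pinhole intrinsics are mine, not from this repo:)

```python
import numpy as np

def depth_to_voxels(depth, fx, fy, cx, cy, res=256):
    """Back-project a depth map into a point cloud, then bin the points
    into a res^3 occupancy grid (hypothetical helper, not the repo's code)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)

    # Normalize the cloud into [0, 1) with a uniform scale, then bin.
    mins = points.min(axis=0)
    scale = (points.max(axis=0) - mins).max()
    idx = np.clip(((points - mins) / scale * res).astype(int), 0, res - 1)

    grid = np.zeros((res, res, res), dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```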
Hi,
I updated the test_inference_example.zip file, now you additionally have the voxelization already prepared in the .npz file.
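(To see what such an .npz contains, you can inspect it like this — the key name `occupancies` is my assumption; check `data.files` on the real file:)

```python
import os
import tempfile
import numpy as np

# Fabricate a small archive just to show the access pattern.
path = os.path.join(tempfile.mkdtemp(), "voxelization.npz")
np.savez(path, occupancies=np.zeros((32, 32, 32), dtype=bool))

data = np.load(path)
print(data.files)        # names of the arrays stored in the archive
voxels = data["occupancies"]
print(voxels.shape)      # the voxel grid resolution
```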
You can run your inference after you have trained the model. Pass the same arguments you used for training, e.g. if you trained with -std_dev 0.1 0.01, then yes, also use them for generation.

The checkpoint number also depends on your trained model: use the checkpoint with the smallest validation loss. In the folder of your experiment, "IF-Net/experiments/Your_Experiment", you will find a file like "val_min=10.npy", which tells you that 10 is your current validation minimum. You can also track your losses using tensorboard.
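(The "pick the checkpoint with the smallest validation loss" step can be sketched as a small helper that parses that marker file — the file-name pattern is taken from the description above; the function name is mine:)

```python
import glob
import os
import re

def current_val_min(exp_dir):
    """Return the checkpoint number encoded in the 'val_min=<N>.npy'
    marker file of an experiment folder, or None if absent
    (hypothetical helper based on the file naming described above)."""
    matches = glob.glob(os.path.join(exp_dir, "val_min=*.npy"))
    if not matches:
        return None
    # e.g. 'val_min=10.npy' -> 10
    name = os.path.basename(matches[0])
    return int(re.search(r"val_min=(\d+)\.npy", name).group(1))
```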
Also, I will upload easier-to-use code for IF-Nets that handles the script parameters for you. It should be there in the next few days.
Best, Julian
Cool, thank you very much!
The reconstruction results of your approach are amazing! Could you provide an inference example for single-view human reconstruction?