jerryli27 / TwinGAN

Twin-GAN -- Unpaired Cross-Domain Image Translation with Weight-Sharing GANs
Apache License 2.0
719 stars 99 forks

Inference model trained on Multiple GPU #21

Open veya2ztn opened 5 years ago

veya2ztn commented 5 years ago

When training on multiple GPUs, the saved model (the .meta file) is split into two clones, so all tensor names get a `clone_0/` prefix by default, e.g. `sources_ph` --> `clone_0/sources_ph` and `custom_generated_t_style_source` --> `clone_0/custom_generated_t_style_source`. So if anyone wants to eval or run inference on their own single-GPU machine, be careful when the pre-trained model was trained on multiple GPUs. I recommend using this inference command:

python inference/image_translation_infer.py \
--model_path="/PATH/TO/CHECKPOINT" \
--image_hw=128 \
--input_tensor_name="clone_0/sources_ph" \
--output_tensor_name="clone_0/custom_generated_t_style_source" \
--input_image_path="/PATH/TO/INPUT" \
--output_image_path="/PATH/TO/OUTPUT"
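The renaming above can be sketched as a pair of helpers. These are hypothetical functions for illustration, not part of the TwinGAN codebase; the `clone_0/` prefix is the one described above:

```python
# Hypothetical helpers illustrating the clone-prefix renaming: models trained
# on multiple GPUs save tensors under "clone_0/", so names must be qualified
# before lookup in the multi-GPU graph, or stripped to recover the
# single-GPU name.

CLONE_PREFIX = "clone_0/"

def qualify(tensor_name, multi_gpu=True):
    """Return the tensor name as it appears in a multi-GPU checkpoint."""
    if multi_gpu and not tensor_name.startswith(CLONE_PREFIX):
        return CLONE_PREFIX + tensor_name
    return tensor_name

def strip_clone_prefix(tensor_name):
    """Recover the single-GPU name from a multi-GPU tensor name."""
    if tensor_name.startswith(CLONE_PREFIX):
        return tensor_name[len(CLONE_PREFIX):]
    return tensor_name

print(qualify("sources_ph"))  # clone_0/sources_ph
print(strip_clone_prefix("clone_0/custom_generated_t_style_source"))  # custom_generated_t_style_source
```

With TensorFlow 1.x installed, you can verify which prefix a given checkpoint actually uses by calling `tf.train.import_meta_graph(meta_path)` and then iterating over `tf.get_default_graph().get_operations()` to list the node names.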

And by the way, I am wondering why inference is so slow: loading the weights alone takes 5-10 seconds on my 1080Ti.

jerryli27 commented 5 years ago

Thank you for the pointers on the tensor-name changes for multi-GPU inference! I think the weight-loading time sounds acceptable given the model size and HDD read speed. If you want faster inference, you can change the code to do batching.
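The batching suggestion amounts to paying the checkpoint-loading cost once and reusing the loaded model across many images. A minimal sketch of that pattern, where `BatchedInferer` and `fake_load` are illustrative stand-ins and not the repo's actual API:

```python
# Sketch of amortizing model-load cost with a persistent inferer. The real
# inference script reloads the checkpoint on every invocation, which is
# where the 5-10 s startup goes.

class BatchedInferer:
    def __init__(self, load_fn):
        # Pay the expensive weight-loading cost exactly once.
        self.model = load_fn()

    def infer(self, images, batch_size=8):
        # Feed images through the model in fixed-size batches
        # instead of one process launch per image.
        outputs = []
        for i in range(0, len(images), batch_size):
            batch = images[i:i + batch_size]
            outputs.extend(self.model(batch))
        return outputs

# Toy stand-in for loading the TwinGAN generator: records how many
# times it was called and returns a trivial "model".
load_calls = []
def fake_load():
    load_calls.append(1)
    return lambda batch: [x * 2 for x in batch]

inferer = BatchedInferer(fake_load)
results = inferer.infer(list(range(20)), batch_size=8)
print(len(load_calls))  # 1: weights were loaded only once for 20 inputs
```

In the actual script the same idea means keeping one `tf.Session` alive and calling `sess.run` with a batched input tensor, rather than rebuilding the graph per image.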