taki0112 / UGATIT

Official Tensorflow implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (ICLR 2020)
MIT License
6.17k stars · 1.04k forks

The course of true love never did run smooth #14

Closed opentld closed 5 years ago

opentld commented 5 years ago

I just want to ask, at this speed, can I see the end of training in my lifetime?

(screenshot of training progress attached)

Maybe waiting for the pretrained model is the wiser option for me... @taki0112

hitechbeijing commented 5 years ago

Try turning down the iteration count, epoch count, and batch size.

1mpossibleHacker commented 5 years ago

Can you please tell me how you trained it? I can't get it to train.

hitechbeijing commented 5 years ago

I haven't gotten the best result yet, but it won't take your lifetime. If you use an Intel Core i7 without a deep-learning GPU, it will take up to 1 year.

opentld commented 5 years ago

@hitechbeijing

> Try turning down the iteration count, epoch count, and batch size.

I'm using the default settings: epoch=100, iteration=10000, batch_size=1.

P.S. 1 year is too long for me...

hitechbeijing commented 5 years ago

> I'm using the default settings: epoch=100, iteration=10000, batch_size=1.
>
> P.S. 1 year is too long for me...

Try epoch=10, iteration=1000, batch_size=1.
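For reference, these hyperparameters are passed on the command line. This is a sketch only: the `--epoch`, `--iteration`, `--batch_size`, and `--light` flag names are assumed from this repo's `main.py`; verify them with `python main.py --help` on your version.

```shell
# Reduced-budget training run (sketch; flag names assumed from main.py).
# The dataset must already be unpacked under ./dataset/selfie2anime.
python main.py --dataset selfie2anime \
               --epoch 10 \
               --iteration 1000 \
               --batch_size 1 \
               --light True   # lighter model variant, much lower memory use
```

Note that a budget this small is for sanity-checking the pipeline, not for reproducing the paper's results.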

tafseerahmed commented 5 years ago

What datasets are you using?

hitechbeijing commented 5 years ago

> What datasets are you using?

selfie2anime

tafseerahmed commented 5 years ago

Could you link that?

sdy0803 commented 5 years ago

@hitechbeijing Hi, could you link the dataset you're using?

hitechbeijing commented 5 years ago

I have tested an NVIDIA Tesla V100; it is only about 4 times faster than a desktop CPU, and about as fast as an Intel Xeon Gold CPU.

tafseerahmed commented 5 years ago

Cool, but could you upload and link the dataset you're using?

hitechbeijing commented 5 years ago

> Cool, but could you upload and link the dataset you're using?

See #6. You can use a Tesla T4 or P4; there is no need for a V100, but you must have more than 30 GB of free memory and an 8-core CPU.

HLearning commented 5 years ago

2080 Ti: each epoch takes 2 hours.

hitechbeijing commented 5 years ago

> 2080 Ti: each epoch takes 2 hours.

Have you tried increasing the batch size?

HLearning commented 5 years ago

> 2080 Ti: each epoch takes 2 hours.
>
> Have you tried increasing the batch size?

light=True, epoch=100, iteration=10000, batch_size=1

hitechbeijing commented 5 years ago

> 2080 Ti: each epoch takes 2 hours.
>
> light=True, epoch=100, iteration=10000, batch_size=1

I am not turning on light; I think that is the reason it is slow.
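To put the numbers in this thread in perspective, here is a minimal back-of-envelope calculation. The per-iteration time is a hypothetical figure derived from the "2 hours per epoch" report above; measure your own hardware before trusting the estimate.

```python
# Rough training-time estimate from the figures quoted in this thread.
# sec_per_iter is hardware-dependent; 0.72 s corresponds to the reported
# 2 hours per 10,000-iteration epoch on a 2080 Ti.
def training_time_days(epochs=100, iters_per_epoch=10000, sec_per_iter=0.72):
    total_seconds = epochs * iters_per_epoch * sec_per_iter
    return total_seconds / 86400  # seconds in a day

print(f"{training_time_days():.1f} days")  # → 8.3 days for the default settings
```

So on a 2080 Ti the default schedule is on the order of a week, not a lifetime; a CPU-only run, at tens of times slower, is where the year-long estimates come from.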

taki0112 commented 5 years ago
  1. We released 50-epoch and 100-epoch checkpoints so that people could test more widely.

  2. Also, we published the selfie2anime dataset we used in the paper.

  3. And we fixed the code for smoothing.

  4. For the test image, I recommend that your face be in the center.
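With the released checkpoints, inference can be run without training at all. A sketch, assuming the `--phase test` flag of this repo's `main.py` and the usual checkpoint layout; verify both the flag names and the directory paths against the README.

```shell
# Inference only, using a released checkpoint (sketch; verify flag names
# with `python main.py --help`). Place the unpacked checkpoint directory
# under ./checkpoint and your test images under ./dataset/selfie2anime/testA.
python main.py --dataset selfie2anime --phase test --light True
```

Use `--light True` only if the checkpoint you downloaded was trained with the light model; mixing the two will fail to restore the weights.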