Evolving-AI-Lab / ppgn

Code for paper "Plug and Play Generative Networks"
MIT License

where is the training code? #2

Open zdx3578 opened 7 years ago

zdx3578 commented 7 years ago

I see there is only test code. Can the training config be released?

gcr commented 7 years ago

Agreed. How is it possible to train the generator model?

anguyen8 commented 7 years ago

@zdx3578 @gcr Hey guys, sorry for the late response.

Please find the training code of the Noiseless PPGN-h attached (not so well documented): http://www.cs.uwyo.edu/~anguyen8/share/train_upconv_noiseless.tar.gz

You'd have to replace a few symlinks (to the LMDB datasets and encoder.caffemodel) with your own datasets and encoder networks (or you can use the BVLC Caffe reference model, as we did in the paper).
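A minimal sketch of how those symlinks might be repointed (the symlink and target names below are assumptions; inspect the unpacked archive for the actual file names):

    import os

    # Hypothetical names -- check the unpacked archive for the real symlink names.
    links = {
        'ilsvrc2012_train_lmdb': '/path/to/your/train_lmdb',
        'encoder.caffemodel': '/path/to/your/encoder.caffemodel',
    }

    for link_name, target in links.items():
        if os.path.islink(link_name):
            os.remove(link_name)          # drop the stale symlink shipped in the archive
        os.symlink(target, link_name)     # point it at your own dataset / encoder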

Note that it takes ~12 days to fully train this net on ImageNet on a single TitanX using Caffe. If I were starting from scratch, I'd do it in TensorFlow now, at least to take advantage of multi-GPU training.
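For reference, a minimal sketch of launching such a Caffe training run from pycaffe on a single GPU (the solver file name is an assumption; substitute whichever solver prototxt ships in the archive):

    import caffe

    caffe.set_device(0)      # single GPU, as in the paper's setup
    caffe.set_mode_gpu()

    # 'solver.prototxt' is an assumed name; use the solver file from the archive.
    solver = caffe.SGDSolver('solver.prototxt')
    solver.solve()           # train until max_iter as defined in the solver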

Feel free to ask if you have questions.

gyingqiang commented 7 years ago

Hi, when I run your code (http://www.cs.uwyo.edu/~anguyen8/share/train_upconv_noiseless.tar.gz) I get an error on a Caffe layer of type Eltwise: it must have two blobs as inputs, but yours has only one, so it doesn't seem to work. Can I remove this layer?
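For context, the stock Caffe Eltwise layer does require at least two bottom blobs; a minimal NetSpec sketch of the standard usage (the layer names here are illustrative, not taken from the PPGN prototxt):

    from caffe import layers as L, params as P

    def eltwise_sum_example(n):
        # Eltwise needs two (or more) bottoms of identical shape; with a single
        # bottom the stock Caffe implementation raises the error described above.
        n.summed = L.Eltwise(n.branch_a, n.branch_b,
                             operation=P.Eltwise.SUM)
        return n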

anguyen8 commented 7 years ago

@gyingqiang: you could use my Caffe version for best compatibility: http://www.cs.uwyo.edu/~anguyen8/share/caffe_upconv.tar.gz

gyingqiang commented 7 years ago

Thank you very much.

Hidden-dreamz commented 7 years ago

What command do I have to run to train? And how should I prepare my image set (file sizes, names, folder layout)? Any help would be appreciated!

chuanzihe commented 6 years ago

@anguyen8 Hi, may I ask about your insights on the discriminator design?


    # Push real images to D
    D.net.blobs['data'].data[...] = img_real
    D.net.blobs['label'].data[...] = np.zeros((batch_size,1,1,1), dtype='float32')
    D.net.blobs['feat'].data[...] = feat_real

    # Run D on the fake data
    D.net.blobs['data'].data[...] = img_fake
    D.net.blobs['label'].data[...] = np.ones((batch_size,1,1,1), dtype='float32') 
    D.net.blobs['feat'].data[...] = feat_real 

Why would we want feat_real as an input to the discriminator as well, instead of feeding it only real and fake images?

Thanks for any answer.

anguyen8 commented 6 years ago

@clairehe: it was one of the tricks we tried in the early days to condition this GAN on features. It did not help much, though (so it was not reported in the paper).
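A minimal sketch of how one discriminator update with that feature conditioning would typically be driven in pycaffe (blob names follow the snippet above; the plain forward/backward calls and helper are assumptions, not the exact training script):

    import numpy as np

    def d_step(D, img, label_value, feat, batch_size):
        # Fill the discriminator's input blobs: the image, the real/fake label,
        # and the (real) encoder features used to condition D.
        D.net.blobs['data'].data[...] = img
        D.net.blobs['label'].data[...] = np.full((batch_size, 1, 1, 1), label_value,
                                                 dtype='float32')
        D.net.blobs['feat'].data[...] = feat
        D.net.forward()     # compute the discriminator loss for this batch
        D.net.backward()    # accumulate gradients for the subsequent weight update

    # One update sees both a real and a fake batch, conditioned on the same features:
    # d_step(D, img_real, 0, feat_real, batch_size)
    # d_step(D, img_fake, 1, feat_real, batch_size)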