Closed: JuanDavidG1997 closed this issue 4 years ago
Hi, if you want to build a new supernet (search space) on top of a UNet structure, you can define it in model_search.py (in the class NAS_GAN). If you want to change the operator choices for each searchable cell, you can add your customized operators in operations.py. Hope this is helpful to you.
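Below is a minimal sketch of what registering a custom operator could look like, assuming operations.py follows the DARTS-style convention of an OPS dictionary mapping operator names to module constructors. The class name, dictionary entries, and constructor signature here are illustrative assumptions; check operations.py for the exact interface the supernet in model_search.py expects.

```python
import torch.nn as nn

class SepConvBlock(nn.Module):
    """Illustrative depthwise-separable conv block usable as a searchable choice."""
    def __init__(self, C_in, C_out, kernel_size=3, stride=1):
        super().__init__()
        padding = kernel_size // 2
        self.op = nn.Sequential(
            # depthwise convolution
            nn.Conv2d(C_in, C_in, kernel_size, stride=stride,
                      padding=padding, groups=C_in, bias=False),
            # pointwise convolution
            nn.Conv2d(C_in, C_out, 1, bias=False),
            nn.BatchNorm2d(C_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.op(x)

# Hypothetical registration: the supernet picks operators by name from this dict.
OPS = {
    'sep_conv_3x3': lambda C_in, C_out, stride: SepConvBlock(C_in, C_out, 3, stride),
    'sep_conv_5x5': lambda C_in, C_out, stride: SepConvBlock(C_in, C_out, 5, stride),
}
```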
Thanks. Sure it is. A question remains. How do I load the generator I want to compress? I mean: I already trained my GAN and found the "best" generator. I would like to use AGD to compress this generator no matter what the compressed structure is (conv, Unet, other). I don't know if I'm being clear enough.
Yes, I think I get the point. You have two options. First, you can follow AGD_ST (for unpaired image translation), which applies the pretrained GAN to all the train/test images before searching to generate a target dataset as the labels (the teacher signal), so that you can directly apply a perceptual loss or MSE loss between the generated images of the student model and the label images for distillation (see the code for reference). Second, you can follow AGD_SR (for super resolution), which applies the pretrained GAN (the teacher model) to the same input as the student model during training to generate the target images on the fly for distillation (see the code for reference).
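Here is a minimal sketch of the second (on-the-fly) setup: the frozen pretrained teacher produces the target image for each batch, and the compressed student is trained against it with an MSE loss. The names `teacher`, `student`, and `loader` are placeholders for your own models and data pipeline, not identifiers from the repo; see the AGD_SR training code for the actual losses and schedules used in the paper.

```python
import torch
import torch.nn.functional as F

teacher.eval()                      # frozen pretrained "best" generator you already trained
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

for inputs in loader:               # e.g. degraded fingerprint images
    with torch.no_grad():
        targets = teacher(inputs)   # teacher signal generated on the fly
    outputs = student(inputs)       # compressed (searched) generator
    loss = F.mse_loss(outputs, targets)  # could also be a perceptual loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```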
Perfect. I will take a look at it and try it the way you explained. Thank you!
Hi. I would like to get your insights on which of your works might help me better with my task. I just saw your work on GAN Slimming. A deeper explanation of my task: I trained a GAN with a UNet generator aiming at reconstructing partially deteriorated fingerprints (burnt, skin illness, others). This generator is supposed to be deployed on a Raspberry Pi-like device, which means I need to compress it in some way in order to fit it on memory-constrained devices and, if possible, speed it up. My guess is that your GAN Slimming repo would be the best solution, but I would really appreciate your thoughts. Thank you, and sorry to bother you.
Hi, I think both repos could be good choices for you.
Thanks a lot. I will consider using both and pick the best result. I see that using GAN Slimming might be easier. I'm having a really hard time getting through AGD's code and trying to figure out how to use it in my case. Do you happen to have any tutorial or any advice on how to do it?
I think you can get familiar with the codebase of DARTS first, which has a similar code structure to mine; then it may be easier for you to follow my code.
You can contact me by email at yf22@rice.edu if you have more questions. I will close this issue, thanks.
Hello!
First of all, awesome code and article. I would like to test this method on a UNet GAN used for fingerprint reconstruction. It is unclear to me how to make the code use a new generator. I see some lines in some .py files are commented out. I'd be grateful if you could guide me a bit through the process.
Thanks a lot