-
["Improving GAN Training with Probability Ratio Clipping [PPO] and Sample Reweighting", Wu et al 2020](https://arxiv.org/abs/2006.06900)/["Top-_k_ Training of GANs: Improving GAN Performance by Throwi…
gwern updated
3 years ago
-
Hi all,
I am training StyleGAN3 on my own data, some organic shots.
I used to train StyleGAN2-ADA as well.
I am trying to understand the parameters that StyleGAN3 adds to generation,…
-
A large number of [examples](https://github.com/apache/incubator-mxnet/tree/v1.7.x/example) in the official mxnet repo are using the Module APIs for training. Since the Module APIs will be removed in m…
-
I was going through your research paper and the code in parallel. **I was not able to find how you used UNIT (CycleGAN) to perform style transfer and then content manipulation.**
Ki…
-
Creating a typeface has always been hard, and Chinese typefaces are harder still: there are more than 26,000 Chinese characters, so completing a full set of designs takes a very long time.
GitHub user kaonashi-tyc's solution is to hand-design only part of the typeface and have a deep-learning algorithm generate the rest; after all, Chinese characters are themselves assembled from various "components".
So the author recast font design as a "style transfer" problem. Using two different fonts as training data, he trained a neu…
-
Hi, I was somewhat confused about why self.G_loss only contains the feature loss and the style loss.
If so, I think it might not be a GAN's adversarial training process, because you only update the par…
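For context, a generator loss in a standard GAN setup would add an adversarial term on top of the feature and style losses. A hypothetical PyTorch sketch (not the repo's code; `generator_loss`, its weights, and the dummy tensors are stand-ins):

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, feature_loss, style_loss, adv_weight=1.0):
    # Adversarial term: push the discriminator's logits on generated
    # images toward the "real" label (non-saturating GAN loss).
    adv_loss = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return feature_loss + style_loss + adv_weight * adv_loss

# Tiny usage example with dummy values:
logits = torch.zeros(4, 1)  # discriminator's raw outputs on fake images
loss = generator_loss(logits,
                      feature_loss=torch.tensor(0.5),
                      style_loss=torch.tensor(0.2))
```

Without a term like `adv_loss` (and a corresponding discriminator update), the generator is just doing supervised feature/style matching rather than adversarial training, which is exactly the concern raised above.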
-
I am using train_dcgan.py to train the network on the outdoor_64 images. However, after about 10 epochs of training, the images generated by the network turn gray. Here is an example output file:
![…
-
I would like to categorically state that the paper **"StyleNet: Generating Attractive Visual Captions with Styles"** from Microsoft is **non-reproducible**. This is not just from the code based on th…
-
### Last week's work
1. Finished translating the paper Multi-Content GAN for Few-Shot Font Style Transfer.
2. Watched part of Hung-yi Lee's video lectures on GANs.
3. Read [In Depth | From Beginner to Master: A Beginner's Guide to Convolutional Neural Networks (with papers)](https://mp.weixin.qq.com/s?__biz=MzA3MzI4MjgzMw==&mid=2650717691&idx=…
-
Hello, I'm trying to invert real images into the W latent space. I already used IDInvert, but their W output is 14x512, while StyleGAN accepts latent codes of size 18x512.
Do you have the…
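One common workaround for the shape mismatch (a sketch, not something from the IDInvert authors) is to expand the 14x512 W+ code to 18x512 by repeating the last style row for the extra layers, since in StyleGAN the later rows control the finest details:

```python
import numpy as np

# Hypothetical sketch: a 14x512 W+ code corresponds to a 256px StyleGAN
# (14 style layers); an 18x512 code corresponds to a 1024px one. Pad the
# shorter code by repeating its final row for the 4 extra layers.
w_14 = np.random.randn(14, 512).astype(np.float32)  # e.g. from IDInvert

extra = np.repeat(w_14[-1:], 18 - 14, axis=0)       # copy last row 4 times
w_18 = np.concatenate([w_14, extra], axis=0)

print(w_18.shape)  # (18, 512)
```

Whether the repeated rows give sensible fine detail depends on the generator; retraining or re-optimizing the inversion directly against the 18-layer model is the more principled fix.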