-
## Description
I ran the command below to train the model.
Training dataset: 3 domains, 2,500 training images each, 300 test images
```
!python main.py --mode train --num_domains 3 --w_hpf 0 \
--lambda_reg 1 --lambda_sty 1 --lambda_ds 2 --la…
```
-
How does the code perform image segmentation, and can the dataset you used be provided?
-
Thanks for your work!
I am trying to train Asyrp for animal face editing on my own device. For the attribute _'Happy Dog'_, the training settings are as below:
```
sh_file_name="script_train.sh"
gpu="7"…
```
-
How can I view the G model's outputs?
When I run web_demo.py like this, the web page displays only 3 identical pictures, and 1 picture displays nothing (it is black).
![image](https://user-images.githubusercontent.com/81545966/17826…
-
Hello, I used your settings to train on AFHQ and set the batch size to 4. When switching to the wild-animal domain, the results were very poor: the FID was still 60 at 80,000 steps. Can you help me answer my …
-
Hi @fnzhan !
Thank you for providing your nice implementation.
I have a question about the inputs to the networks, especially for the CelebA edge case.
The correspondence predictor is given RGB images and…
-
Or how do I calculate it?
-
How do I use different datasets, and how should they be arranged inside the data folder?
For example, I want to translate cat images to dogs.
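For repos in the StarGAN v2 family, the data loaders typically infer the domains from the subfolder names under the train/val splits. A minimal sketch of such a layout, using only Python's stdlib (the `data/cat2dog` root and the `cat`/`dog` folder names are illustrative assumptions, not the repo's required names):

```python
import pathlib

# Hypothetical StarGAN-v2-style layout: one subfolder per domain
# under each split; images go to e.g. data/cat2dog/train/cat/0001.jpg.
root = pathlib.Path("data/cat2dog")
for split in ("train", "val"):
    for domain in ("cat", "dog"):
        (root / split / domain).mkdir(parents=True, exist_ok=True)

# The number of domain subfolders must match the --num_domains flag (here 2).
print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_dir()))
```

With this layout, cat-to-dog transfer is just a 2-domain run where each domain is one subfolder.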
-
I re-ran the training with the provided dataset and training code, and I guess the previous errors were due to a mismatch in some 'number' between my custom dataset and AFHQ or CelebA.
Is there any mandator…
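One likely 'number' mismatch is the count of domain subfolders in the dataset not matching the `--num_domains` flag. A quick sanity check under that assumption (the function name and folder-based layout are illustrative, not part of the repo):

```python
import pathlib

def check_num_domains(train_dir: str, num_domains: int) -> list[str]:
    """Return the domain folder names, failing loudly if the count
    does not match the --num_domains flag passed to training."""
    domains = sorted(p.name for p in pathlib.Path(train_dir).iterdir() if p.is_dir())
    if len(domains) != num_domains:
        raise ValueError(
            f"--num_domains is {num_domains} but found {len(domains)} "
            f"domain folders: {domains}"
        )
    return domains
```

Running this against the custom dataset's train directory before launching training surfaces the mismatch immediately instead of as an opaque shape error mid-run.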
-
That is nice work!
I have some concerns about the FID. When we use the training code for the first stage (i.e., training the volume renderer), what is the FID between the 64x64 real images and generat…
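For reference, FID at any resolution is the Fréchet distance between Gaussian fits of real and generated feature statistics. A minimal NumPy sketch of the formula itself (the feature arrays stand in for Inception activations, which a real evaluation would extract first; the eigenvalue trick for the trace of the matrix square root is a standard substitute for `scipy.linalg.sqrtm`):

```python
import numpy as np

def fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet distance between two (N, D) feature sets:
    ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})."""
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_g := feats_gen, rowvar=False)
    # For PSD covariances, eigenvalues of cov_r @ cov_g are real and >= 0,
    # and Tr((C_r C_g)^{1/2}) equals the sum of their square roots.
    ev = np.linalg.eigvals(cov_r @ cov_g)
    tr_sqrt = np.sqrt(np.maximum(ev.real, 0.0)).sum()
    return float(((mu_r - mu_g) ** 2).sum()
                 + np.trace(cov_r) + np.trace(cov_g) - 2.0 * tr_sqrt)
```

Identical feature sets give an FID of (numerically) zero, and the score grows as the two distributions drift apart, which is why comparing 64x64 statistics against statistics computed at a different resolution is not apples-to-apples.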