tinnunculus / Styled-Attention-Face-Super-Resolution

Face super resolution

Dataset preparation guide #1

LexCybermac opened this issue 4 years ago

LexCybermac commented 4 years ago

Is there a particular fashion in which a dataset such as FFHQ needs to be prepared before passing as an argument when running train.py?

I did a quick test just pointing it at a folder full of the FFHQ images, but it appeared to want a sub-directory named "valid" within that folder. This has me inclined to believe the data must be prepared in a particular fashion that is not yet detailed in the existing documentation.

Some insight here would be much appreciated 😄

tinnunculus commented 4 years ago

@LexCybermac, hello.

I found some errors in train.py and prepare_dataset.py and fixed them, so please try again. You should train it in a CUDA GPU environment, and I recommend training for at least 6~9 epochs (on an RTX 2070 Super, about 3 hours per epoch).

If you have any other problems, please let me know! Thank you~😄

LexCybermac commented 4 years ago

When running the command:

python train.py --data-path F:\Styled-Attention-Face-Super-Resolution\Dataset --low-image-size 128 --log-interval 1 --scale 4

The following traceback is thrown:

  File "train.py", line 251, in <module>
    main()
  File "train.py", line 246, in main
    train(args)
  File "train.py", line 80, in train
    fake_image = G(x, z)
  File "C:\Users\Lex\MiniConda3\envs\safsr\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "F:\Styled-Attention-Face-Super-Resolution\model\model.py", line 257, in forward
    out = self.upsample_1(out, style)
  File "C:\Users\Lex\MiniConda3\envs\safsr\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "F:\Styled-Attention-Face-Super-Resolution\model\model.py", line 201, in forward
    out = self.noise1(out_1, noise_1)
  File "C:\Users\Lex\MiniConda3\envs\safsr\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "F:\Styled-Attention-Face-Super-Resolution\model\model.py", line 84, in forward
    return image + self.weight * noise
RuntimeError: The size of tensor a (32) must match the size of tensor b (8) at non-singleton dimension 3

I'm presuming this has something to do with the value of the low-image-size parameter passed along, but I couldn't say for sure.

tinnunculus commented 4 years ago

Yeah, you are right. I have not prepared for scale 4 and low image size 128 yet; you can only run it with scale 8 and low image size 32. Sorry about that. I plan to generalize over the scale and image size.
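The traceback above can be reproduced in miniature. The shapes below are assumptions inferred from the error message (a 32-pixel feature map meeting an 8-pixel noise buffer); NumPy follows the same broadcasting rules PyTorch applies in `image + self.weight * noise`:

```python
import numpy as np

# Assumed shapes: with --low-image-size 128, the upsampled feature map
# reaches 32 pixels at a stage where the noise tensor was built for the
# default settings and is only 8 pixels wide.
image = np.zeros((1, 64, 32, 32))    # (batch, channels, H, W) feature map
noise = np.random.randn(1, 1, 8, 8)  # noise sized for the default config
weight = 0.1

try:
    out = image + weight * noise     # fails: 32 vs 8 at the last dimension
except ValueError as err:
    print("broadcast error:", err)
```

Neither 32 nor 8 is 1, so broadcasting cannot reconcile the last two dimensions, which is exactly what the PyTorch error reports at "non-singleton dimension 3".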

LexCybermac commented 4 years ago

Aha, I see, my bad then. I assumed that because they were both mentioned as parameters in the documentation, I'd be good to go and use them.

For now, I'll try the default values of 32 and 8 for resolution and scale respectively :D

tinnunculus commented 4 years ago

I recommend splitting the FFHQ dataset into train and test sets, and testing the model on the test set after training. It'll show better results than a real-world image file.

command ex) python train.py --train-test 1 --gen-path ~\test_dataset\69005.png
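The recommended split can be sketched like this. The `train`/`valid` folder names follow the sub-directory the loader complained about earlier in the thread; the helper name and `valid_fraction` value are assumptions for illustration, not the repo's own API:

```python
import os
import random
import shutil

def split_dataset(src, dst, valid_fraction=0.05, seed=0):
    """Copy a flat folder of images into dst/train and dst/valid.

    Hypothetical helper: the train/valid layout matches the
    sub-directory the loader expected, not a documented interface.
    """
    files = sorted(f for f in os.listdir(src) if f.lower().endswith(".png"))
    random.Random(seed).shuffle(files)  # deterministic shuffle for a stable split
    n_valid = int(len(files) * valid_fraction)
    for subset, names in (("valid", files[:n_valid]), ("train", files[n_valid:])):
        out_dir = os.path.join(dst, subset)
        os.makedirs(out_dir, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(src, name), os.path.join(out_dir, name))
```

With the split in place, `--data-path` would point at the parent folder that contains the `train` and `valid` sub-directories.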