andrewjong / SwapNet

Virtual Clothing Try-on with Deep Learning. PyTorch reproduction of SwapNet by Raj et al. 2018. Now with Docker support!

How to test it? #13

Closed bbertolucci closed 4 years ago

bbertolucci commented 4 years ago

I am sorry, I am still a beginner in DL and I am not sure how to test it. By testing I mean:

1. I have an original image of someone wearing clothes,
2. I have an image of the isolated cloth I want to swap in,
3. I have a destination image where someone is wearing another cloth.

If I understand correctly, 1) should correspond to the cloth folder, 2) to the texture folder, and 3) to the body folder.

But with all this, what command should I run? And where do I find the result image (where the cloth in 3) has been swapped with 2))? Also, are 1), 2), and 3) plain JPEG images of the original pictures, not segmented ones?

Thank you for your splendid work!

andrewjong commented 4 years ago

Hello! Happy to hear you're interested. The process is actually quite complicated, which I think is one of the drawbacks of SwapNet (it took quite a lot of effort even to get this repository up).

First of all, you need trained SwapNet models for both the Warp and Texture stages. As noted in this thread, I'm currently preoccupied, and the earliest I can personally train these is late December. If you have your own compute power, I suggest training on the Deep Fashion dataset yourself.
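
Roughly, training the two stages looks like this (run names and paths here are just examples; double-check the exact flags against the README and the options/ files):

    # train the warp stage, then the texture stage (example run names)
    python train.py --name deep_fashion/warp --model warp --dataroot data/deep_fashion
    python train.py --name deep_fashion/texture --model texture --dataroot data/deep_fashion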

Second, SwapNet requires preprocessing the input into a body segmentation and a cloth segmentation before it is sent to the Warp and Texture models. I forked two other repositories to enable this: this one for body, and this one for cloth.
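
After preprocessing, the data should end up in a layout roughly like this (the directory names mirror the inference flags; the exact root path is up to you):

    data/deep_fashion/
        body/     # body segmentations, from the body preprocessing repo
        cloth/    # cloth segmentations, from the cloth preprocessing repo
        texture/  # original RGB images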

In my opinion, this complexity makes SwapNet impractical for real use. I am planning to do research in virtual try-on myself to address these issues. Hoping to get somewhere with that in ~4 months.

bbertolucci commented 4 years ago

Hi, thank you for your quick answer! I am currently training on the Deep Fashion dataset myself. OK, I understand we need body segmentation and cloth segmentation; I will try to do it. And I suppose I also need to calculate normalization statistics? So I understand the three folders:

 python inference.py --checkpoint checkpoints/deep_fashion \
   --cloth_dir [SOURCE] --texture_dir [SOURCE] --body_dir [TARGET]

Where SOURCE contains the clothing you want to transfer, and TARGET contains the person to place clothing on.

But it doesn't answer my question: where do I find the output, i.e. the photo of the person with the new clothes?

I think it can be used for real. Training takes a lot of computation, but once it's done, producing a result should be quick.

Also, how do we resume training after an aborted epoch?

andrewjong commented 4 years ago

Oh, looks like I forgot to mention that option in the readme. Set --results_dir to choose where the output goes; by default it goes to results/. See the various test/inference options here: https://github.com/andrewjong/SwapNet/blob/master/options/test_options.py
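
For example (with the same [SOURCE]/[TARGET] placeholders as above, and a hypothetical results path):

    python inference.py --checkpoint checkpoints/deep_fashion \
        --cloth_dir [SOURCE] --texture_dir [SOURCE] --body_dir [TARGET] \
        --results_dir results/my_experiment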

For normalization statistics, you can run the tool I made here: https://github.com/andrewjong/SwapNet/blob/master/util/calculate_imagedir_stats.py
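
Conceptually, all that script does is compute per-channel mean and std over an image directory. A minimal standalone sketch of the idea (illustrative only, not the script's exact code; the path below is hypothetical):

    # Per-channel normalization statistics over an image directory.
    from pathlib import Path

    import numpy as np
    from PIL import Image

    def imagedir_stats(image_dir, pattern="*.jpg"):
        """Return per-channel (mean, std) over all matching images, in [0, 1] scale."""
        pixel_sum = np.zeros(3)
        pixel_sq_sum = np.zeros(3)
        n_pixels = 0
        for path in Path(image_dir).glob(pattern):
            img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
            flat = img.reshape(-1, 3)  # (H*W, 3)
            pixel_sum += flat.sum(axis=0)
            pixel_sq_sum += (flat ** 2).sum(axis=0)
            n_pixels += flat.shape[0]
        mean = pixel_sum / n_pixels
        std = np.sqrt(pixel_sq_sum / n_pixels - mean ** 2)  # std = sqrt(E[x^2] - E[x]^2)
        return mean, std

    if __name__ == "__main__":
        mean, std = imagedir_stats("data/deep_fashion/texture")  # hypothetical path
        print("means:", mean, "stds:", std)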

bbertolucci commented 4 years ago

Oh, sorry, I didn't see it! Thank you.

I will continue to follow your work closely; it's a really good project, and you did it very well!

andrewjong commented 4 years ago

Thank you for the kind words :)

jaggernaut007 commented 4 years ago

This thread helps! Thanks!

team5-acnps commented 4 years ago

Hello, can you please guide me on how to test this? What type of files must be placed where, exactly? I am facing the error below:

    inference.py: error: unrecognized arguments: --cloth_dir cloth/images1.jpg

The command I used:

    python inference.py --checkpoint checkpoints/deep_fashion \
        --cloth_dir cloth/images1.jpg --texture_dir txture/images.jpg --body_dir body/images2.jpg

I passed individual images. Please help me out; I am a beginner.
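
For reference: per the earlier explanation, --cloth_dir, --texture_dir, and --body_dir point at directories of images, not individual files. A sketch of setting that up, with hypothetical paths (if the flag itself is unrecognized, also make sure your checkout of inference.py is up to date):

    # place each input image in its own directory, then pass the directories
    mkdir -p data/test/cloth data/test/texture data/test/body
    cp images1.jpg data/test/cloth/
    cp images.jpg  data/test/texture/
    cp images2.jpg data/test/body/
    python inference.py --checkpoint checkpoints/deep_fashion \
        --cloth_dir data/test/cloth --texture_dir data/test/texture \
        --body_dir data/test/body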