andrewjong / ShineOn-Virtual-Tryon

Code for "ShineOn: Illuminating Design Choices for Practical Video-based Virtual Clothing Try-on", accepted at WACV 2021 Generation of Human Behavior Workshop.

[1.1] MultiSPADE Generator, WITH adversarial loss, on a SINGLE frame, small multiscale weight #80

Closed andrewjong closed 3 years ago

andrewjong commented 4 years ago

Description

Reason:

Planned Start Date: 9/1/2020
Depends on Previous Experiment? Yes, follow-up of Experiment 1.0

Train Command

python train.py \
--name "multiSPADE-generator_with-adversarial-loss_1-image-only_small-multiscale-weight" \
--model sams \
--gpu_ids 2,3,4 \
--ngf_pow_outer 6 \
--ngf_pow_inner 10 \
--n_frames_total 1 \
--batch_size 8 \
--workers 4 \
--vvt_data data \
--val_check 0.2 \
--wt_multiscale 0.05 --wt_temporal 0.001
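The `--wt_multiscale` and `--wt_temporal` flags suggest the generator objective is a weighted sum of reconstruction and adversarial terms. A minimal sketch of that weighting, using hypothetical names that mirror the CLI flags (the actual `sams` model code may combine terms differently):

```python
# Hypothetical sketch of a weighted multi-term generator loss.
# Term names mirror the CLI flags above; not the repo's actual code.

def total_generator_loss(l1, vgg, multiscale_adv, temporal,
                         wt_multiscale=0.05, wt_temporal=0.001):
    """Combine reconstruction (L1, VGG) and adversarial terms into one scalar."""
    return l1 + vgg + wt_multiscale * multiscale_adv + wt_temporal * temporal

# With the small multiscale weight, even a large adversarial value
# contributes only modestly to the total:
loss = total_generator_loss(l1=1.0, vgg=0.8, multiscale_adv=2.0, temporal=0.5)
# 1.0 + 0.8 + 0.05*2.0 + 0.001*0.5 = 1.9005
```

This makes the experiment's intent concrete: shrinking `wt_multiscale` reduces how much the adversarial signal competes with the L1/VGG reconstruction terms.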

Report Results

To report a result, copy this into a comment below:

# Result Description
<!--- 
For Experiment Number, use "Major.minor.patch", e.g. 1.2.0.
Major.minor should match the [M.m] in the title. 
Patch describes a bug fix (change in the code or branch).
-->
**Experiment Number:** 1.2.0
**Branch:** `master`
**Timestamp:** MM/DD/YYYY 9pm PT
**Epochs:** 

# Architecture
**Model Layers:**
<!-- Paste the printed Model Layers -->

**Module Parameters:**
<!-- Paste the Params table -->

# Loss Graphs
<!--- Put detailed loss graphs here. Please include all graphs! -->

# Image Results
<!--- Put detailed image results here. Please include all images! Multiple screenshots is good. -->

# Comments, Observations, or Insights
<!--- Optional -->
veralauee commented 4 years ago

Result Description

Experiment Number: 1.1.0
Branch: master
Timestamp: 09/02/2020 10am PT
Epochs: 4

Loss Graphs

training loss: [image]

validation loss: [image]

Image Results

training images: [images]

validation images: [images]

andrewjong commented 4 years ago

Based on the loss graphs, I think we could scale the multiscale adversarial weight down even further, maybe to 0.02 or 0.01. Thoughts @gauravkuppa ?

gauravkuppa commented 4 years ago

I see that the adversarial loss is decreasing at a slower rate. It seems it is not dominating the total loss the way we wanted. Is the goal to stabilize the adversarial loss further?

Also, side note: your intuition behind the temporal loss was right. It seems to be working now.

andrewjong commented 4 years ago

The goal would be to bring the adversarial loss to about the same magnitude as the L1/VGG losses, or even a bit less.
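One way to pick such a weight is a back-of-envelope scaling from the observed loss magnitudes. A hypothetical helper (not from the repo) that solves for the weight bringing the raw adversarial term to a target fraction of the reconstruction loss:

```python
def matching_weight(recon_loss, adv_loss, factor=1.0):
    """Weight w such that w * adv_loss == factor * recon_loss.

    recon_loss: typical magnitude of the L1/VGG reconstruction term.
    adv_loss:   typical magnitude of the raw adversarial term.
    factor:     desired ratio (use < 1.0 to keep adversarial slightly smaller).
    """
    return factor * recon_loss / adv_loss

# Example: if L1/VGG sits around 0.5 and the raw adversarial term around 10,
# a weight near 0.05 puts them at comparable scale:
w = matching_weight(0.5, 10.0)  # 0.05
```

The magnitudes here are illustrative; in practice they would be read off the loss graphs above before choosing between 0.02 and 0.01.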

gauravkuppa commented 4 years ago

In that case, I think it makes sense to scale adversarial loss weight down.

veralauee commented 4 years ago

Result Description

Experiment Number: 1.1.0 (finished training from earlier result)
Branch: master
Timestamp: 09/02/2020 9pm PT
Epochs: 10

Loss Graphs

train loss: [image]

val loss: [image]

Image Results

train images: [images]

val images: [images]

andrewjong commented 4 years ago

Observation:

Hypothesis:

Related Issues:

Code changed:

New Experiment:

@veralauee please run the same command/experiment again, Experiment number 1.1.1.

You will have to follow the install instructions again (delete the current environment, recreate it, and switch to the new one). Follow the install instructions here.

veralauee commented 4 years ago

Result Description

Experiment Number: 1.1.1
Branch: master
Timestamp: 09/08/2020 2pm PT
Epochs: 01

Loss Graphs

train: [image]

val: [image]

Image Results

train images: [images]

val images: [images]

veralauee commented 4 years ago

Result Description

Experiment Number: 1.1.1
Branch: master
Timestamp: 09/08/2020 2pm PT
Epochs: 05

Loss Graphs

train: [image]

val: [image]

Image Results

train images: [images]

val images: [images]

andrewjong commented 4 years ago

Further progress blocked by #88 (figure out why overfitting)