Closed by andrewjong 3 years ago
Experiment Number: 1.1.0
Branch: master
Timestamp: 09/02/2020 10am PT
Epochs: 4
[attached: training loss graph, validation loss graph, training image, validation image]
Based on the loss graphs, I think we could scale the multiscale adversarial weight down even more, maybe to 0.02 or 0.01. Thoughts @gauravkuppa ?
I see that the adversarial loss is decreasing more slowly. It seems that it is not dominating the total loss, as we wanted. Is the goal to stabilize the adversarial loss further?
Also, a side note: your intuition about the temporal loss was right. It seems to be working now.
The goal would be to bring the adversarial loss to about the same magnitude as the L1/VGG losses, or even a bit less.
In that case, I think it makes sense to scale the adversarial loss weight down.
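To make the rebalancing concrete, here is a minimal sketch of a weighted generator loss. The function and weight names are placeholders for illustration, not the repo's actual config keys; the point is that shrinking the adversarial weight (e.g. to 0.02 or 0.01) pulls that term's contribution down to, or below, the magnitude of the L1/VGG terms.

```python
def total_loss(l1, vgg, adversarial, w_l1=1.0, w_vgg=1.0, w_adv=0.02):
    """Weighted sum of generator loss terms.

    Hypothetical weights: with w_adv scaled down to 0.02, an adversarial
    loss that is raw-magnitude ~10x the L1 term contributes far less
    than L1/VGG to the total.
    """
    return w_l1 * l1 + w_vgg * vgg + w_adv * adversarial

# Raw adversarial loss 10x the L1 term, but its weighted
# contribution (0.02 * 5.0 = 0.1) is the smallest of the three:
print(total_loss(l1=0.5, vgg=0.4, adversarial=5.0))  # ~1.0
```

Whether 0.02 or 0.01 is the right value depends on the raw magnitudes in the loss graphs above; the weight just needs to bring the adversarial term into the same range as (or slightly below) L1/VGG.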
Experiment Number: 1.1.0 (finished training from earlier result)
Branch: master
Timestamp: 09/02/2020 9pm PT
Epochs: 10
[attached: train loss graph, val loss graph, train image, val image]
Observation:
Hypothesis:
Related Issues:
Code changed:
New Experiment:
@veralauee please run the same command/experiment again, Experiment number 1.1.1.
You will have to follow the install instructions here again: delete the current environment, then recreate and switch to the new one.
Experiment Number: 1.1.1
Branch: master
Timestamp: 09/08/2020 2pm PT
Epochs: 01
[attached: train and val loss graphs, train and val images]
Experiment Number: 1.1.1
Branch: master
Timestamp: 09/08/2020 2pm PT
Epochs: 05
[attached: train and val loss graphs, train and val images]
Further progress is blocked by #88 (figure out why it is overfitting).
Description
Reason:
Planned Start Date: 9/1/2020
Depends on Previous Experiment? Yes, follow-up of Experiment 1.0
Train Command
Report Results
To report a result, copy this into a comment below: