ZouaghiHoussem closed this issue 2 years ago
Testing the model gives me black images, any help please?
First of all, the fake G-buffers are only intended to show how G-buffers need to be fed into the model. The parameters in the config are not at all set up for fake G-buffers, and you shouldn't expect reasonable results with them. The configs we provide are for a full set of G-buffers extracted from the game. It's likely that you did nothing wrong here and what you're seeing is just an artifact of using fake G-buffers. To better understand the issue, I recommend parsing the log file and plotting the entries: 'ds' entries are discriminator losses, 'gs' the generator losses. 'vgg' is the perceptual loss and a good proxy for how much images are modified. Looking at a couple of samples corresponding to specific values of the loss can help calibrate the current status. 'rdf' and 'rdr' are the percentages of pixels from fake and real images, respectively, that are classified as real. You want those to be roughly in the range of 0.6 to 0.9.
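To make the log-parsing suggestion concrete, here is a minimal sketch. The exact log format is an assumption on my part (key/value pairs such as `ds: 0.43` somewhere on a line); adjust the regex to whatever your run actually writes, and feed the resulting series into matplotlib or similar to plot them.

```python
import re
from collections import defaultdict

# Keys described above: discriminator/generator losses, perceptual loss,
# and the fraction of fake/real pixels classified as real.
KEYS = ("ds", "gs", "vgg", "rdf", "rdr")

def parse_log(path):
    """Collect numeric values per key from a training log.

    Assumes lines contain 'key: value' (or 'key=value') pairs; this is a
    guess at the format, not taken from the repo.
    """
    series = defaultdict(list)
    pattern = re.compile(r"\b(%s)\s*[:=]\s*([-+0-9.eE]+)" % "|".join(KEYS))
    with open(path) as f:
        for line in f:
            for key, value in pattern.findall(line):
                series[key].append(float(value))
    return series
```

With the series in hand you can check, for example, whether the last `rdf`/`rdr` values fall into the healthy 0.6 to 0.9 range mentioned above.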
Hey, I ran into the same problem. @ZouaghiHoussem, I'd be happy to know how you solved it. I assume these configs are supposed to be changed:
```yaml
generator:
  type: hr
  config:
    encoder_type: ENCODER
    stem_norm: group
    num_stages: 4
    other_norm: group
    gbuffer_norm: RAD
    gbuffer_encoder_norm: residual
    num_gbuffer_layers: 3
optimizer:
  type: adam
  learning_rate: 0.0001
  adam_beta: 0.9
  adam_beta2: 0.999
  clip_gradient_norm: 1000
scheduler:
  type: 'step'
  step: 100000
  gamma: 0.5
```
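If you want to sweep a few of these settings without editing the file by hand, one option is to mirror the config as a nested dict and override entries by dotted path. This is only a generic sketch; the key names are copied from the snippet above, not verified against the repo's config loader.

```python
# Nested dict mirroring part of the YAML snippet above (values from the post).
CONFIG = {
    "optimizer": {
        "type": "adam",
        "learning_rate": 0.0001,
        "adam_beta": 0.9,
        "adam_beta2": 0.999,
        "clip_gradient_norm": 1000,
    },
    "scheduler": {"type": "step", "step": 100000, "gamma": 0.5},
}

def set_option(cfg, dotted_key, value):
    """Set a nested option via a dotted path, e.g. 'optimizer.learning_rate'."""
    *parents, leaf = dotted_key.split(".")
    node = cfg
    for key in parents:
        node = node[key]
    node[leaf] = value
```

For example, `set_option(CONFIG, "optimizer.learning_rate", 5e-5)` halves the learning rate before a new run.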
Thanks! Eyal
UPDATE: I was eventually able to solve it: https://github.com/isl-org/PhotorealismEnhancement/issues/33 BTW, @srrichter - it happens when working with G-buffers as well (other G-buffers, not GTA; I used Carla)
Hope it helps!
Hello, I'm deploying this amazing project and just noticed your reply here. So, you used Carla to get G-buffers directly? If so, could you give me some hints on how to work with that? I thought it was supposed to be done with UE to capture the G-buffers. Thanks in advance.
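For what it's worth, once per-pixel buffers (color, depth, normals, segmentation, etc.) have been exported from whatever engine or simulator, they can be stacked along the channel axis into a single G-buffer array before being fed to the model. This is only a generic sketch with placeholder inputs, not the repo's actual data pipeline:

```python
import numpy as np

def stack_gbuffers(*buffers):
    """Stack per-pixel buffers into one (H, W, total_channels) float array.

    Each buffer is (H, W) or (H, W, C); single-channel maps such as depth
    are expanded to (H, W, 1) so everything concatenates cleanly.
    """
    expanded = []
    for buf in buffers:
        arr = np.asarray(buf, dtype=np.float32)
        if arr.ndim == 2:  # e.g. a depth map saved as a single channel
            arr = arr[..., None]
        expanded.append(arr)
    return np.concatenate(expanded, axis=-1)
```

For example, a 3-channel color image, a depth map, a 3-channel normal map, and a 1-channel label map would yield an 8-channel G-buffer per pixel.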
Hello Pros,
and also maybe @EyalMichaeli - I mentioned you because I saw you a few times in the issues, so maybe you could help me as well.
So I tried to train it, following the steps, and for the dataset I just used Playing for Data, because I am stuck at MSeg: it took 10 minutes for just 1 image and I only got a gray image. Because of that, I now only have the robust label maps (gray images) for the folders 01_images and 02_images of Playing for Data.
That's why my dataset is: real images -> 01_images, rendered images -> 02_images.
I followed the steps, and when I wanted to train I got the following error:
Do you know why I got num_samples = 0? Thank you very much for the help.
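In my experience, num_samples = 0 usually means the dataset file ended up empty or the listed paths don't resolve on disk. Assuming the training set is driven by a text file listing sample paths, one per line (possibly comma-separated columns) - which is an assumption about the setup, not something taken from the repo - a quick sanity check could look like:

```python
import os

def check_dataset_file(path):
    """Count listed samples and report entries whose first path is missing.

    Assumes one sample per line; if a line has comma-separated columns,
    only the first column is treated as a file path.
    """
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    missing = [ln for ln in lines if not os.path.exists(ln.split(",")[0])]
    return len(lines), missing
```

If the count is 0 or everything lands in `missing`, the loader will see an empty dataset, which matches the error above.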
I am using my own data to train the discriminator. I kept the code as it is; my images are 960x540, and I used your code to simulate the G-buffers and your code to generate the crops and matching. When training, the data is loaded correctly and starts showing values, but soon after (about 1 hour) all values are replaced by nan. Can you help me?
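When losses turn to nan partway through training, the usual suspects are an exploding gradient or an invalid operation (e.g. a log of zero) in a loss term; lowering the learning rate or tightening clip_gradient_norm often helps. A framework-agnostic sketch of a guard that aborts before nans propagate - the loss names and the idea of calling this each step are placeholders, not the repo's actual training loop:

```python
import math

def check_losses(step, losses):
    """Raise as soon as any loss is non-finite, so the last good checkpoint survives.

    `losses` is a dict such as {'ds': 0.4, 'gs': 1.1, 'vgg': 0.03}
    (hypothetical names matching the log keys discussed above).
    """
    for name, value in losses.items():
        if not math.isfinite(value):
            raise RuntimeError(
                f"loss '{name}' became {value} at step {step}; "
                "try a lower learning rate or stricter gradient clipping"
            )
```

Calling such a check every iteration makes it easy to pinpoint which term blew up first, which in turn tells you whether to look at the discriminator, the generator, or the perceptual loss.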