Open saudades18 opened 1 year ago
Hi @saudades18! Could you please describe how you launch the code on 40 images?
Thanks for your reply @VanessaSklyarova. I just changed the monocular dataset and added [:40] at the end of the string in lines 330-340.
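For reference, the change amounts to truncating the file lists before loading. A minimal sketch (the function and argument names here are illustrative, not the actual ones in dataset.py — the key point is that all per-view lists must be sliced consistently):

```python
def limit_views(image_paths, mask_paths, limit=40):
    """Keep only the first `limit` views.

    Images and masks (and any other per-view arrays) must be sliced
    together, otherwise views and their masks go out of alignment.
    """
    return image_paths[:limit], mask_paths[:limit]


# Example: 100 views reduced to 40
imgs, masks = limit_views(
    [f"img_{i:03d}.png" for i in range(100)],
    [f"mask_{i:03d}.png" for i in range(100)],
)
```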
I also tested on the H3DS dataset; however, during training it still reports "NaN during backprop was found, skipping iteration". Is that normal?
@saudades18 did you face any error loading weights in https://github.com/SamsungLabs/NeuralHaircut/blob/main/src/models/dataset.py#L378, and did you use the configs from ./example?
Yes, "NaN during backprop was found, skipping iteration" is okay if it happens not frequently.
@VanessaSklyarova I did not face a weight-loading error; however, "NaN during backprop was found, skipping iteration" happens frequently, on about 90% of iterations.
I use the configs from ./example and just follow the commands in https://github.com/SamsungLabs/NeuralHaircut/tree/main/example/readme.md.
When I use the H3DS dataset, the hair is optimized, but when using the data you provide (this time with all images), the loss still does not decrease, and the hair primitives remain long and straight.
Could you please provide the checkpoint for the second stage?
Thanks for your reply :)
@saudades18 Could you set https://github.com/SamsungLabs/NeuralHaircut/blob/main/configs/example_config/hair_strands_textured.yaml#L54 to false, change https://github.com/SamsungLabs/NeuralHaircut/blob/main/configs/example_config/hair_strands_textured.yaml#L77 and https://github.com/SamsungLabs/NeuralHaircut/blob/main/configs/example_config/hair_strands_textured.yaml#L78 to 0, and check whether the loss decreases over time? (You could try it on 40 images as well.)
@VanessaSklyarova Thanks for your kind reply! Now the hair is optimized, and the losses are decreasing except for the hair_L_diff loss. Previously, most of the rendered images were black; maybe that is the reason? So how can I train successfully with rendering? Maybe first train without the render loss to get an approximate hair shape, and then use all losses to refine? Would that help?
@saudades18 Yes, it should be more stable if you start rendering after approx. 1000 steps, but it is still very strange that it doesn't work from the beginning. I didn't face such rendering problems before when checking on different scenes, so I'll have a look at this.
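The suggestion above amounts to a warm-up schedule: keep the render loss at zero until the geometry has roughly settled, then switch it on. A sketch of the idea (function and weight names are illustrative; only the 1000-step threshold comes from this thread):

```python
def loss_weights(step, warmup_steps=1000, render_weight=1.0):
    """Return per-loss weights for the current step.

    The render term is disabled during warm-up so early NaN-prone
    rendering gradients cannot derail geometry optimization.
    """
    weights = {"geometry": 1.0, "render": 0.0}
    if step >= warmup_steps:
        weights["render"] = render_weight
    return weights
```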
@VanessaSklyarova I started rendering after 1000 steps, but the rendered images are still not optimized, and "NaN" still happens frequently, so iterations keep getting skipped. Below is one of the rendered images; it looks like a rasterized image rather than a real RGB render.
Hi, can someone help me solve this error? I tried to train the second stage using the data you provide (because of memory limitations, I use 40 images), and the loss isn't decreasing.