mit-gfx / diffmat

PyTorch-based differentiable material graph library for procedural material capture

can't match a real image #9

vinowan opened this issue 6 months ago

vinowan commented 6 months ago

I just ran a quick test.

The target image is a photo of bricks.

I ran the following commands:

python .\test_optimizer.py .\sbs\match_v1\bricks.sbs -e -im bricks.jpg

python .\test_hybrid_optimizer.py .\sbs\match_v1\bricks.sbs -e -im bricks.jpg -m combine  -ip .\result\bricks\optim_bricks\checkpoints\optimized.pth -o bricks_hyper 

The resulting optimized image still does not match the target.

The simulated annealing algorithm seems to be unable to find the correct scale parameters.

Polar1s commented 6 months ago

Hi @vinowan. Could you add -a grid to the command that calls the hybrid optimizer? The mixed-integer optimization algorithm used in our paper is coordinate descent + grid search, not simulated annealing.
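For example, appending -a grid to the hybrid optimizer command above gives (all other flags unchanged):

python .\test_hybrid_optimizer.py .\sbs\match_v1\bricks.sbs -e -im bricks.jpg -m combine -a grid -ip .\result\bricks\optim_bricks\checkpoints\optimized.pth -o bricks_hyper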

mfischer-ucl commented 4 months ago

Hi @Polar1s, thanks for the great work! A question regarding the above issue: I tried running the script with the -a grid option you proposed. It is taking forever (6 hrs in, and still running). Is it intended that there are 10k grid-search iterations, and that during each grid-search iteration there are 100 line-search iterations per parameter? For the brick example above, this would amount to 47 parameters × 100 line-search iterations × 10k grid-search iterations, i.e. 47 million iterations. I suspect this many renderings is what increases the runtime so much. Thanks :)
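For reference, here is a back-of-the-envelope count of the renderings implied by those numbers (assuming one rendering per line-search candidate, which may not exactly match the implementation):

# Rough rendering count for the brick graph under the settings described above.
num_params = 47                # optimizable parameters in the brick graph
line_search_iters = 100        # candidates evaluated per parameter per sweep
grid_search_iters = 10_000     # outer grid-search iterations observed
total = num_params * line_search_iters * grid_search_iters
print(total)                   # 47,000,000 renderings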

Polar1s commented 3 months ago

Hi @mfischer-ucl,

Sorry for getting back to you late, and thank you for providing the details of your experiment. The optimization time indeed scales with the number of renderings, and 10k grid-search iterations with 100 line-search iterations each is far more than necessary.

For your reference, we only ran 5 grid search iterations with 100 line search iterations (if I recall correctly) for the results in the paper. We observed that the coordinate descent optimizer typically converged after ~5 grid search iterations.
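As a rough illustration of how those two numbers interact, here is a simplified sketch of the loop structure (a sketch only, not diffmat's actual implementation; render_loss and the parameter bounds are hypothetical placeholders):

def coordinate_descent(params, bounds, render_loss,
                       num_grid_iters=5, num_line_steps=100):
    # params:      dict of integer parameter values, e.g. {"bricks_x": 8}
    # bounds:      dict mapping each parameter name to its integer (lo, hi) range
    # render_loss: callable taking a params dict and returning a scalar image loss
    best_loss = render_loss(params)
    for _ in range(num_grid_iters):                # outer grid-search sweeps (~5 in the paper)
        for name, (lo, hi) in bounds.items():      # coordinate descent: one parameter at a time
            step = max(1, (hi - lo) // num_line_steps)
            for value in range(lo, hi + 1, step):  # bounded line search (~100 candidates)
                trial = {**params, name: value}
                loss = render_loss(trial)          # one rendering per candidate
                if loss < best_loss:
                    best_loss, params = loss, trial
    return params, best_loss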

Meanwhile, a large number of parameters (especially continuous ones) also adds to the overall time cost. I would suggest exposing only the subset of parameters you are most interested in and limiting the optimization to those exposed parameters. Please let me know if you run into further issues. Have a great day!