sherwinbahmani / 4dfy

4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling
https://sherwinbahmani.github.io/4dfy/
Apache License 2.0

4dfy result #8

Closed: zimingzhong closed this issue 10 months ago

zimingzhong commented 10 months ago

I have the same issue as #6. I tried to generate 'a_crocodile_playing_a_drum_set' with the default settings and seed, and got the result in a_crocodile_playing_a_drum_set. Is it a problem with the seed?

sherwinbahmani commented 10 months ago

Hi,

Which configs are you using and what code?

zimingzhong commented 10 months ago

Hi,

> Which configs are you using and what code?

Thanks for the quick reply. I use the default config:

```bash
python launch.py --config configs/fourdfy_stage_3.yaml --train --gpu $gpu \
    exp_root_dir=$exp_root_dir seed=$seed \
    system.prompt_processor.prompt="a crocodile playing a drum set" \
    system.weights=$ckpt
```

Stages 1 and 2 show nice static results.
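For context, here is a minimal sketch of the full three-stage pipeline this command belongs to; the stage 1 and 2 config file names and the checkpoint paths are assumptions based on the command above, not taken from this thread.

```bash
# Hedged sketch of the three-stage pipeline; stage 1/2 config names and checkpoint
# paths are assumed, only the stage 3 invocation matches the thread.
GPU=0
EXP=/path/to/exp_root_dir
PROMPT="a crocodile playing a drum set"

# Stage 1: static generation
python launch.py --config configs/fourdfy_stage_1.yaml --train --gpu $GPU \
    exp_root_dir=$EXP system.prompt_processor.prompt="$PROMPT"

# Stage 2: refine the static model, initializing from the stage 1 checkpoint
python launch.py --config configs/fourdfy_stage_2.yaml --train --gpu $GPU \
    exp_root_dir=$EXP system.prompt_processor.prompt="$PROMPT" \
    system.weights=/path/to/stage_1/ckpts/last.ckpt

# Stage 3: dynamic stage that adds motion, initializing from the stage 2 checkpoint
python launch.py --config configs/fourdfy_stage_3.yaml --train --gpu $GPU \
    exp_root_dir=$EXP system.prompt_processor.prompt="$PROMPT" \
    system.weights=/path/to/stage_2/ckpts/last.ckpt
```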

sherwinbahmani commented 10 months ago

So you are using the default config with an 80 GB GPU? Are you using this code base or the threestudio extension? And do you observe the same for other text prompts?

zimingzhong commented 10 months ago

> So you are using the default config with an 80 GB GPU? Are you using this code base or the threestudio extension? And do you observe the same for other text prompts?

Yes, an 80 GB GPU and this code base. I tried another case, "a dancing dog". It looks better, but the motion is also limited.

sherwinbahmani commented 10 months ago

That's weird. Can you try changing the following lines in the stage 3 config:

```yaml
guidance_single_view:
  lr: 0.0001
```

to

```yaml
guidance:
  lr: 0.0001
```

Also, did you try higher values for system.loss.lambda_sds_video as mentioned in the README? This will increase the motion a lot but might sacrifice quality.

zimingzhong commented 10 months ago

> That's weird. Can you try changing `guidance_single_view: lr: 0.0001` to `guidance: lr: 0.0001` in the stage 3 config? Also, did you try higher values for system.loss.lambda_sds_video as mentioned in the README? This will increase the motion a lot but might sacrifice quality.

Thank you for the updated information. I will try it again. I suspect the nice cases on the GitHub page and in the paper were not trained with this same config and seed. Could you share the configs and seeds for some of the cases on the GitHub page so that I can reproduce the same results? Stage 3 takes a long time, so it is hard to test many cases. Could you run some of the cases from the GitHub page with this code base and share the results and the corresponding seeds?

sherwinbahmani commented 10 months ago

The method does not vary much across seeds, so the results for the paper and project page were run with seed 0. I have attached the config used for the paper; I was able to reproduce the paper results with it before releasing the code. I can also have another look to make sure the current code base is fine. Another factor that seems to differ is anneal_density_blob_std_config, though I did not notice differences in the results when varying it. But you can try this original config anyway.

config.txt

zimingzhong commented 10 months ago

> The method does not vary much across seeds, so the results for the paper and project page were run with seed 0. I have attached the config used for the paper; I was able to reproduce the paper results with it before releasing the code. I can also have another look to make sure the current code base is fine. Another factor that seems to differ is anneal_density_blob_std_config, though I did not notice differences in the results when varying it. But you can try this original config anyway.
>
> config.txt

OK, thank you very much! I will try again with this config.
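For reference, a minimal sketch of what that rerun could look like, assuming the attached config.txt is saved locally as a stage 3 config; the config file name, checkpoint path, and GPU index below are placeholders.

```bash
# Hedged sketch: rerunning stage 3 with the attached paper config and seed 0,
# the seed used for the paper/project-page results. The config file name and
# checkpoint path are placeholders, not taken from this thread.
python launch.py --config configs/fourdfy_stage_3_paper.yaml --train --gpu 0 \
    seed=0 \
    system.prompt_processor.prompt="a crocodile playing a drum set" \
    system.weights=/path/to/stage_2/ckpts/last.ckpt
```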

sherwinbahmani commented 10 months ago

Generally, I still recommend increasing the lambda for sds_video to get more motion. For our results we mainly focused on quality and sacrificed motion. There is always a trade-off between motion and quality, and you can control it with that parameter.
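For illustration, a hedged sketch of overriding that weight on the command line, using the same key=value syntax as the commands above; the value 1.0 is purely illustrative and not a recommendation from this thread.

```bash
# Hedged sketch: raising the video SDS weight in stage 3 for more motion.
# The value 1.0 is illustrative; higher values give more motion but may reduce quality.
python launch.py --config configs/fourdfy_stage_3.yaml --train --gpu 0 \
    seed=0 \
    system.prompt_processor.prompt="a crocodile playing a drum set" \
    system.weights=/path/to/stage_2/ckpts/last.ckpt \
    system.loss.lambda_sds_video=1.0
```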