Open rishabhkabra opened 1 month ago
Yes, inference may take an hour or so depending on your hardware. It is running an optimization called "score distillation sampling." You can find details about it in the original paper: https://arxiv.org/abs/2209.14988
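For intuition, the score-distillation-sampling update can be sketched as repeatedly noising the current image, asking a diffusion model to predict the noise, and descending along the weighted residual. This is a minimal NumPy sketch under stated assumptions: the names (`sds_grad`, `alpha_bar`, `weight`) and the dummy `denoiser` are illustrative, not this repo's actual API.

```python
import numpy as np

def sds_grad(x, denoiser, alpha_bar, weight, rng):
    """One SDS gradient estimate: weight * (eps_hat - eps).

    Follows the update from the DreamFusion paper (arXiv:2209.14988),
    with the U-Net Jacobian term dropped as in the paper.
    """
    eps = rng.standard_normal(x.shape)                        # injected Gaussian noise
    x_t = np.sqrt(alpha_bar) * x + np.sqrt(1.0 - alpha_bar) * eps  # noised sample
    eps_hat = denoiser(x_t)                                   # model's noise prediction
    return weight * (eps_hat - eps)                           # gradient w.r.t. x

# Toy usage: the "inference" is itself an optimization loop, which is why
# launch.py runs in a train-style mode and takes a long time.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
dummy_denoiser = lambda x_t: np.zeros_like(x_t)  # stand-in for a real diffusion model
for _ in range(10):
    x -= 0.1 * sds_grad(x, dummy_denoiser, alpha_bar=0.5, weight=1.0, rng=rng)
```

In the real pipeline the denoiser is a pretrained text-conditioned diffusion model and `x` is a rendering of the 3D representation, so each step requires a full forward pass of the diffusion model, which is where the runtime goes.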
I assume launch_inference.sh is meant to run inference on the motorcycle image, but it has been running for over 30 minutes with no end in sight. I also noticed it calls launch.py in --train mode. Is this intended? Here's the log: