Hi, @amirhertz! Thank you for sharing your cool work!
I have a question about the learning rate in your null-text inversion. According to the notebook, the learning rate is set as below, whereas the paper states a learning rate of 0.01:
```python
optimizer = Adam([uncond_embeddings], lr=1e-2 * (1. - i / 100.))
```
where $i$ is the loop index in `for i in range(NUM_DDIM_STEPS):`.
If `NUM_DDIM_STEPS` is set above 100, the learning rate reaches zero at $i = 100$ and becomes negative beyond that.
My question is: can we use `lr=1e-2` instead of `1e-2 * (1. - i / 100.)`?
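For what it's worth, below is a minimal sketch of the alternative I had in mind: scaling the linear decay by `NUM_DDIM_STEPS` instead of the hard-coded 100, so the learning rate stays non-negative for any number of steps. The `NUM_DDIM_STEPS` value and the placeholder `uncond_embeddings` tensor are just for illustration here; I don't know if this matches the schedule you intended.

```python
import torch
from torch.optim import Adam

NUM_DDIM_STEPS = 150  # hypothetical value above 100

# Placeholder tensor; in the notebook this comes from the text encoder.
uncond_embeddings = torch.zeros(1, 77, 768, requires_grad=True)

for i in range(NUM_DDIM_STEPS):
    # Decay from 1e-2 down to 0 over the full run, instead of over a
    # fixed 100 steps, so lr never goes negative.
    lr = 1e-2 * (1. - i / NUM_DDIM_STEPS)
    optimizer = Adam([uncond_embeddings], lr=lr)
    # ... per-step null-text optimization would go here ...
```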