ZhexinLiang / CLIP-LIT

[ICCV 2023, Oral] Iterative Prompt Learning for Unsupervised Backlit Image Enhancement
https://zhexinliang.github.io/CLIP_LIT_page/

about the init_prompt_pair.pth #4

Closed — diadestiny closed this issue 1 year ago

diadestiny commented 1 year ago

Thanks for your work! I would like to know how `init_prompt_pair.pth` is generated, especially the `embedding_prompt` key. Is there any source code available for producing this .pth file?

jiaqixuac commented 1 year ago

Thank you for your nice work!

I also want to know if the training code for the initial prompt has been released.

ZhexinLiang commented 1 year ago

Hi, thanks for your interest.

You can use the following command to train from scratch, i.e., without `init_prompt_pair.pth` and `init_enhancement_model.pth`:

```
python train.py --num_reconstruction_iters 1000 --num_clip_pretrained_iters 8000 --load_pretrain False --load_pretrain_prompt False
```

With this command, the first 8000 iterations learn the initial prompt pair, and the following 1000 iterations learn the initial enhancement network. After this initialization, the fine-tuning stage starts.
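As a rough sketch of what the prompt-initialization iterations do: a learnable positive/negative prompt pair is optimized so that similarity in CLIP's embedding space separates well-lit from backlit images. The snippet below is a minimal illustration in plain PyTorch, not the repository's actual code; the `PromptPair` class, the 512-dim embedding width, and the random stand-in "image features" (which in real training would come from a frozen CLIP image encoder) are all assumptions for the sake of the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPair(nn.Module):
    # A learnable positive/negative embedding pair (hypothetical stand-in for
    # the 'embedding_prompt' learned during the first initialization stage).
    def __init__(self, embed_dim=512):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(2, embed_dim) * 0.02)

    def forward(self, image_features):
        # Cosine similarity between image features and the two prompts,
        # used as logits for a binary well-lit vs. backlit classification.
        img = F.normalize(image_features, dim=-1)
        txt = F.normalize(self.prompts, dim=-1)
        return img @ txt.t()

# Toy training loop with random features as a placeholder for frozen CLIP
# image features; labels: 0 = well-lit, 1 = backlit.
torch.manual_seed(0)
feats = torch.randn(64, 512)
labels = torch.randint(0, 2, (64,))

model = PromptPair()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(model(feats), labels)
    loss.backward()
    opt.step()

acc = (model(feats).argmax(dim=-1) == labels).float().mean().item()
```

Only the prompt pair is updated here; in the actual pipeline the enhancement network is then initialized in a separate stage before joint fine-tuning begins.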