ZhexinLiang / CLIP-LIT

[ICCV 2023, Oral] Iterative Prompt Learning for Unsupervised Backlit Image Enhancement
https://zhexinliang.github.io/CLIP_LIT_page/
269 stars · 23 forks

About the prompt and model initialization #5

Closed jiaqixuac closed 1 year ago

jiaqixuac commented 1 year ago

Hi, thank you for your nice work. I would like to know whether it is possible to skip the provided initial prompt and enhancement model and instead train from scratch using:

```shell
python train.py -b ./train_data/BAID_380/resize_input/ -r ./train_data/DIV2K_384/ \
    --load_pretrain False --load_pretrain_prompt False \
    --num_reconstruction_iters 1000 --num_clip_pretrained_iters 8000
```
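As an aside, flags like `--load_pretrain False` only behave as intended if the script explicitly converts the string to a boolean; a bare `type=bool` in argparse would treat any non-empty string, including `"False"`, as true. A minimal sketch of the usual pattern (the `str2bool` helper here is hypothetical, not taken from the CLIP-LIT repo):

```python
import argparse

def str2bool(v: str) -> bool:
    """Interpret common true/false spellings passed on the command line."""
    if v.lower() in ("yes", "true", "t", "1"):
        return True
    if v.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError(f"boolean value expected, got {v!r}")

parser = argparse.ArgumentParser()
# Hypothetical flag definitions mirroring the command above.
parser.add_argument("--load_pretrain", type=str2bool, default=True)
parser.add_argument("--load_pretrain_prompt", type=str2bool, default=True)

args = parser.parse_args(["--load_pretrain", "False",
                          "--load_pretrain_prompt", "False"])
print(args.load_pretrain, args.load_pretrain_prompt)  # False False
```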
ZhexinLiang commented 1 year ago

Hi, thanks for your interest.

Yes, you can use this command to train from scratch. :)

By the way, I have updated train.py. The old version may run into memory issues when training from scratch, so please download the latest version. Sorry for the inconvenience.