Algolzw / image-restoration-sde

Image Restoration with Mean-Reverting Stochastic Differential Equations, ICML 2023. Winning solution of the NTIRE 2023 Image Shadow Removal Challenge.
https://algolzw.github.io/ir-sde/index.html
MIT License

TypeError: expected str, bytes or os.PathLike object, not bool #83

wjkbigface commented 6 months ago

```
Traceback (most recent call last):
  File "/root/autodl-fs/image-restoration-sde-main/codes/config/deblurring/train.py", line 319, in <module>
    main()
  File "/root/autodl-fs/image-restoration-sde-main/codes/config/deblurring/train.py", line 52, in main
    opt = option.parse(args.opt, is_train=True)
  File "/root/autodl-fs/image-restoration-sde-main/codes/config/deblurring/options.py", line 62, in parse
    opt["path"][key] = osp.expanduser(path)
  File "/root/miniconda3/envs/irsde/lib/python3.10/posixpath.py", line 231, in expanduser
    path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not bool
```
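For context, the `options.py` line shown in the traceback expands every value under the config's `path:` section with `osp.expanduser`. A minimal sketch of why a YAML boolean breaks this (the surrounding loop is an assumption; only the `osp.expanduser(path)` call is verbatim from the traceback):

```python
import os.path as osp

# YAML `resume_state: true` is parsed as a Python bool, not a path string.
# (Loop body assumed; only the expanduser call is verbatim from the traceback.)
opt = {"path": {"pretrain_model_G": None, "resume_state": True}}

for key, path in opt["path"].items():
    if path:  # None (YAML `~`) is falsy and skipped, but True falls through
        opt["path"][key] = osp.expanduser(path)
        # TypeError: expected str, bytes or os.PathLike object, not bool
```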

Algolzw commented 6 months ago

Hi, it seems to be a model path problem. Could you share the config file you used for training?

wjkbigface commented 6 months ago

```yaml
#### general settings
name: ir-sde
use_tb_logger: true
model: denoising
distortion: deblur
gpu_ids: [0]

sde:
  max_sigma: 10
  T: 100
  schedule: cosine # linear, cosine
  eps: 0.005

degradation: # for some synthetic dataset that only have GTs
  # for denoising
  sigma: 25
  noise_type: G # Gaussian noise: G

  # for super-resolution
  scale: 4

#### datasets
datasets:
  train:
    name: Train_Dataset
    mode: LQGT
    dataroot_GT: /root/autodl-fs/image-restoration-sde-main/datasets/blur/trainH/GT
    dataroot_LQ: /root/autodl-fs/image-restoration-sde-main/datasets/blur/trainH/LQ

    use_shuffle: true
    n_workers: 4  # per GPU
    batch_size: 4
    GT_size: 128
    LR_size: 128
    use_flip: true
    use_rot: true
    color: RGB

  val:
    name: Val_Dataset
    mode: LQGT
    dataroot_GT: /root/autodl-fs/image-restoration-sde-main/datasets/blur/val/GT
    dataroot_LQ: /root/autodl-fs/image-restoration-sde-main/datasets/blur/val/LQ

#### network structures
network_G:
  which_model_G: ConditionalUNet
  setting:
    in_nc: 3
    out_nc: 3
    nf: 64
    depth: 4

#### path
path:
  pretrain_model_G: ~
  strict_load: true
  resume_state: true

#### training settings: learning rate scheme, loss
train:
  optimizer: Adam # Adam, AdamW, Lion
  lr_G: !!float 1e-4
  lr_scheme: MultiStepLR
  beta1: 0.9
  beta2: 0.99
  niter: 700000
  warmup_iter: -1  # no warm up
  lr_steps: [200000, 400000, 600000]
  lr_gamma: 0.5
  eta_min: !!float 1e-7

  # criterion
  is_weighted: False
  loss_type: l1
  weight: 1.0

  manual_seed: 0
  val_freq: !!float 5e3

#### logger
logger:
  print_freq: 100
  save_checkpoint_freq: !!float 5e3
```

Algolzw commented 6 months ago

Hi, if you want to continue training from an existing checkpoint, please set resume_state to the checkpoint path (rather than true). Otherwise, set resume_state to ~ (meaning none) to train from scratch.
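Concretely, the `path:` section would become one of the following (the `.state` path is an illustrative placeholder, not a path from this issue):

```yaml
# Resume training from an existing checkpoint:
path:
  pretrain_model_G: ~
  strict_load: true
  resume_state: /path/to/your/training_state/xxx.state  # placeholder

# Or train from scratch:
path:
  pretrain_model_G: ~
  strict_load: true
  resume_state: ~
```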

wjkbigface commented 6 months ago

Thanks, I'll give it a try.