zyxElsa / InST

Official implementation of the paper “Inversion-Based Style Transfer with Diffusion Models” (CVPR 2023)
Apache License 2.0

training time for "main.py" #26

Open fikry102 opened 1 year ago

fikry102 commented 1 year ago

The paper says "The training process takes about 20 minutes each image on one NVIDIA GeForce RTX3090 with a batch size of 1." I used two RTX3090s, but training has already been running for three hours and does not seem to stop.

In "v1-finetune.yaml", which parameter specifies the number of epochs for the training process?

zhangquanwei962 commented 1 year ago

> The paper says "The training process takes about 20 minutes each image on one NVIDIA GeForce RTX3090 with a batch size of 1." I used two RTX3090s, but training has already been running for three hours and does not seem to stop.
>
> In "v1-finetune.yaml", which parameter specifies the number of epochs for the training process?

As for me, with the default yaml it won't stop unless you press Ctrl+C.
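
For reference, a minimal sketch of where a step limit would normally live in a textual-inversion-style config, which this repo's "v1-finetune.yaml" appears to follow; the exact keys and the value shown are assumptions, not InST's shipped defaults:

```yaml
# Hypothetical excerpt of v1-finetune.yaml (layout assumed from the
# textual-inversion style configs; the value 6100 is only a placeholder).
lightning:
  trainer:
    benchmark: True
    max_steps: 6100   # stop training after this many optimizer steps
```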

code-gfBai commented 1 year ago

In main.py, change `trainer_kwargs["max_steps"] = opt.max_steps` to `trainer_kwargs["max_steps"] = trainer_opt.max_steps`. Then the `max_steps` value from the v1-finetune.yaml file will be used.
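
A minimal sketch of that one-line edit, assuming main.py wires things up like the latent-diffusion training script (i.e. `trainer_opt` holds the `lightning.trainer` options parsed from the yaml and `opt` holds the command-line arguments); the surrounding context is an assumption, only the changed line comes from the comment above:

```python
# In the part of main.py that assembles the PyTorch Lightning Trainer kwargs.

# before: the step limit comes from the CLI namespace, so the yaml value is ignored
# trainer_kwargs["max_steps"] = opt.max_steps

# after: the step limit is read from v1-finetune.yaml, so training stops on its own
trainer_kwargs["max_steps"] = trainer_opt.max_steps
```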