boschresearch / ISSA

Official implementation of "Intra-Source Style Augmentation for Improved Domain Generalization" (WACV 2023 & IJCV)
GNU Affero General Public License v3.0

Model inference #1

Closed xiaohui0225 closed 1 year ago

xiaohui0225 commented 1 year ago

May I ask: can the weights under `weights` be used directly for inference? If so, what is the inference command? Also, the config file contains many file paths; what should each of them be set to? Many thanks.

YumengLi007 commented 1 year ago

Hi @xiaohui0225 ,

thanks for your interest in our work. Unfortunately, the pretrained model cannot be released. However, we do provide the training script train_encoder.py and the configs/mne_training.yml configuration file.

For the meaning of each path in the config file and further training details, please check how-to.pdf. Inference is relatively simple; you may refer to the logging code here.

Please feel free to reach out if you have any further specific questions!

xiaohui0225 commented 1 year ago

Thank you very much for your answer. I'm a beginner, and I'd like to ask a few follow-up questions:
1. What are data_fake and pkl_dir? I didn't fully understand your explanation. How should I obtain them, or how are they generated?
2. What is the training command for ISSA? And once training is done, what is the inference command?
3. The repo contains ISSA/tree/main/training/lpips/weights/v0.0/ with three .pth files. What are these weights used for?

YumengLi007 commented 1 year ago
  1. This repo only provides the training code for the Masked Noise Encoder. For training the GAN generator, please refer to stylegan3. Given a pretrained GAN generator, we generate some fake images, e.g., 50K, depending on the size of your dataset. You sample Z to generate images, and correspondingly obtain the style vector w, i.e., the output of the generator's mapping network. We store each image together with its style vector w; data_fake refers to the path where you store these.
    pkl_dir is the path where you store the pretrained GAN generator, which you could train yourself using stylegan3 or take from their provided checkpoints, depending on the dataset you would like to use.

  2. You can simply run python train_encoder.py and configure the paths and parameters in configs/mne_training.yml.
    For inference, we don't have a dedicated script in this repo, but you could check the logging code here, which shows how the encoder and generator are used for image generation.

  3. These weights are used for calculating the LPIPS loss.
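The data-preparation step described in point 1 can be sketched roughly as below. Everything here is a placeholder for illustration: `mapping_net`, `synthesis_net`, the 512-dim latent, and the `.npz` file layout are assumptions, not the repo's actual code (in practice, the real mapping and synthesis networks come from the pretrained StyleGAN generator):

```python
import os
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-ins for the StyleGAN mapping and synthesis networks;
# in practice both come from the pretrained generator checkpoint.
def mapping_net(z):
    # latent z (512,) -> style vector w (512,); the real mapping net is an MLP
    return np.tanh(z)

def synthesis_net(w):
    # style w -> fake image (H, W, 3); placeholder output
    return rng.random((256, 512, 3)).astype(np.float32)

out_dir = "data_fake"  # the directory the data_fake config entry points to
os.makedirs(out_dir, exist_ok=True)

for i in range(4):  # e.g. 50K samples in practice, per the comment above
    z = rng.standard_normal(512)   # sample Z
    w = mapping_net(z)             # corresponding style vector w
    img = synthesis_net(w)         # fake image
    # store the image together with its style vector
    np.savez(os.path.join(out_dir, f"sample_{i:06d}.npz"), image=img, w=w)

sample = np.load(os.path.join(out_dir, "sample_000000.npz"))
print(sample["w"].shape, sample["image"].shape)  # -> (512,) (256, 512, 3)
```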

xiaohui0225 commented 1 year ago

Thanks. So, as you said, train_encoder.py is used to train the model. How do I run inference after training, and what is the command?

YumengLi007 commented 1 year ago
  1. First, load the generator & encoder, similar to here.
  2. Then use the encoder and generator as done here to generate images. At a high level, you use the encoder to produce the style and noise, then pass them to the generator. There is no separate inference script in this repo; you may need to combine the code snippets mentioned above.
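At a minimal level, the encode-then-generate loop described in the two steps above could look like the sketch below. The `Encoder` and `Generator` classes are dummy stand-ins, not the repo's actual networks (those are loaded from the trained checkpoint and the StyleGAN pickle); only the data flow, image to style plus noise to generator, reflects the description above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dummy stand-ins for the loaded networks; in the repo the generator comes
# from a StyleGAN pickle and the encoder from the trained Masked Noise
# Encoder checkpoint, loaded as in the linked snippet.
class Encoder:
    def __call__(self, img):
        # predict a style vector w and a spatial noise map from an image
        w = img.mean(axis=(0, 1)).repeat(512 // 3 + 1)[:512]
        noise = rng.standard_normal((1, 64, 64))
        return w, noise

class Generator:
    def __call__(self, w, noise):
        # synthesize an image from style + noise (placeholder output)
        return np.clip(rng.random((256, 512, 3)) + w.mean(), 0.0, 1.0)

encoder, generator = Encoder(), Generator()

real_img = rng.random((256, 512, 3))  # an input photo
w, noise = encoder(real_img)          # step 2a: encode style + noise
recon = generator(w, noise)           # step 2b: pass them to the generator
print(recon.shape)  # -> (256, 512, 3)
```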
YumengLi007 commented 1 year ago

Closed for now. Please feel free to reopen if there is anything still unclear.