WendongZh / SPL

[IJCAI'21] Code for Context-Aware Image Inpainting with Learned Semantic Priors

I am sorry to bother you, but I have some problems. #2

Open song201216 opened 3 years ago

song201216 commented 3 years ago

When I am training, the module `inplace_abn` is required. If you can help me solve this, I will be very grateful. Thanks!

```
D:\Anaconda3\envs\torch18\python.exe E:/code/SPL-main/main.py
Traceback (most recent call last):
  File "E:/code/SPL-main/main.py", line 20, in <module>
    from models_inpaint import InpaintingModel
  File "E:\code\SPL-main\models_inpaint.py", line 9, in <module>
    from src.models import create_model
  File "E:\code\SPL-main\src\models\__init__.py", line 1, in <module>
    from .utils import create_model
  File "E:\code\SPL-main\src\models\utils\__init__.py", line 1, in <module>
    from .factory import create_model
  File "E:\code\SPL-main\src\models\utils\factory.py", line 5, in <module>
    from ..tresnet import TResnetM, TResnetL, TResnetXL
  File "E:\code\SPL-main\src\models\tresnet\__init__.py", line 1, in <module>
    from .tresnet import TResnetM, TResnetL, TResnetXL
  File "E:\code\SPL-main\src\models\tresnet\tresnet.py", line 8, in <module>
    from inplace_abn import InPlaceABN
ModuleNotFoundError: No module named 'inplace_abn'
```

WendongZh commented 3 years ago

Thanks for your interest.

Yes, the inplace_abn module is needed when training our model. You need to install it from here.

In our model, we use the feature maps from a pretrained TResnet to provide semantic supervision, so we first need to build a TResnet model and load the pretrained weights. The inplace_abn module is used inside TResnet, which is why it is also needed when training our model. If you just want to evaluate our model, you can comment out the line `from src.models import create_model`.
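For example, a minimal sketch of such a workaround (not part of the official code, and assuming `create_model` is only needed to build the TResnet supervision network used during training) is to guard the import so evaluation still runs without inplace_abn:

```python
# Sketch only: guard the import so evaluation can run without inplace_abn.
# Assumes create_model is only needed to build the TResnet network that
# provides semantic supervision during training.
try:
    from src.models import create_model   # pulls in inplace_abn via TResnet
except ImportError:
    create_model = None                    # evaluation-only: supervision net unavailable
```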

It seems I made some ambiguous statements in the README. Sorry for that.

song201216 commented 3 years ago

I'm very glad you replied. I didn't read the README carefully before; please forgive my carelessness. I evaluated your model following your instructions and the process ran smoothly, but no test results were produced. I hope you can give me some guidance. Thank you!

WendongZh commented 3 years ago

If you use the command I provided in the README:

```bash
CUDA_VISIBLE_DEVICES=0 python eval_final.py --bs 50 --gpus 1 --dataset paris \
    --img_flist your/test/image/flist/ --mask_flist your/flist/of/masks \
    --mask_index your/npy/file/to/form/img-mask/pairs \
    --model checkpoints/x_launcherRN_bs_4_epoch_best.pt --save --save_path ./test_results
```

then the generated images will be saved in the "test_results" folder under the current directory. Can you find this folder?
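As a side note on `--mask_index`: it points to a `.npy` file that pairs test images with masks. If the file is simply an integer array with one mask index per test image (please check the dataloader for the exact format it expects), it could be generated roughly like this:

```python
import numpy as np

# Hypothetical sketch of an img-mask pairing file for --mask_index.
# Assumption (check against the dataloader): entry i is the index into the
# mask flist of the mask used for the i-th test image.
num_test_images = 100    # number of lines in your test image flist
num_masks = 12000        # number of lines in your mask flist

rng = np.random.default_rng(seed=0)                        # fixed seed for reproducible pairs
mask_index = rng.integers(0, num_masks, size=num_test_images)
np.save("img_mask_pairs.npy", mask_index)
```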

song201216 commented 3 years ago

Hi, I used this command and found the outputs in the test_results folder. After the test, I get cropped 256 × 256 test images in this folder.

song201216 commented 3 years ago

I have figured out my previous question. I would like to know when you plan to release single-card training code. Thanks!

jialiang66 commented 3 years ago

> --mask_index your/npy/file/to/form/img-mask/pairs

What exactly should be provided here? I don't quite understand. How did you solve it?

WendongZh commented 3 years ago

> I have figured out my previous question. I would like to know when you plan to release single-card training code. Thanks!

I may release single-card training code in the future, perhaps as another new project.

In the meantime, you can try to convert our code into a single-card training version yourself. The modifications involve the optimization step, the dataloader definition, and the initialization of multi-card training; a rough sketch of the typical changes is below.
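For illustration only, here is a minimal single-card training skeleton showing the usual changes when dropping distributed training (the model and data below are dummies standing in for the repo's own classes):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Minimal single-card skeleton (illustrative only; not the repo's actual API).
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# 1) No torch.distributed.init_process_group() and no DistributedDataParallel wrapper.
model = nn.Conv2d(4, 3, kernel_size=3, padding=1).to(device)   # stand-in for the inpainting network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# 2) Plain DataLoader with shuffle=True instead of a DistributedSampler.
images = torch.randn(16, 3, 256, 256)
masks = torch.randint(0, 2, (16, 1, 256, 256)).float()
loader = DataLoader(TensorDataset(images, masks), batch_size=4, shuffle=True)

# 3) Ordinary training loop on a single GPU.
for img, mask in loader:
    img, mask = img.to(device), mask.to(device)
    out = model(torch.cat([img * (1 - mask), mask], dim=1))    # masked image + mask channel
    loss = nn.functional.l1_loss(out, img)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```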