Open jmoberreuter opened 5 years ago
For this, you will need to modify the dataset class code (CocoDataset, for example) so that it can load only the layouts.
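A minimal sketch of that idea, assuming a hypothetical `LabelOnlyDataset` class (not part of the repo) that returns placeholders where CocoDataset would load the ground-truth image:

```python
# Hypothetical sketch: a dataset that yields only the label layout, with
# None placeholders where CocoDataset would load the validation image.
class LabelOnlyDataset:
    def __init__(self, label_paths):
        self.label_paths = label_paths

    def __len__(self):
        return len(self.label_paths)

    def __getitem__(self, i):
        # In practice you would load and transform the label map here;
        # the point is that 'image' no longer requires a real file.
        return {'label': self.label_paths[i],
                'instance': None,
                'image': None,  # no validation image required
                'path': self.label_paths[i]}
```

The returned dictionary keeps the same four keys the rest of the pipeline expects, so downstream code does not need to change.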
The image generation process depends entirely on the label layout, not on the validation image, so the output will not change even if you change the image. The ground-truth RGB image is used only for comparison.
@jmoberreuter have you solved the issue?
@jmoberreuter @MrWwei Have you resolved the issue? @taesungp Could you please explain the changes to be made in a bit more detail?
I found the solution: the dataloader uses the val_img path only as a reference. data_i in the dataloader is a dictionary with the following keys: dict_keys(['label', 'instance', 'image', 'path']). data_i['path'] contains the list of validation image names, but it is not used by the evaluation code in inference mode.
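As a self-contained illustration of what that dictionary looks like (numpy arrays standing in for the torch tensors here, and 150 classes as in ADE20K, both purely for the example):

```python
import numpy as np

# Stand-in for the tensor produced by transform_label(label) * 255.0;
# numpy is used in place of torch purely for illustration.
label_nc = 150  # ADE20K has 150 classes; 255 marks "unknown" in the annotation
label_tensor = np.array([[0., 12., 255.],
                         [255., 3., 149.]])

# Remap the 255 "unknown" value to the extra label index, as test.py does.
label_tensor[label_tensor == 255] = label_nc

# Build the dictionary the model expects; only 'label' matters at inference time.
data_i = {
    'label': label_tensor[None, None, ...],  # add batch and channel dims
    'path': [None],
    'image': [None],      # ground-truth image is unused in inference mode
    'instance': [None],
}
```

The placeholders for 'image', 'instance', and 'path' are enough because inference mode never reads them.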
I modified the test code and it's generating and saving the images.
Please find below a very simple version of test.py:
import os
import cv2
import numpy as np
from PIL import Image
from util import util
from options.test_options import TestOptions
from models.pix2pix_model import Pix2PixModel
from data.base_dataset import get_params, get_transform
# Load the trained model
opt = TestOptions().parse()
model = Pix2PixModel(opt)
model.eval()
# Load the semantic label map
label_path = 'datasets/test_sketch/annotations/validation/test_val_00000002.png'
label = Image.open(label_path)
params = get_params(opt, label.size)
transform_label = get_transform(opt, params, method=Image.NEAREST, normalize=False)
label_tensor = transform_label(label) * 255.0
label_tensor[label_tensor == 255] = opt.label_nc
print("-- Label tensor :", np.shape(label_tensor))
# Build the data dictionary the model expects
data_i = {}
data_i['label'] = label_tensor.unsqueeze(0)
data_i['path'] = [None]
data_i['image'] = [None]
data_i['instance'] = [None]
# Run inference
generated = model(data_i, mode='inference')

# Output path for the generated images (any writable location works)
generated_image_path = 'results/test_sketch/generated.png'
os.makedirs(os.path.dirname(generated_image_path), exist_ok=True)

for b in range(generated.shape[0]):
    generated_image = util.tensor2im(generated[b])
    generated_image_path_ = generated_image_path[:-4] + str(b) + ".png"
    print('---- generated image ', generated_image_path_, np.shape(generated_image))
    # util.tensor2im returns RGB; convert to BGR for cv2.imwrite
    cv2.imwrite(generated_image_path_, cv2.cvtColor(generated_image, cv2.COLOR_RGB2BGR))
To run this code:
python test.py --name ade20k_pretrained --dataset_mode ade20k --dataroot datasets/test_sketch/ --gpu_ids -1 --results_dir results/test_sketch
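If you have a whole directory of label maps, the single-image script above can be wrapped in a small loop. A sketch, where `generate_from_labels` and its `run_inference` callback are hypothetical names (the callback would contain the model call and cv2.imwrite from the script above):

```python
import glob
import os

def generate_from_labels(label_dir, out_dir, run_inference):
    # run_inference(label_path, out_path) is a stand-in for the
    # model(data_i, mode='inference') + save step shown above.
    os.makedirs(out_dir, exist_ok=True)
    out_paths = []
    for label_path in sorted(glob.glob(os.path.join(label_dir, '*.png'))):
        stem = os.path.splitext(os.path.basename(label_path))[0]
        out_path = os.path.join(out_dir, stem + '_gen.png')
        run_inference(label_path, out_path)
        out_paths.append(out_path)
    return out_paths
```

Sorting the glob results keeps the output order deterministic across runs.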
Hi, great job on this paper! One question: if I am not retraining and am just trying to generate images from a sketch, the validation images should not be necessary. However, if I delete the images, there is an error. On the other hand, if I replace the images with something completely different, the output does not change. Am I missing something, or could the requirement to have validation images be dropped? Thanks a bunch!