This file serves two purposes: 1) during the training stage, it is used to build image-mask pairs for evaluating the model on the validation set (not the test set); you can skip this step by removing the parameter --with_test. 2) During the testing stage, it is used to build image-mask pairs for evaluating the model on the test set.
Take the testing stage of the Paris dataset as an example: the test set of Paris StreetView contains 100 images, so we first randomly select 100 mask images from the whole test mask set (12,000 masks) as follows:
import numpy as np

# randomly pick 100 unique mask indices for Paris (replace=False avoids duplicates)
index = np.random.choice(12000, 100, replace=False)
np.save('index_paris.npy', index)
Then, in the dataset_test.py file, we load index_paris.npy and select the masks from the whole mask set via:
self.mask_selected = np.load(test_mask_index)
if not self.training:
    self.mask_data = self.mask_data[self.mask_selected]
Simply put, it is mainly used for building image-mask pairs, especially during the testing stage, since you need to keep the image-mask pairs fixed across different methods.
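For illustration, here is a minimal sketch of that fixed-pairing idea, assuming a dataset whose __getitem__ pairs the i-th image with the i-th selected mask (the class and attribute names here are hypothetical, not the repository's actual dataset_test.py):

import numpy as np

# Hypothetical sketch: with a saved index file, image i is always paired
# with the same mask, so every method is evaluated on identical pairs.
class FixedPairTestSet:
    def __init__(self, images, mask_data, test_mask_index):
        self.images = images                            # list of test images
        mask_selected = np.load(test_mask_index)        # e.g. index_paris.npy
        self.mask_data = mask_data[mask_selected]       # 100 fixed masks

    def __getitem__(self, i):
        return self.images[i], self.mask_data[i]        # deterministic pair

    def __len__(self):
        return len(self.images)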
For evaluating the model during the training stage, you first need to set aside some images as a validation set; after each epoch, the model is validated on this set. This process can be removed by dropping the parameter --with_test, in which case our code simply saves the latest model. In the current version of the code, however, you still need to provide the index file; the code just ignores it when --with_test is not passed. I will fix this in a future version.
Hope it helps!
Hello, thank you very much for your answer. I have completed the test process following your explanation. I have another question: in the test phase, can I output the result of the semantic generator, i.e., the semantic prediction, rather than the final inpainted result?
Our provided code cannot directly visualize the priors shown in our paper; the process is a little involved.
As mentioned in our paper, we first extract the feature map "feature_recon" at line 160 of the networks.py file (the feature map "layout" also works). The size of the feature map is (64, 64, 152). We then reshape it to (64x64, 152) and cluster it with the k-means algorithm in OpenCV.
The corresponding OpenCV API is cv2.kmeans; its underlying C++ signature is:

double cv::kmeans(
    InputArray data,
    int K,
    InputOutputArray bestLabels,
    TermCriteria criteria,
    int attempts,
    int flags,
    OutputArray centers = noArray()
)

In Python it is called as: ret, label, center = cv2.kmeans(data, K, bestLabels, criteria, attempts, flags).
The code may look like:

# define the termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
# run k-means; "data" must be a float32 array of shape (64*64, 152)
ret, label, center = cv2.kmeans(data, 3, None, criteria, 2, cv2.KMEANS_RANDOM_CENTERS)
You need to choose the number of clusters K and the number of attempts. We then take the output "label", reshape it back to 64x64, and draw colors on it with the cv2.applyColorMap command. Finally, you can obtain the images shown in our paper.
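To make the steps concrete, here is a minimal sketch that puts them together; the function name visualize_prior, the scaling of the labels to [0, 255], and the choice of COLORMAP_JET are our assumptions, not fixed by the paper:

import cv2
import numpy as np

def visualize_prior(feature_recon, K=3, attempts=2):
    # feature_recon: (64, 64, 152) feature map as a NumPy array
    h, w, c = feature_recon.shape
    data = feature_recon.reshape(-1, c).astype(np.float32)  # cv2.kmeans needs float32
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    ret, label, center = cv2.kmeans(data, K, None, criteria, attempts,
                                    cv2.KMEANS_RANDOM_CENTERS)
    # spread the K cluster ids over [0, 255], then colorize
    label_img = (label.reshape(h, w) * (255 // max(K - 1, 1))).astype(np.uint8)
    return cv2.applyColorMap(label_img, cv2.COLORMAP_JET)

# usage, e.g. if the feature comes from a torch tensor of shape (1, 152, 64, 64):
# feat = feature_recon[0].permute(1, 2, 0).detach().cpu().numpy()
# cv2.imwrite('prior_vis.png', visualize_prior(feat))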
How did you do it? Is there a separate program that generates the color semantic map, or do you directly use cv2.applyColorMap to convert the inpainted results?
Yes, I wrote extra code for drawing the image results; the main part is provided above.
Simply put, for the images you want to show, you first need to extract the feature map "feature_recon" at line 160 in networks.py and use this feature map as the input "data" in:
# define the termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
# run k-means
ret, label, center = cv2.kmeans(data, 3, None, criteria, 2, cv2.KMEANS_RANDOM_CENTERS)
Finally, you need to take the output "label" and draw colors on it with the cv2.applyColorMap command.
You cannot obtain the color semantic map by directly converting the inpainted result.
It is a little bit involved, since you need to choose the number of clusters K and the number of attempts in the cv2.kmeans command. I may provide the code for this in the future.
What exactly should be added here? I don't quite understand.
I have provided the function 'Kmeans_map(x)' for drawing the visualization results in the main.py of SPN.
Thanks for your interest.
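A hypothetical call site (the actual signature of Kmeans_map may differ; check main.py of SPN for the real interface):

# hypothetical usage; see main.py of SPN for the actual interface
prior_vis = Kmeans_map(feature_recon)   # (64, 64, 152) features -> color map
cv2.imwrite('prior_vis.png', prior_vis)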
Hello, during training I removed --with_test as well as the validation part --test_mask_flist your/flist/of/masks --test_mask_index your/npy/file/to/form/img-mask/pairs. Why do I still get this error?