cqylunlun / GLASS

[ECCV 2024] Official Implementation and Dataset Release for <A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization>
MIT License

About running run-wfdd.sh #3

Closed Kingxudong closed 3 months ago

Kingxudong commented 3 months ago

I downloaded the data and ran your code, but the following error occurred:

    0%| | 0/640 [00:59<?, ?epoch/s]
    Traceback (most recent call last):
      File "F:\SICC\GLASS-main\GLASS-main\main.py", line 351, in <module>
        main()
      File "D:\annaconda\lib\site-packages\click\core.py", line 1157, in __call__
        return self.main(*args, **kwargs)
      File "D:\annaconda\lib\site-packages\click\core.py", line 1078, in main
        rv = self.invoke(ctx)
      File "D:\annaconda\lib\site-packages\click\core.py", line 1720, in invoke
        return _process_result(rv)
      File "D:\annaconda\lib\site-packages\click\core.py", line 1657, in _process_result
        value = ctx.invoke(self._result_callback, value, **ctx.params)
      File "D:\annaconda\lib\site-packages\click\core.py", line 783, in invoke
        return __callback(*args, **kwargs)
      File "F:\SICC\GLASS-main\GLASS-main\main.py", line 299, in run
        flag = GLASS.trainer(dataloaders["training"], dataloaders["testing"], dataset_name)
      File "F:\SICC\GLASS-main\GLASS-main\glass.py", line 255, in trainer
        for i, data in enumerate(training_data):
      File "D:\annaconda\lib\site-packages\torch\utils\data\dataloader.py", line 631, in __next__
        data = self._next_data()
      File "D:\annaconda\lib\site-packages\torch\utils\data\dataloader.py", line 1346, in _next_data
        return self._process_data(data)
      File "D:\annaconda\lib\site-packages\torch\utils\data\dataloader.py", line 1372, in _process_data
        data.reraise()
      File "D:\annaconda\lib\site-packages\torch\_utils.py", line 722, in reraise
        raise exception
    FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "D:\annaconda\lib\site-packages\torch\utils\data\_utils\worker.py", line 308, in _worker_loop
        data = fetcher.fetch(index)
      File "D:\annaconda\lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "D:\annaconda\lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "F:\SICC\GLASS-main\GLASS-main\datasets\mvtec.py", line 192, in __getitem__
        mask_fg = PIL.Image.open(fgmask_path)
      File "D:\annaconda\lib\site-packages\PIL\Image.py", line 3236, in open
        fp = builtins.open(filename, "rb")
    FileNotFoundError: [Errno 2] No such file or directory: 'F:/SICC/GLASS-main/GLASS-main/datasets/WFDD\fg_mask/grey_cloth/129.png'

cqylunlun commented 3 months ago

According to the error message, you need to download the Foreground Mask (Download link), as described in the 4th section of Dataset Release. After the download is complete, please move the ./All_fg_mask/WFDD/fg_mask directory from the downloaded folder to your designated WFDD path, resulting in F:/SICC/GLASS-main/GLASS-main/datasets/WFDD/fg_mask.

If you prefer not to use the fg_mask, you can simply set the argument --fg to '0' in the run-wfdd.sh script.
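
For reference, here is a minimal Python sketch of the directory move described above, plus a sanity check. The location of the unpacked download (All_fg_mask) is an assumption; the destination and the checked file are the paths quoted in this thread.

    import shutil
    from pathlib import Path

    # Assumed location of the unpacked Foreground Mask download; adjust to where you extracted it.
    downloaded_fg_mask = Path(r"F:/SICC/All_fg_mask/WFDD/fg_mask")
    # WFDD dataset path from the traceback above.
    dataset_root = Path(r"F:/SICC/GLASS-main/GLASS-main/datasets/WFDD")

    # Move the foreground masks next to the WFDD images.
    shutil.move(str(downloaded_fg_mask), str(dataset_root / "fg_mask"))

    # Sanity check: the file from the traceback should now exist.
    expected = dataset_root / "fg_mask" / "grey_cloth" / "129.png"
    print(expected, "exists:", expected.exists())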

Kingxudong commented 3 months ago

"Thank you very much for your reply. Through training, I have seen excellent metric results, which is outstanding work. However, when performing individual tests, the results are not what I expected. Could you please share the test code? Or how should I configure it?"?

cqylunlun commented 3 months ago

Thank you for your recognition. For individual tests, you can change the argument --test to 'test' in run-wfdd.sh. Normally, the test results should be consistent with the training results. If you encounter any issues during testing, please check and debug the 'tester' method in glass.py, which contains the test code.

Kingxudong commented 3 months ago

Thank you very much. I have saved norm_segmentations in the _evaluate function in glass.py, but it seems that the results are not correct.

def _evaluate(self, images, scores, segmentations, labels_gt, masks_gt, name, path='training'):
    scores = np.squeeze(np.array(scores))
    img_min_scores = min(scores)
    img_max_scores = max(scores)
    norm_scores = (scores - img_min_scores) / (img_max_scores - img_min_scores + 1e-10)

    image_scores = metrics.compute_imagewise_retrieval_metrics(norm_scores, labels_gt, path)
    image_auroc = image_scores["auroc"]
    image_ap = image_scores["ap"]

    if len(masks_gt) > 0:

        segmentations = np.array(segmentations)
        min_scores = np.min(segmentations)
        max_scores = np.max(segmentations)
        norm_segmentations = (segmentations - min_scores) / (max_scores - min_scores + 1e-10)

        for i, seg in enumerate(norm_segmentations):

            img_up = cv2.resize(seg, (256, 256))

            output_path = './results/segmentations/{}.png'.format(i + 1)
            cv2.imwrite(output_path, (img_up * 255).astype(np.uint8))  # Scale back to [0, 255] for saving
        pixel_scores = metrics.compute_pixelwise_retrieval_metrics(norm_segmentations, masks_gt, path)
        pixel_auroc = pixel_scores["auroc"]
        pixel_ap = pixel_scores["ap"]
        if path == 'eval':
            try:
                pixel_pro = metrics.compute_pro(np.squeeze(np.array(masks_gt)), norm_segmentations)

            except:
                pixel_pro = 0.
        else:
            pixel_pro = 0.

    else:
        pixel_auroc = -1.
        pixel_ap = -1.
        pixel_pro = -1.
        return image_auroc, image_ap, pixel_auroc, pixel_ap, pixel_pro

    defects = np.array(images)
    targets = np.array(masks_gt)
    for i in range(len(defects)):
        defect = utils.torch_format_2_numpy_img(defects[i])
        target = utils.torch_format_2_numpy_img(targets[i])

        mask = cv2.cvtColor(cv2.resize(norm_segmentations[i], (defect.shape[1], defect.shape[0])),
                            cv2.COLOR_GRAY2BGR)
        mask = (mask * 255).astype('uint8')

        mask = cv2.applyColorMap(mask, cv2.COLORMAP_JET)

        img_up = np.hstack([defect, target, mask])
        img_up = cv2.resize(img_up, (256 * 3, 256))
        full_path = './results/' + path + '/' + name + '/'
        utils.del_remake_dir(full_path, del_flag=False)
        cv2.imwrite(full_path + str(i + 1).zfill(3) + '.png', img_up)

    return image_auroc, image_ap, pixel_auroc, pixel_ap, pixel_pro

cqylunlun commented 3 months ago

norm_segmentations is used to calculate metrics and output segmentation heatmaps. The best metrics in both training and testing are obtained using the same model through method _evaluate. Theoretically, the metric and heatmap results should be the same. Could you explain your issue again in more detail?

Kingxudong commented 3 months ago

I need to save the binary images of the test results. I tried saving segmentations, but that is not the binary segmentation result. How should I save the binary images after segmentation?

cqylunlun commented 3 months ago

After the test is completed, the original image / ground truth / heatmap comparison images are saved in ./results/eval/wfdd_*. If you need binary segmentation maps instead of heatmap comparison images, you can modify lines L530~L543 of glass.py, specifically the conversion from norm_segmentations[i] to mask in lines L534~L537.
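
For illustration, here is a hedged sketch of one way to get binary maps: a small standalone helper (not part of glass.py; the helper name and the 0.5 threshold are assumptions) that thresholds the normalized anomaly maps and writes 0/255 PNGs. It could be called from _evaluate right after norm_segmentations is computed, instead of or alongside the colormap conversion.

    import os
    import cv2
    import numpy as np

    def save_binary_maps(norm_segmentations, out_dir, threshold=0.5, size=(256, 256)):
        """Threshold normalized anomaly maps (values in [0, 1]) and save them as binary PNGs.

        The threshold is an assumed default; in practice it should be tuned per category
        or chosen on a validation split.
        """
        os.makedirs(out_dir, exist_ok=True)
        for i, seg in enumerate(norm_segmentations):
            seg = cv2.resize(seg, size)
            binary = (seg > threshold).astype(np.uint8) * 255  # 0 = normal, 255 = anomalous
            cv2.imwrite(os.path.join(out_dir, '{:03d}.png'.format(i + 1)), binary)

Called as, e.g., save_binary_maps(norm_segmentations, './results/segmentations_binary/'), it would produce one binary map per test image, in the same order as the heatmap comparison images.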