adghin opened this issue 5 months ago
The mask can only be created if the model learns a pixel threshold (i.e. the pixel anomaly value above which a pixel is considered anomalous). This pixel threshold is learned from anomalous samples after training, so you will get no anomaly prediction mask without ground-truth masks. If you can't create segmentation ground truths, you can try synthetically generated anomalies instead.
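Anomalib's Folder datamodule has synthetic split modes that can stand in for real anomalies here; a minimal sketch, assuming anomalib v1.x (the name and paths are placeholders):

```python
from anomalib import TaskType
from anomalib.data import Folder
from anomalib.data.utils import TestSplitMode, ValSplitMode

dataset_root = "datasets/my_project"  # placeholder path

datamodule = Folder(
    name="my_project",                        # placeholder, matching the thread
    root=dataset_root,
    normal_dir="train_normal",
    task=TaskType.SEGMENTATION,               # pixel-level outputs need the segmentation task
    test_split_mode=TestSplitMode.SYNTHETIC,  # build an anomalous test set from normal images
    val_split_mode=ValSplitMode.SYNTHETIC,    # fit the pixel threshold on synthetic anomalies
)
```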
The single-colour green heatmap, however, looks like a bug in either your setup or the model. Please share more information about your training and inference process.
Hi @alexriedel1, here is the code for the datamodule creation and for testing.
Datamodule creation:
```python
from anomalib import TaskType
from anomalib.data import Folder

test_datamodule = Folder(
    name="my_project",
    root=dataset_root,
    normal_dir="train_normal",
    abnormal_dir="test_abnormal",
    normal_test_dir="test_normal",
    task=TaskType.CLASSIFICATION,
    image_size=(240, 240),
)
```
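For comparison, if ground-truth masks were available, the same datamodule could be built for the segmentation task; a sketch, where the mask directory name is a hypothetical placeholder:

```python
seg_datamodule = Folder(
    name="my_project",
    root=dataset_root,
    normal_dir="train_normal",
    abnormal_dir="test_abnormal",
    normal_test_dir="test_normal",
    mask_dir="test_abnormal_masks",  # hypothetical folder of ground-truth masks
    task=TaskType.SEGMENTATION,      # enables pixel-level metrics and predicted masks
    image_size=(240, 240),
)
```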
Running the test in the zero-shot/few-shot configuration:

```python
from anomalib.engine import Engine
from anomalib.models import WinClip

model = WinClip()  # or WinClip(k_shot=..., few_shot_source=...) for the few-shot configuration
engine = Engine()
engine.test(model=model, datamodule=test_datamodule)
```
So the model runs validation to collect the normalization and thresholding statistics and then tests on the test dataloader, but the heatmaps are empty. Am I missing something?
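One way to check whether the maps are genuinely flat is to inspect the raw outputs via Engine.predict; a sketch assuming anomalib v1.x, where the `"anomaly_maps"` batch key is an assumption based on its conventions:

```python
# Sketch (assumes anomalib v1.x): inspect raw anomaly maps before rendering.
predictions = engine.predict(model=model, datamodule=test_datamodule)

for batch in predictions:
    amap = batch["anomaly_maps"]  # assumed key for the pixel-level anomaly map
    # A flat map (min == max) would normalise to a single colour in the heatmap.
    print(f"min={amap.min().item():.4f}, max={amap.max().item():.4f}")
```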
What's the console output of the script? Can you show some of the output images?
@alexriedel1 here is the console output (from a 5-shot test):
And an example of a bad image:
The heatmap does not change across the other images; they are all equally empty.
@adghin This happened to me too. Have you solved it since?
I'm not sure if this is a bug or something I'm missing, but with WinCLIP I can't get pixel metrics if the `TaskType` is set to `CLASSIFICATION`. For example, say we have a custom dataset with normal images for training and testing (although we don't need the training ones unless we use them for the few-shot configuration) and abnormal images for testing; suppose we don't have ground-truth segmentation masks, so the only available task is `TaskType.CLASSIFICATION`.

Then I only get image-wise metrics even if the `Engine` parameters `pixel_metrics` and `task=TaskType.SEGMENTATION` are set. Moreover, shouldn't we get at least the heatmap and predicted mask of the test images? For me the predicted mask is empty and the heatmap is one-coloured, all green or all purple (like in other issues I've seen about strange heatmaps). Any idea what this could be about?
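For concreteness, the configuration described above would look roughly like this (a sketch assuming anomalib v1.x; the pixel metric name is just an example):

```python
from anomalib import TaskType
from anomalib.engine import Engine
from anomalib.models import WinClip

model = WinClip()
engine = Engine(
    task=TaskType.SEGMENTATION,  # request pixel-level outputs
    pixel_metrics=["AUROC"],     # example pixel metric
)
engine.test(model=model, datamodule=test_datamodule)
```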