Closed: Vvdinosaur closed this issue 2 years ago.
@Vvdinosaur can you also share your config file?
@samet-akcay Hi, thanks for your reply. My config file is below, but I think the error has nothing to do with it. When I run lightning_inference.py, the batch_size in DataLoader(dataset) defaults to 1, and then I get this error:

File "/home/ycao/.conda/envs/d2/lib/python3.9/site-packages/anomalib/post_processing/post_process.py", line 131, in superimpose_anomaly_map
    superimposed_map = cv2.addWeighted(anomaly_map, alpha, image, (1 - alpha), gamma)
cv2.error: OpenCV(4.5.5) /io/opencv/modules/core/src/arithm.cpp:647: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'arithm_op'
So I traced it back to anomaly_map.py; after I modified squeeze() to squeeze(1), the error was gone!
This is my config file:
dataset:
  name: mvtec # options: [mvtec, btech, folder]
  format: mvtec
  path: /media/ycao/data0/anomalib/mvtec
  category: toothbrush
  task: segmentation
  image_size: 256
  train_batch_size: 16
  test_batch_size: 16
  inference_batch_size: 16
  fiber_batch_size: 64
  num_workers: 0
  transform_config:
    train: null
    val: null
  create_validation_set: false

model:
  name: cflow
  backbone: resnet18
  pre_trained: true
  layers:
    - layer2
    - layer3
    - layer4
  decoder: freia-cflow
  condition_vector: 128
  coupling_blocks: 8
  clamp_alpha: 1.9
  soft_permutation: false
  lr: 0.0001
  early_stopping:
    patience: 2
    metric: pixel_AUROC
    mode: max
  normalization_method: min_max # options: [null, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    image_default: 0
    pixel_default: 0
    adaptive: true

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]
...
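As a quick sanity check, a config like the one above can be loaded with OmegaConf (which anomalib uses for its YAML configs) before handing it to the training entrypoint. This is a minimal sketch; the file path is an assumption, so point it at your own copy:

```python
from omegaconf import OmegaConf

# Load the YAML above and print a few fields to confirm it parses as expected.
# The path is hypothetical; adjust it to wherever the config is saved.
config = OmegaConf.load("anomalib/models/cflow/config.yaml")
print(config.model.name)          # cflow
print(config.dataset.category)    # toothbrush
print(config.dataset.image_size)  # 256
```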
> I think the error has nothing to do with config file.

The reason for asking for a config file is to reproduce the issue.

> so I traced back to anomaly_map.py, after I modified squeeze() to squeeze(1), the error has gone!

Great! We'll test this on our end as well.
@samet-akcay And I have another question about CFlow: for the same image, outputs["anomaly_maps"] from the test step (running train.py) and outputs["anomaly_maps"] from inference (running lightning_inference.py) are different. Why? My guess is that at inference time the CFlow model only loads the weights of the encoder (right?). What about the decoder?
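One way to see what the checkpoint actually restores is to inspect its state_dict. This is a minimal sketch, assuming a standard PyTorch Lightning checkpoint layout; the results path below is an assumption, not a path from this thread:

```python
import torch

# Inspect which sub-modules have saved weights. If both encoder and decoder
# parameters appear, lightning_inference.py restores the full CFlow model,
# not only the encoder. The checkpoint path is hypothetical; adjust to your run.
ckpt = torch.load("results/cflow/mvtec/toothbrush/weights/model.ckpt", map_location="cpu")
prefixes = sorted({".".join(key.split(".")[:2]) for key in ckpt["state_dict"]})
print(prefixes)  # look for entries such as 'model.encoder' and 'model.decoders'
```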
@samet-akcay was a solution found for this? I'm encountering the same problem, but putting squeeze(1) as suggested above doesn't work for me. Thank you in advance. @Vvdinosaur did you modify anything else to make it work? I have your exact same error message, but just modifying squeeze() to squeeze(1) in the anomaly_map.py file didn't change anything. Thank you both for your help.
@JACKYNIKK, @Vvdinosaur is right. The solution proposed above fixes the issue. I've created a PR to fix this in #589.
@samet-akcay I'm sorry to bother you again, since this fix seems to be definitive and the issue is closed. I saw that the change was just merged, and I was waiting for the merge (after you proposed the change three hours ago) to test the inference on my computer again, but I really don't understand why it still gives the same error on my side. I replaced anomaly_map.py in cflow with the new version that was just uploaded and repeated the training, which works (as it did before), but inference still throws the same error (I had the same problem when adding squeeze(1) to the old version):
("""""""cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'")""""""""")
I cloned the repository as it was on the 31st of August; was there any other change besides the one in anomaly_map.py that could cause this issue?
I understand this is very strange, and again I'm sorry to bother you, but I also tried to clone the repository again as it is now and then trained CFlow with this config file: config_1.txt
Training with
python tools/train.py --config anomalib/models/cflow/config_1.yaml
works fine, but running inference with
python tools/inference/lightning_inference.py --config anomalib/models/cflow/config_1.yaml --weights results/cflow/raytec/weights/model.ckpt --input datasets/raytec --output results/cflow/raytec_inference
still gives that error.
Thank you in advance
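For reference, the OpenCV error above is a plain shape mismatch inside cv2.addWeighted. This is a minimal sketch reproducing it; the shapes are made up for illustration and do not come from anomalib:

```python
import cv2
import numpy as np

# cv2.addWeighted requires both inputs to have identical size and channel count.
image = np.zeros((256, 256, 3), dtype=np.uint8)
good_map = np.zeros((256, 256, 3), dtype=np.uint8)
bad_map = np.zeros((128, 128, 3), dtype=np.uint8)  # wrong spatial size, for illustration

cv2.addWeighted(good_map, 0.4, image, 0.6, 0)  # works
cv2.addWeighted(bad_map, 0.4, image, 0.6, 0)   # raises cv2.error (-209: Sizes of input arguments do not match)
```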
@JACKYNIKK, no problem at all!
I'll have a look to see if I can reproduce it.
Just checked the lightning inference, which seems to be working fine on my end.
Have you tried to train CFlow on MVTec dataset? Alternatively, are other models working fine on your system?
Training and inference (with Lightning) of the other models on the same dataset all work for me. It is only CFlow that throws the same error now as before changing the anomaly_map.py file a few hours ago. Evidently I'm doing something wrong if both @Vvdinosaur's problem and issue #568 were solved by these changes. I will now try to train CFlow on MVTec as suggested. Thank you for now.
Problem solved. The problem was that I have multiple anomalib installations in different environments on this PC. The inference command was calling the anomaly_map.py file in the environment I'm working in, which is not the one I modified (I was confusing the different installations, and being new to PyCharm and Python in general didn't help). Anyway, thank you for your patience and time, and for this great repository.
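For anyone hitting the same confusion, a quick way to confirm which anomalib installation an environment actually uses:

```python
import sys
import anomalib

# Run this inside the environment used for inference: it shows which interpreter
# is active and which anomalib package directory gets imported, i.e. whether it
# is the copy you have been editing.
print(sys.executable)
print(anomalib.__file__)
```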
There may be an error in CFlow inference. In models/cflow/anomaly_map.py, line 59 is:

test_map.append(
    F.interpolate(
        test_mask.unsqueeze(1), size=self.image_size, mode="bilinear", align_corners=True
    ).squeeze()
)

At the test step during training, the test_mask shape is [batch_size, 32, 32] (batch_size > 1):
1. after F.interpolate(), it becomes [batch_size, 1, image_size, image_size]
2. after squeeze(), it becomes [batch_size, image_size, image_size]

At inference, for one image, the test_mask shape is [1, 32, 32]:
1. after F.interpolate(), it becomes [1, 1, image_size, image_size]
2. but after squeeze(), it becomes [image_size, image_size]

So I think the squeeze() should be squeeze(1), which keeps the shape at length 3, i.e. [1, image_size, image_size].
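A minimal sketch verifying the shapes described above (image_size fixed at 256 for illustration, matching the config earlier in the thread), showing why squeeze() drops the batch dimension for a single image while squeeze(1) keeps it:

```python
import torch
import torch.nn.functional as F

image_size = 256  # illustrative value

for batch_size in (16, 1):
    test_mask = torch.rand(batch_size, 32, 32)
    upsampled = F.interpolate(
        test_mask.unsqueeze(1), size=image_size, mode="bilinear", align_corners=True
    )  # -> [batch_size, 1, image_size, image_size]
    print(batch_size, upsampled.squeeze().shape, upsampled.squeeze(1).shape)

# batch_size=16: squeeze()  -> [16, 256, 256], squeeze(1) -> [16, 256, 256]
# batch_size=1:  squeeze()  -> [256, 256]      (batch dim lost)
#                squeeze(1) -> [1, 256, 256]   (batch dim kept)
```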