asnecemnnit opened 5 months ago
Could you share more of the error trace, please? I'm not sure if this is ONNX related, but this issue might happen if your input is not 4D. I think it should be [B, C, H, W], so even if it's just a single image it should be [1, 3, H, W]. Or it might be related to the anomaly maps, which should be [1, H, W] rather than [H, W], if I recall correctly.
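For illustration, a minimal sketch of the expected shapes (purely a hypothetical example; the tensor names are placeholders):

import torch

image = torch.randn(3, 256, 256)   # a single image, [C, H, W]
batch = image.unsqueeze(0)         # [1, 3, 256, 256], i.e. [B, C, H, W] as the model expects
print(batch.shape)                 # torch.Size([1, 3, 256, 256])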
@blaz-r This issue can be reproduced by running the code snippet I have provided above. If you look at the code, I am passing a dummy input to the export method in (B, C, H, W) format: dummy_input = torch.randn(1, 3, 256, 256)
Ah okay, I thought this was the export code for a trained model. Upon closer inspection I now see where the problem is. Patchcore relies on a memory bank that is built during training. In the code above, the model is not trained at all, so the memory bank is empty, which leads to these issues.
I believe you can export the model directly with your code, but for it to work you'll need to train the model beforehand. Refer to the training guide for how to do that.
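As a rough sketch of that flow (this assumes the anomalib 1.x Python API with Engine, Patchcore, the MVTec datamodule and ExportType; adjust to your own data and installed version):

from anomalib.data import MVTec
from anomalib.models import Patchcore
from anomalib.engine import Engine
from anomalib.deploy import ExportType

# Train first so Patchcore can populate its memory bank, then export to ONNX.
datamodule = MVTec()   # replace with your own datamodule
model = Patchcore()
engine = Engine()

engine.fit(model=model, datamodule=datamodule)
engine.export(model=model, export_type=ExportType.ONNX)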
Thanks for the quick replies! I have a trained "model.pt" as well, which after loading looks like the following: {'model': InferenceModel( (model): PatchcoreModel() ..... }
Can you suggest the correct way to load it? When I export it, I get this error instead:
I am not sure what the issue could be here, but I'd recommend you export the model with anomalib directly:
anomalib export --model Patchcore --export_mode onnx --ckpt_path <PATH_TO_CHECKPOINT> --input_size "[256,256]"
For more help with exporting, you can use the anomalib CLI like this:
anomalib export -h
I do think we'll need to expand the docs on this one though. @ashwinvaidya17 am I missing something or is there no docs on this currently?
I tried anomalib export -h after a full installation of anomalib, but it couldn't recognize the export command. Moreover, I tried to use the export method in anomalib.engine, but it fails to load the model checkpoint due to a version conflict.
That is unusual; anomalib export -h works fine on my side. I'm not sure how the engine export works when called directly, but judging by the docstring it's mostly intended to be used with the CLI, although it looks like it could work from code as well (see the sketch below).
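Something along these lines might work from code, but treat it as a sketch only; in particular, the ckpt_path argument is an assumption based on the docstring, so please check it against your installed anomalib version:

from anomalib.models import Patchcore
from anomalib.engine import Engine
from anomalib.deploy import ExportType

model = Patchcore()
engine = Engine()
engine.export(
    model=model,
    export_type=ExportType.ONNX,
    ckpt_path="<PATH_TO_CHECKPOINT>",   # placeholder, point this at your trained checkpoint
)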
Describe the bug
I am still facing the issue described in #1331. However, I am directly using torch.onnx.export on a loaded PatchcoreModel. As a result, self.memory_bank is not initialized, which results in this error inside euclidean_dist.
Dataset
Other (please specify in the text field below)
Model
PatchCore
Steps to reproduce the behavior
Execute the following code after installing anomalib:
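(The original snippet is not reproduced here; based on the description above it was roughly along the following lines, with the import path and constructor arguments being my approximation of the anomalib 1.x layout.)

import torch
from anomalib.models.image.patchcore.torch_model import PatchcoreModel

# An untrained PatchcoreModel has an empty memory bank, so tracing its forward
# pass for ONNX export fails inside euclidean_dist.
model = PatchcoreModel(layers=["layer2", "layer3"])
model.eval()

dummy_input = torch.randn(1, 3, 256, 256)   # [B, C, H, W]
torch.onnx.export(model, dummy_input, "patchcore.onnx")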
OS information
Expected behavior
Model should be exported in ONNX format.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
1.1.0
Configuration YAML
Logs
Code of Conduct