Yes, I am getting similar results to those in your patchcore_example notebook.
I added CUDA support and measured the time in this way.
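Roughly like this (a sketch of the timing approach; `model` and `batch` stand in for the objects from the notebook):

```python
import time

import torch

# `model` and `batch` stand in for the PatchCore model and the
# preprocessed image tensor from the notebook.
if torch.cuda.is_available():
    torch.cuda.synchronize()  # finish any pending GPU work first
start = time.perf_counter()
result = model.predict(batch)
if torch.cuda.is_available():
    torch.cuda.synchronize()  # wait until the prediction is really done
print(f"Prediction took {time.perf_counter() - start:.3f} s")
```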
These are the results before and after.
As the distances are calculated on the CPU, this is not a huge change, but it works a bit faster.
Hi!
Fun that you want to contribute to the project! Cool that you got such a speed improvement; that would be really nice to include in the repo. I have some things I would like you to fix, and some questions, before I merge:
1. `feature_extraction.py`: Line 112, `device = self.device`, doesn't seem to do anything to me. I guess it should be removed. However, I believe line 114 should stay.
2. `patchcore.py`: You added two classes, `NN` & `KNN`. Is it necessary to have both, and is it necessary to have all those methods? I don't see them being used.
3. `patchcore.py`: You changed the default argument for backbone to `"wide_resnet50"`. I think it should still be `"resnet18"`.
4. `patchcore.py`: You added two prints. They should be removed.
5. `patchcore.py`: On line 168 you added `batch = batch.to(self.device)`. This should be removed. We have chosen to handle the device separately for model and data for clarity (see the sketch after this list).
6. `patchcore.py`: You import `faiss`. I don't see it being used. Is it necessary, and if so, for what?
7. `utils.py`: You changed values in `standard_image_transform` and `standard_mask_transform`. Why? If there is no reason, I would prefer them to stay as before.
8. `utils.py`: You removed the device in `to_batch`. I think this has to do with (5), and I think it should stay as before.

Please say if anything I wrote here seems weird or if you have a reason for something I said to change.
I will answer as soon as possible :)
Great! I saw you added something more to the pull request now. If possible, when you implement different features, please make multiple smaller pull requests instead of one large one.
Hello.
For train and for inference, the idea is that the device is handled at the call site rather than inside `to_batch`; a sketch is below.
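Roughly, a sketch of the idea, assuming a `to_batch` without the device argument; `model`, `train_images`, `test_images` and the `fit`/`predict` signatures are stand-ins based on this thread, not the exact code from my commit:

```python
from typing import Callable, List

import PIL.Image
import torch


def to_batch(images: List[PIL.Image.Image],
             transform: Callable[[PIL.Image.Image], torch.Tensor]) -> torch.Tensor:
    """Stack transformed images into a batch; no device argument anymore."""
    # The caller decides the device by calling .to(device) on the result.
    return torch.stack([transform(image) for image in images])


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)  # `model` stands in for the PatchCore (or PADIM) instance

# For train:
train_batch = to_batch(train_images, standard_image_transform).to(device)
model.fit(train_batch)

# For inference:
test_batch = to_batch(test_images, standard_image_transform).to(device)
result = model.predict(test_batch)
```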
I think that this general idea of changing `to_batch` will affect the PADIM implementation, but it is straightforward to change. If you don't like any of the ideas, please feel free to use whatever you consider useful from my commit :)
PS: my last commit just fixes little mistakes; nothing new was implemented.
Inference speed increased greatly, from 6 s to 2 s for the 5 images used in the notebook. CUDA is now also supported, and the time drops to 0.02 s using an RTX 3090.
Based on your sample images from the notebook, I measured the total time making predictions. This is the result.
Now the CPU performance has been greatly increased, and the neighbour search also supports the GPU.
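The pattern is roughly the following; this sketch uses plain PyTorch (`torch.cdist` plus `topk`) to show the idea, not the exact `NN`/`KNN` classes from my commit:

```python
import torch


def nearest_neighbour_scores(embeddings: torch.Tensor,
                             memory_bank: torch.Tensor,
                             k: int = 1) -> torch.Tensor:
    """Distance from each embedding to its k-th nearest memory-bank entry."""
    distances = torch.cdist(embeddings, memory_bank)        # (N, M) pairwise L2
    knn_dists, _ = distances.topk(k, dim=1, largest=False)  # k smallest per row
    return knn_dists[:, -1]                                 # k-th NN distance


# If both tensors live on the GPU, the whole search runs there.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
scores = nearest_neighbour_scores(
    torch.randn(100, 512, device=device),   # query patch embeddings
    torch.randn(1000, 512, device=device),  # coreset / memory bank
)
```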