Closed: x12901 closed this issue 3 years ago.
Are you running your model on the GPU? From what I can see, the main bulk of the time goes to the normal inference step. What inference times are the authors reporting?
Inference time for SPADE heavily depends on the size of the internally saved tensors (the memory bank). Plain kNN inference is linear in the number of training samples.
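For intuition, the scoring step is essentially a brute-force nearest-neighbour search over the memory bank. A minimal sketch (not the exact code in this repo; shapes and k are made up) shows why the cost grows with the training set:

```python
import torch

# Purely illustrative shapes: N stored training features vs. M query
# patch features from a single test image.
memory_bank = torch.randn(10_000, 384)   # N = 10k stored features (grows with training set)
query_feats = torch.randn(784, 384)      # M = 28x28 patch grid from one image

# Every query patch is compared against every stored feature, so this step
# is O(M * N): linear in the number of training samples N.
dists = torch.cdist(query_feats, memory_bank)             # (M, N) pairwise L2 distances
patch_scores, _ = dists.topk(k=5, dim=1, largest=False)   # 5 nearest neighbours per patch
image_score = patch_scores.mean(dim=1).max()              # crude image-level anomaly score
```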
The PatchCore authors use faiss, but I didn't get around to fully connecting it to the PyTorch models.
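If someone wants to try wiring faiss in, the nearest-neighbour part would look roughly like this (untested sketch; the feature dimension and index type here are assumptions, not necessarily what the PatchCore authors use):

```python
import faiss
import numpy as np

d = 384                                        # feature dimension (made up)
memory_bank = np.random.rand(10_000, d).astype("float32")
query_feats = np.random.rand(784, d).astype("float32")

index = faiss.IndexFlatL2(d)                   # exact L2 search; IVF/HNSW variants trade accuracy for speed
index.add(memory_bank)                         # fill the index with the stored training features
dists, ids = index.search(query_feats, 5)      # 5 nearest neighbours per query, returns (784, 5) arrays
```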
Thanks for your effort! I saw that the average inference time reported in the paper is very short, but my time to detect a single picture is 0.9686539 s.
The line `feature_maps, z = self(sample)` takes most of the time. I tried selecting a smaller backbone and reducing `self.image_size`, but there was no obvious improvement. Any comments? Thanks! In addition, https://github.com/rvorias/ind_knn_ad/blob/25498b227c00b689cb2bf9a005ffdf0f2509dd63/indad/utils.py#L20 could be improved.
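For what it's worth, one way to check whether the ~0.97 s is dominated by one-off GPU start-up rather than the forward pass itself is to time it with a warm-up and explicit synchronization. A generic sketch (function name and arguments are placeholders, not part of this repo):

```python
import time
import torch

@torch.no_grad()
def time_single_image(model, sample, device="cuda", warmup=3, iters=10):
    """Average per-image forward time, excluding one-off CUDA start-up cost."""
    model = model.to(device).eval()
    sample = sample.to(device)
    for _ in range(warmup):
        model(sample)                  # warm-up: lazy CUDA init, cuDNN autotuning, etc.
    if device.startswith("cuda"):
        torch.cuda.synchronize()       # make sure warm-up work has finished
    start = time.perf_counter()
    for _ in range(iters):
        model(sample)
    if device.startswith("cuda"):
        torch.cuda.synchronize()       # GPU work is async; sync before stopping the clock
    return (time.perf_counter() - start) / iters
```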