mk-minchul / AdaFace


Tensor shape used for inference #104


fawkeswei commented 1 year ago

According to the README, AdaFace takes a bgr_input, which is a 112x112x3 torch tensor:

https://github.com/mk-minchul/AdaFace#general-inference-guideline

However, the sample inference code calls the model with torch.randn(2,3,112,112):

https://github.com/mk-minchul/AdaFace/blob/90fb74c291678609a201332646aa27a937bedc9f/inference.py#L30-L31
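For reference, the linked lines run the model on a random batch of two inputs, roughly like this (a paraphrase based on the shapes quoted above, assuming inference.py's load_pretrained_model helper and a forward pass that returns (feature, norm); see the linked source for the exact code):

```python
import torch
from inference import load_pretrained_model

model = load_pretrained_model('ir_50')                 # architecture name is an example
feature, norm = model(torch.randn(2, 3, 112, 112))     # random "batch" of two 3x112x112 inputs
```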

Is there something I'm missing here? Thanks in advance.

afm215 commented 1 year ago

The way I understood this demo code is that its purpose is to show how to use a trained model to extract embeddings from tensors. So torch.randn(2,3,112,112) is just there to create a fake image. If you are looking for a way to preprocess images at inference time, the to_input function within inference.py did the trick for me.
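For anyone landing here, a minimal sketch of that preprocessing, assuming an already-aligned 112x112 RGB PIL image and the ((x / 255) - 0.5) / 0.5 normalization used in the repo; the actual to_input in inference.py is the reference:

```python
import numpy as np
import torch
from PIL import Image

def to_bgr_input(pil_rgb_image):
    """Convert an aligned 112x112 RGB PIL image to a (1, 3, 112, 112) BGR tensor.

    Mirrors the to_input helper in inference.py (normalization assumed to be
    ((x / 255) - 0.5) / 0.5); check the repo's function for the exact details.
    """
    np_img = np.array(pil_rgb_image)                            # HWC, RGB, uint8
    bgr_img = ((np_img[:, :, ::-1] / 255.0) - 0.5) / 0.5        # flip channels to BGR, scale to [-1, 1]
    tensor = torch.from_numpy(bgr_img.transpose(2, 0, 1).copy()).float()  # HWC -> CHW
    return tensor.unsqueeze(0)                                  # add batch dim -> (1, 3, 112, 112)

# usage: embedding for one image
# img = Image.open('aligned_face.jpg')   # must already be aligned and resized to 112x112
# feature, norm = model(to_bgr_input(img))
```

The .copy() is there because the RGB-to-BGR slice produces negative strides, which torch.from_numpy does not accept.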

fawkeswei commented 1 year ago

> The way I understood this demo code is that its purpose is to show how to use a trained model to extract embeddings from tensors. So torch.randn(2,3,112,112) is just there to create a fake image. If you are looking for a way to preprocess images at inference time, the to_input function within inference.py did the trick for me.

I understand it's a fake image; the question is, shouldn't the fake image be torch.randn(1,3,112,112) instead of torch.randn(2,3,112,112)?

afm215 commented 1 year ago

It is probably there to show that batch inference works as well :)
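To make that concrete: the first dimension is just the batch size, so (1,3,112,112) and (2,3,112,112) are both valid inputs. A quick sketch (model loading as in inference.py is assumed; the 512-d output assumes the repo's default embedding size):

```python
import torch

# stack several preprocessed (1, 3, 112, 112) tensors into one batch
inputs = [torch.randn(1, 3, 112, 112) for _ in range(4)]   # stand-ins for to_input outputs
batch = torch.cat(inputs, dim=0)                            # shape: (4, 3, 112, 112)

# features, norms = model(batch)   # features: (4, 512) with a 512-d embedding
```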