facebookresearch / GPU-DPF

GPU-based Distributed Point Functions (DPF) and 2-server private information retrieval (PIR).
Apache License 2.0

How to test the inference speed of exported onnx models? #1

Closed by TheSunWillRise 1 year ago

TheSunWillRise commented 1 year ago

Hello, thanks for your great work. I have a question about the code. If I have trained a model and exported it to ONNX — for example, by running paper/experimental/batch_pir/modules/language_model/train_model.sh — how can I test the inference speed of this ONNX model using DPF?

agnusmaximus commented 1 year ago

Hi, we don't expose any of the model code for the project, only the DPF-PIR code.
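(Editor's note: since the repo only exposes the DPF-PIR primitives, the raw inference speed of an exported ONNX model has to be measured separately with standard tooling. Below is a minimal, illustrative timing harness; the `benchmark` helper is not part of GPU-DPF, and the commented `onnxruntime` usage assumes that package is installed and that the model's input shape is known.)

```python
# Hypothetical benchmarking sketch -- not part of GPU-DPF. It times plain
# ONNX inference only, not the DPF-PIR retrieval path.
import statistics
import time


def benchmark(run_once, warmup=5, iters=50):
    """Time a zero-argument callable; return (mean_ms, stdev_ms)."""
    for _ in range(warmup):          # warm caches/allocators before timing
        run_once()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_once()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.mean(samples), statistics.stdev(samples)


# With onnxruntime (assumed installed), usage would look roughly like:
#   import numpy as np, onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx")
#   name = sess.get_inputs()[0].name
#   x = np.zeros((1, 128), dtype=np.float32)  # shape depends on the model
#   mean_ms, stdev_ms = benchmark(lambda: sess.run(None, {name: x}))

if __name__ == "__main__":
    # Stand-in workload so the harness runs without a model file.
    mean_ms, stdev_ms = benchmark(lambda: sum(i * i for i in range(10_000)))
    print(f"{mean_ms:.3f} ms +/- {stdev_ms:.3f} ms")
```

Including a warmup phase matters for GPU-backed execution providers, where the first few runs pay one-time kernel compilation and allocation costs.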