google-research / tapas

End-to-end neural table-text understanding models.
Apache License 2.0

Inference time #156

Open · sbhttchryy opened this issue 2 years ago

sbhttchryy commented 2 years ago

Hello, are there any statistics regarding the inference time for the experiments in the 'Open Domain Question Answering over Tables via Dense Retrieval' paper?

sbhttchryy commented 2 years ago

@eisenjulian

eisenjulian commented 2 years ago

Hi there, we don't have such statistics at the moment. For the paper's experiments we used a brute-force nearest-neighbour search, which in a production application can be sped up dramatically with an approximate index such as FAISS or Vertex AI Matching Engine (https://cloud.google.com/vertex-ai/docs/matching-engine).
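
As a rough illustration of the exact-vs-approximate trade-off, here is a minimal FAISS sketch. The embedding dimension, corpus size, and index parameters below are illustrative placeholders, not the paper's settings:

```python
import numpy as np
import faiss  # pip install faiss-cpu

d = 256  # embedding dimension (hypothetical; use your encoder's output size)
table_embs = np.random.rand(100_000, d).astype("float32")  # stand-in corpus
query_embs = np.random.rand(32, d).astype("float32")       # stand-in queries

# Exact brute-force inner-product search, as in the paper's experiments.
flat = faiss.IndexFlatIP(d)
flat.add(table_embs)
scores, ids = flat.search(query_embs, k=10)

# Approximate search with an inverted-file index: much faster at scale,
# at a small cost in recall.
quantizer = faiss.IndexFlatIP(d)  # keep a reference so it isn't collected
ivf = faiss.IndexIVFFlat(quantizer, d, 1024, faiss.METRIC_INNER_PRODUCT)
ivf.train(table_embs)  # learn the coarse clusters from the corpus
ivf.add(table_embs)
ivf.nprobe = 16  # clusters probed per query; trades speed for recall
scores, ids = ivf.search(query_embs, k=10)
```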

Let us know if you run some benchmarks. I expect the bottleneck to be in extracting the table representations, which should vary depending on the model and the hardware (TPU vs. GPU vs. CPU).
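
If you do benchmark, it's worth splitting end-to-end latency into the encoding step and the search step. A small sketch of that split; `encode_fn` is a hypothetical stand-in for whatever produces query embeddings (e.g. a dual-encoder forward pass), and `index` can be any FAISS index:

```python
import time
import numpy as np

def benchmark_retrieval(encode_fn, index, queries, k=10):
    """Report encoding time vs. nearest-neighbour search time separately."""
    t0 = time.perf_counter()
    query_embs = encode_fn(queries)  # model forward pass: likely dominant
    t1 = time.perf_counter()
    index.search(np.ascontiguousarray(query_embs, dtype="float32"), k)
    t2 = time.perf_counter()
    print(f"encoding: {(t1 - t0) * 1e3:.1f} ms, "
          f"search: {(t2 - t1) * 1e3:.1f} ms")
```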