luyug / COIL

NAACL2021 - COIL Contextualized Lexical Retriever
Apache License 2.0

How to use GPU to retrieve? #4

jingtaozhan opened this issue 3 years ago

jingtaozhan commented 3 years ago

Thank you for sharing the code. COIL achieves very impressive retrieval performance. I wonder how to use a GPU for retrieval.

luyug commented 3 years ago

The current public retriever implementation uses PyTorch API calls, so technically it takes little more than adding a few .cuda() calls to make it run on a GPU. Optimizing it may take some effort. I can make a patch, but that could take some time as I currently have quite a few things on my plate.
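A minimal sketch of what "a few .cuda() calls" amounts to, assuming the index and query representations are plain PyTorch tensors (the variable names below are illustrative, not the repo's actual ones):

```python
import torch

# Illustrative tensors standing in for a loaded index shard and encoded queries;
# the actual retriever builds these from the encoded corpus and query files.
tok_reps = torch.randn(1_000_000, 32)   # token representations for one shard
q_reps = torch.randn(16, 32)            # query token representations

# Move both to the GPU before scoring; this is essentially the ".cuda()" change.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tok_reps = tok_reps.to(device)
q_reps = q_reps.to(device)

# Dot-product scoring and top-k now run on the GPU instead of the CPU.
scores = q_reps @ tok_reps.T            # (num_queries, num_tokens)
top_scores, top_ids = scores.topk(k=1000, dim=-1)
```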

jingtaozhan commented 3 years ago

Thanks. I can implement it myself by just adding a few .cuda() calls. But can I achieve the GPU latency reported in the paper in this way?

luyug commented 3 years ago

As I said, optimizing it could take some effort. Considerations include keeping memory aligned and contiguous, and GPU top-k efficiency is also tricky. It is also likely to be hardware dependent.
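To illustrate the kind of considerations mentioned above (this is a generic PyTorch sketch, not the paper's optimized retrieval code): keeping the index in one contiguous GPU buffer avoids implicit copies during the matmul, and a single batched top-k is usually cheaper than many small per-shard calls.

```python
import torch

device = torch.device("cuda")

# Hypothetical per-shard tensors; in practice these would come from the saved index.
shards = [torch.randn(250_000, 32) for _ in range(4)]

# Concatenate once and keep the result contiguous on the GPU, so scoring
# runs over one aligned buffer instead of many fragmented ones.
index = torch.cat(shards, dim=0).contiguous().to(device)

q = torch.randn(16, 32, device=device)
scores = q @ index.T

# One batched top-k over the full score matrix; repeated small top-k calls
# on the GPU tend to be slower, and their cost varies with the hardware.
top_scores, top_ids = torch.topk(scores, k=1000, dim=-1)
```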

jingtaozhan commented 3 years ago

I see. The original experimental implementation includes many optimization tricks. I will try simply adding the .cuda() calls and look forward to your optimized GPU retrieval code. Thank you!