zomss / Co-Prompt


Regarding your baseline model implementations #1

Closed alstn7 closed 2 months ago

alstn7 commented 2 months ago

Hello,

First of all, thank you for your awesome work! I really enjoyed reading it.

As I was reading your paper, I got curious about your baseline model implementations. How did you implement RL-Prompt for your work? You mentioned that you followed the original RL-Prompt paper and adapted it to the IR task by defining the "reward for the policy network as query generation log-likelihood from the document."

Does this mean that you are creating multiple prompts to generate queries for the corresponding documents? Or are you using a single optimal prompt for every document-query pair?

And if it's not too much to ask, could you provide the code implementation of your RL-Prompt adaptation to the IR task?

Thank you so much!!!!

zomss commented 2 months ago

Thank you for your interest in our work and for your thoughtful questions regarding the implementation of the RL-Prompt baseline.

To answer your question, we utilized the official implementation of RL-Prompt, which you can find here: RL-Prompt GitHub Repository. We used a single optimal prompt for every document-query pair in our experiments.
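While we prepare the full code for release, here is a minimal sketch of the reward described above. It is not our actual implementation: the function name `query_loglik_reward` and the `token_logprob_fn` callable are hypothetical placeholders standing in for a frozen generator LM that scores the query autoregressively, conditioned on the learned prompt and the document.

```python
import math

def query_loglik_reward(token_logprob_fn, prompt, document, query_tokens):
    """Sketch of the RL-Prompt reward adapted to IR (hypothetical interface).

    Reward = sum_t log P(q_t | prompt, document, q_<t), i.e. the
    log-likelihood of generating the query from the document under
    a frozen language model, given the current prompt.
    """
    context = prompt + " " + document
    total_logprob = 0.0
    for t, token in enumerate(query_tokens):
        # token_logprob_fn returns log P(token | context, previous query tokens)
        total_logprob += token_logprob_fn(context, query_tokens[:t], token)
    return total_logprob

# Toy usage: a uniform "model" over a vocabulary of 10 tokens,
# so each token contributes log(1/10) to the reward.
uniform_lm = lambda context, prefix, token: math.log(1.0 / 10)
reward = query_loglik_reward(uniform_lm, "<prompt>", "some document text",
                             ["what", "is", "this"])
```

In the single-prompt setting we use, this reward is averaged over the training document-query pairs to update one shared prompt, rather than learning a prompt per document.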

We appreciate your request for the code implementation of our RL-Prompt adaptation for the IR task. We are planning to release the code, but it may take some time as we need to review and prepare it for public use. We will notify you once it is available.

Thank you again for your interest and understanding.

alstn7 commented 2 months ago

Thank you so much for your quick reply!

That cleared up a lot of things!

Thank you so much again!