The official repository for "Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation" by Shengyao Zhuang, Houxing Ren, Linjun Shou, Jian Pei, Ming Gong, Guido Zuccon, and Daxin Jiang.
Q: It seems the code only outputs metrics. Is there a way to output the actual predictions (the ranking list) for each query?

A: Yes. For the DSI task, change `trainer.train()` to `trainer.predict(valid_dataset)` in `run.py`, and write the ranking list to a file inside `def compute_metrics(eval_preds):`.
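Below is a minimal sketch of what that could look like, assuming a HuggingFace-style `Trainer` flow as described above. The `tokenizer` variable, the output file name `ranking_list.tsv`, and the assumed shape of `eval_preds.predictions` (one set of generated beams per query) are illustrative assumptions, not the repository's actual code.

```python
# Sketch only: dump the per-query ranking list from inside compute_metrics,
# then trigger it with trainer.predict. Names and shapes below are assumptions.

def compute_metrics(eval_preds):
    # Assumption: eval_preds.predictions holds, for each query, the beam-search
    # outputs (token ids of predicted docids), and eval_preds.label_ids holds
    # the token ids of the gold docid.
    with open("ranking_list.tsv", "w") as f:
        for query_idx, (beams, label) in enumerate(
            zip(eval_preds.predictions, eval_preds.label_ids)
        ):
            # Decode every beam into a docid string to obtain the ranked list.
            ranked_docids = [
                tokenizer.decode(beam, skip_special_tokens=True) for beam in beams
            ]
            gold_docid = tokenizer.decode(label, skip_special_tokens=True)
            # One line per query: query index, gold docid, comma-separated ranking.
            f.write(f"{query_idx}\t{gold_docid}\t{','.join(ranked_docids)}\n")
    # Still return a metrics dict; left empty in this sketch.
    return {}

# In run.py, replace the training call with a predict call over the validation
# set so that compute_metrics (and the file writing above) is invoked:
# trainer.predict(valid_dataset)
```

Note that `Trainer.predict` only calls `compute_metrics` when labels are available in the dataset, so keep the gold docids in `valid_dataset`.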