Closed: Adarsh321123 closed this issue 2 weeks ago.
It's probably because the model checkpoint you're evaluating was trained on a different version of the dataset. The train/val/test split differs for each version, so you're essentially testing on the training data.
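One way to sanity-check for this kind of leakage is to compare theorem names between the split a checkpoint was trained on and the split being evaluated. A minimal sketch, assuming each split file is a JSON list of records with a `full_name` field (a guess at the schema; confirm against your actual files):

```python
def theorem_names(split):
    # Collect fully qualified theorem names from a split, assuming each
    # entry is a dict with a "full_name" field. In practice, load the
    # real split files first, e.g. split = json.load(open("train.json")).
    return {thm["full_name"] for thm in split}

# Tiny inline stand-ins for the contents of train.json / test.json.
train_split = [{"full_name": "Nat.add_comm"}, {"full_name": "Nat.mul_comm"}]
eval_split = [{"full_name": "Nat.add_comm"}, {"full_name": "List.length_append"}]

overlap = theorem_names(train_split) & theorem_names(eval_split)
print(f"{len(overlap)} of {len(eval_split)} evaluation theorems also appear in training")
```

A large overlap between the checkpoint's training split and the evaluation split would explain an inflated recall.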
I am using the same model checkpoint from HuggingFace for both datasets, so shouldn't the performance still be the same, assuming both datasets are on commit 29dcec074de168ac2bf835a77ef68bbe069194c5?
Do you know which Mathlib commit was used to generate the dataset you downloaded from Zenodo? You can find it in metadata.json.
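Reading the recorded commit out of metadata.json could look like the sketch below. The `"from_repo"`/`"commit"` key names are assumptions about how LeanDojo records provenance; inspect your own metadata.json to confirm.

```python
import json

def dataset_commit(metadata):
    # Assumed layout: metadata["from_repo"]["commit"] holds the Mathlib
    # commit the benchmark was generated from.
    return metadata["from_repo"]["commit"]

# In practice: metadata = json.load(open("leandojo_benchmark_4/metadata.json"))
metadata = {"from_repo": {"url": "https://github.com/leanprover-community/mathlib4",
                          "commit": "fe4454af900584467d21f4fd4fe951d29d9332a7"}}
print(dataset_commit(metadata))
```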
So it seems that v9 was trained on commit fe4454af900584467d21f4fd4fe951d29d9332a7. Crucially, I tried generating the dataset on that commit and reproduced the same bug mentioned in the original post. Can you please let me know if you can reproduce it?
So you generated a new dataset from fe4454af900584467d21f4fd4fe951d29d9332a7, evaluated the retriever model, and got a Recall@10 of 36 or 70?
What model checkpoint are you using?
> So you generated a new dataset from fe4454af900584467d21f4fd4fe951d29d9332a7, evaluated the retriever model, and got a Recall@10 of 36 or 70?
- When I evaluated on the dataset I generated from that commit, I got a Recall@10 of ~70.
- However, when I evaluated on the downloaded v9 dataset, I got a Recall@10 of 36, as expected.
I am using the retriever trained on the random split that used to be at https://huggingface.co/kaiyuy/leandojo-pl-ckpts.
That's expected. You should get ~36% only if using exactly the same dataset used for training the model.
I agree. However, I don't understand why I get ~70% when using a dataset generated from the same commit (fe4454af900584467d21f4fd4fe951d29d9332a7) used to train the model. Shouldn't I get ~36%? The generated dataset should be identical to the v9 dataset, right?
Any thoughts @yangky11? Thank you so much for your help so far!
I don't think it's necessarily going to be the same. If you run the current benchmark generation code twice, does it give you exactly the same dataset?
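One way to test this is to run the generation notebook twice and diff the two output directories file by file; a sketch (the directory names are hypothetical):

```python
import hashlib
from pathlib import Path

def dataset_digests(root):
    # Map each file's path (relative to the dataset root) to its SHA-256
    # digest, so two generation runs can be compared file by file.
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

# Usage sketch (hypothetical output directories of two runs):
#   run1 = dataset_digests("leandojo_benchmark_4_run1")
#   run2 = dataset_digests("leandojo_benchmark_4_run2")
#   changed = {k for k in run1.keys() | run2.keys() if run1.get(k) != run2.get(k)}
#   print("identical" if not changed else changed)
```

If the two runs differ (e.g. because the train/val/test split is randomized without a fixed seed), a dataset regenerated from the training commit would not match the one the checkpoint was actually trained on.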
Description
I believe that scripts/generate-benchmark-lean4.ipynb is buggy. I evaluated the ReProver premise retriever on a dataset I generated from Mathlib4; according to the paper, I should get a recall of around 34.7. When I generated the dataset on commit 29dcec074de168ac2bf835a77ef68bbe069194c5, my recall was absurdly high at around 70, consistent with the other commits I tried with this generation code. However, the downloaded v9 dataset at https://zenodo.org/records/10929138 gave a recall of 36, as expected. I am unsure which commit v9 was generated from, but regardless, I believe this recall disparity points to a bug in the generation code.
Detailed Steps to Reproduce the Behavior
Run scripts/generate-benchmark-lean4.ipynb on commit 29dcec074de168ac2bf835a77ef68bbe069194c5 to reproduce LeanDojo Benchmark 4 version v9.
Logs
Downloaded dataset:
Average R@1 = 13.057546943693088 %, R@10 = 35.968246863464174 %, MRR = 0.31899176730322715
Generated dataset:
Average R@1 = 28.444059618579143 %, R@10 = 69.65759521602048 %, MRR = 0.5975602059714626
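For reference, metrics like these are typically computed per query and then averaged. This is not ReProver's actual evaluation code, just a minimal illustration of how Recall@k and reciprocal rank are usually defined:

```python
def recall_at_k(retrieved, relevant, k):
    # Fraction of the ground-truth premises found in the top-k retrieved list.
    if not relevant:
        return 0.0
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def reciprocal_rank(retrieved, relevant):
    # 1 / rank of the first relevant premise; 0 if none was retrieved.
    for rank, premise in enumerate(retrieved, start=1):
        if premise in relevant:
            return 1.0 / rank
    return 0.0

retrieved = ["p3", "p1", "p7"]   # ranked retriever output for one theorem
relevant = ["p1", "p2"]          # ground-truth premises for that theorem
print(recall_at_k(retrieved, relevant, 10))  # 0.5
print(reciprocal_rank(retrieved, relevant))  # 0.5
```

MRR is then the mean of the per-query reciprocal ranks, and R@1 / R@10 are the averaged recalls at k=1 and k=10.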
Also, there are some differences between the two datasets. With this code, I get this final output:
Platform Information