thomas0809 / MolScribe

Robust Molecular Structure Recognition with Image-to-Graph Generation

Correct metrics to compare with the paper? #18

Closed rytheranderson closed 10 months ago

rytheranderson commented 10 months ago

When retraining, what are the correct metrics to compare with the paper? After retraining the full model (1m680k), the best model's "post_smiles" metric closely matches Table 2 (with some variation, since I used a smaller batch size), and the "post_tanimoto" metric similarly matches Table 2 in the SI. I want to make sure I have the correct perspective on the retraining process.

If you have the time, a brief description of each metric in:

"tanimoto"
"canon_smiles
"graph"
"chiral"
"post_smiles"
"post_graph"
"post_chiral"
"post_tanimoto"
"graph_smiles"
"graph_graph"
"graph_chiral"
"graph_tanimoto"

would be greatly appreciated. I am familiar with Tanimoto similarity, SMILES, etc., but I am trying to gain a better understanding of how they are implemented here, and how the different score variants (no prefix vs. "post_" vs. "graph_") can be interpreted to better understand the model.

Thanks in advance, -Ryther

thomas0809 commented 10 months ago

Hi,

Thanks for your question! Please use "post_smiles" and "post_tanimoto" for evaluation.

"post_smiles" is derived by the model generated SMILES and postprocessing based on the predicted atoms and bonds.

"graph_smiles" is constructed entirely by the predicted atoms and bonds to form the graph.

In our experiments, we observe that "post_smiles" generally works better. Therefore, we take it as the model's final prediction.
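For reference, a minimal sketch of how the SMILES exact-match and Tanimoto metrics can be computed with RDKit; the Morgan fingerprint radius and the scoring of unparsable predictions are illustrative assumptions, not necessarily the exact settings used in our evaluation:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def canonical(smiles):
    """Canonical SMILES, or None if RDKit cannot parse the string."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def exact_match(pred, gold):
    """1.0 if prediction and gold agree after canonicalization, else 0.0."""
    cp, cg = canonical(pred), canonical(gold)
    return float(cp is not None and cp == cg)

def tanimoto(pred, gold):
    """Tanimoto similarity of Morgan fingerprints; 0.0 for invalid predictions."""
    mp, mg = Chem.MolFromSmiles(pred), Chem.MolFromSmiles(gold)
    if mp is None or mg is None:
        return 0.0
    fp_p = AllChem.GetMorganFingerprintAsBitVect(mp, 2)
    fp_g = AllChem.GetMorganFingerprintAsBitVect(mg, 2)
    return DataStructs.TanimotoSimilarity(fp_p, fp_g)

print(exact_match("C(C)=O", "CC=O"))  # 1.0 (same molecule, different SMILES)
print(tanimoto("CC=O", "CCO"))        # value in [0, 1]
```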

rytheranderson commented 10 months ago

Aha!

"post_smiles" is derived by the model generated SMILES and postprocessing based on the predicted atoms and bonds.

This is the piece I was missing, thanks for the quick and concise explanation.

rytheranderson commented 10 months ago

Closing, as my question is resolved.