kjappelbaum / gpt3forchem


how do we tie all threads together? #11

Open kjappelbaum opened 2 years ago

kjappelbaum commented 2 years ago

I feel we're now going a bit all over the place, and it will be hard to structure a coherent study. The initial "pitch" notes were structured around "forward classification" and "inverse design". Now we have also added regression.

In total, for the "forward" predictions we have:

For the "inverse" route we currently only have the polymers.

kjappelbaum commented 2 years ago

I feel we do not need to have the best model for any given application; we do not tune enough for that (and it makes no sense to spend resources on it). The key message is probably that LLMs are so powerful that they can perform competitively with strong baselines on various chemistry and materials science applications, without fine-tuning of representations or models.

We can probably show this in a really short, Andrew White-style paper where we just:

kjappelbaum commented 2 years ago

That is, one could organize it around one figure illustrating the results for classification, another for regression, and a third for inverse design.

pschwllr commented 2 years ago

This is probably the most important point we should discuss.

We could initially submit a 4-page paper to:

Both workshops require the same NeurIPS-style format for the submission.