MxMstrmn closed this 2 years ago.
I've started a test run by setting experiments_per_job: 2 and launching 2 experiments (one finetuned, one not) to check if everything works as advertised, and it did.
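For reference, the test-run setting mentioned above would live in the sweep config. A minimal sketch, assuming a seml-style YAML layout; the surrounding keys are illustrative, not the repo's actual config:

```yaml
# Hypothetical sketch of the relevant config fragment; only
# experiments_per_job is taken from the comment above.
slurm:
  experiments_per_job: 2   # run both experiments (finetuned / not finetuned) in one job
```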
@siboehm, I added two notebooks for convenient comparison between pretrained and non-pretrained models on sciplex. The model hashes come from analyze_sciplex_finetune_num_genes.ipynb, see bb8f464.
I am a bit confused by the results in CCPA_prediction_analysis.ipynb, since they do not match the evaluation we ran during training. However, these are the results we are looking for, and they match the observations I shared with you on another split yesterday.
If you have time, it would be great if you could check out this PR and train.py: the results there do not seem to match what we visualized before (split_ood_finetuning, at least for append_ae_layer: true). Other than #96, I think this looks good to me.
@siboehm, this PR got quite large, but it contains the code I used for the experiments plus the utils we need in the notebooks for the figures. It would be great if you could skim through it quickly.
I will provide another PR with the figure notebooks. Once everything is in main, I would suggest checking that the experiments run through and the figure notebooks work. Then we can publish a streamlined version of chemCPA (e.g. it does not have to contain all our analysis notebooks, but maybe a script to run it). What do you think?
Ref #76.
This is WIP.
I added the best performing models (timestamp: noon Jan 24) for:
- vanilla
- rdkit
- grover_base
- MPNN
- jtvae
- seq2seq
@siboehm, do you think we should sweep over a few different seeds (model.additional_params.seed) to get a feeling for the model performances?
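If we do sweep seeds, one option is to expand a base config into one config per seed before submission. A minimal sketch, assuming a plain nested-dict config; configs_for_seeds and the dict layout are hypothetical names for illustration, not part of the repo:

```python
import copy

def configs_for_seeds(base_config, seeds):
    """Return one config per seed, overriding model.additional_params.seed.

    Deep-copies the base config so the per-seed overrides do not alias
    each other (or mutate the original).
    """
    configs = []
    for seed in seeds:
        cfg = copy.deepcopy(base_config)
        cfg["model"]["additional_params"]["seed"] = seed
        configs.append(cfg)
    return configs

# Example: expand a base config into five seeded runs.
base_config = {"model": {"additional_params": {"seed": 0}}}
configs = configs_for_seeds(base_config, [0, 1, 2, 3, 4])
```

Each resulting config could then be submitted as its own job; aggregating metrics across seeds would give us a variance estimate per embedding.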