WayScience / phenotypic_profiling

Machine learning for predicting 15 single-cell phenotypes from cell morphology profiles

Refactor validate module #33

Closed: roshankern closed this pull request 1 year ago

roshankern commented 1 year ago

This PR is ready for review!

In this PR, the validate module is refactored. The Cell Health classification profiles (phenotypic class predictions averaged across perturbations) are now derived in cell-health-data and simply loaded into this repo. Correlations between these profiles and the Cell Health labels are derived for all model types and feature types, both across all cell lines and per cell line, using the Pearson and CCC (concordance correlation coefficient) correlation methods.
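For reviewers, here is a rough sketch of how these correlations could be derived for each (phenotypic class, Cell Health label) pair. CCC is written out since, as far as I know, it is not available in scipy.stats; the frame and column names are illustrative assumptions, not the module's actual API:

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient (CCC) between two vectors."""
    cov = np.cov(x, y, ddof=0)[0, 1]
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def correlate_profiles(profiles: pd.DataFrame, labels: pd.DataFrame) -> pd.DataFrame:
    """Correlate every phenotypic class column against every Cell Health label.

    Assumes profiles and labels are indexed by perturbation and aligned.
    """
    rows = []
    for pheno in profiles.columns:
        for label in labels.columns:
            x = profiles[pheno].to_numpy()
            y = labels[label].to_numpy()
            rows.append({
                "phenotypic_class": pheno,
                "cell_health_label": label,
                "pearson": pearsonr(x, y)[0],
                "ccc": ccc(x, y),
            })
    return pd.DataFrame(rows)
```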

These correlations are also briefly viewed in this new version of the validate module.

There are about 475 lines to review; sorry for the longer PR 😿


roshankern commented 1 year ago

Hmmmm, I am really struggling to interpret the Cell Health classification profile correlations, on two fronts:

1) Understanding why the shuffled baseline models have such high correlations
2) Judging the model's effectiveness in being applied to the Cell Health dataset (from the correlations)

For 1), I figured I would look at the raw classification numbers (at https://github.com/roshankern/phenotypic_profiling_model/blob/add-classifications-preview/5.validate_model/preview_classifications.ipynb), but I am also not sure what insight these can give. The main difference I noticed is that the final models assign a much higher probability to interphase. This seems reasonable, since most of the nuclei I visually checked in the Cell Health data (from the IDR Stream previewer) looked like interphase to me. Do you have any thoughts on how to approach 1) above?
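To quantify that interphase difference, something like the following could summarize each model's average class probabilities side by side (a sketch; the function and variable names are made up, and it assumes each model's per-cell class probabilities live in a DataFrame with one column per phenotypic class):

```python
import pandas as pd

def mean_class_probabilities(final_probs: pd.DataFrame,
                             shuffled_probs: pd.DataFrame) -> pd.DataFrame:
    """Compare the average predicted probability per phenotypic class
    between the final and shuffled_baseline models."""
    summary = pd.DataFrame({
        "final": final_probs.mean(),
        "shuffled_baseline": shuffled_probs.mean(),
    })
    summary["difference"] = summary["final"] - summary["shuffled_baseline"]
    # classes the final model favors most (e.g. interphase) sort to the top
    return summary.sort_values("difference", ascending=False)
```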

For 2), it is clear that some of the correlations we would expect are there (e.g., the apoptosis classification profile and cc_percent_dead). But these correlations can vary drastically across cell lines and model types, which makes it difficult for me to say with confidence that they demonstrate the model's ability to extract useful classification information from the Cell Health data. Also, in some cases the correlation is in the opposite direction of what we would expect (such as a negative correlation between the apoptosis classification profile and cc_percent_dead). I'm thinking it might be worth coming up with a way to judge the correlations en masse, although the idea may be a bit out of the scope of the project. This is what I am imagining we could do:

1) Create an "expected correlations matrix". Here we could manually annotate the correlations we would expect to see and which direction we would expect them to go in (although I am not sure it would be viable to include magnitude, just direction). For example, we could annotate the apoptosis classification profile as positively correlated with cc_percent_dead (see the sketch below).

I think this idea may have large scope creep and be unnecessary for our purposes with the model, but I am not sure how else to holistically review the correlation performance across cell lines and models. What do you think? Is there a better way to answer 2)?
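To make the expected-correlations-matrix idea concrete, here is a rough sketch of how the annotation and scoring could work (the specific annotations and names below are hypothetical examples, nothing decided):

```python
import numpy as np
import pandas as pd

# Hypothetical hand annotations: +1 = expect a positive correlation,
# -1 = expect a negative one, 0 = no expectation for that pair.
expected = pd.DataFrame(
    0,
    index=["Apoptosis", "Interphase"],  # phenotypic classes
    columns=["cc_percent_dead"],        # Cell Health labels
)
expected.loc["Apoptosis", "cc_percent_dead"] = 1    # more apoptosis -> more dead cells
expected.loc["Interphase", "cc_percent_dead"] = -1  # example annotation only

def sign_agreement(observed: pd.DataFrame, expected: pd.DataFrame) -> float:
    """Fraction of annotated (class, label) pairs whose observed
    correlation matches the expected direction."""
    annotated = expected != 0
    agree = np.sign(observed.reindex_like(expected)) == expected
    return (agree & annotated).to_numpy().sum() / annotated.to_numpy().sum()
```

A single agreement score per (model, feature type, cell line) combination would make it possible to compare them all at a glance, which is the en masse judgment described above.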

roshankern commented 1 year ago

A preview_CH_correlation_differences.ipynb notebook has been added in https://github.com/WayScience/phenotypic_profiling_model/pull/33/commits/c18edd2ad2eba99673f433188bdc11ead08f6517 that makes the differences between the final and shuffled_baseline models clearer.
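For anyone reviewing without opening the notebook, the difference computation boils down to something like this (a sketch; the tidy-frame column names are assumptions matching the correlation sketch earlier in the thread):

```python
import pandas as pd

def correlation_differences(corr_df: pd.DataFrame, value: str = "pearson") -> pd.DataFrame:
    """Pivot a tidy correlations frame (one row per model / class / label)
    and take final minus shuffled_baseline differences."""
    wide = corr_df.pivot_table(
        index=["phenotypic_class", "cell_health_label"],
        columns="model",  # "final" or "shuffled_baseline"
        values=value,
    )
    wide["difference"] = wide["final"] - wide["shuffled_baseline"]
    # large positive differences flag pairs where the final model adds signal
    return wide.sort_values("difference", ascending=False)
```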

The correlation differences for the all / pearson / CP_and_DP view (i.e., across all cell lines, using Pearson correlation, with the CP_and_DP feature type) seem to be what we would expect if the final model is performing better than the shuffled baseline model. This is supported by:

For now we will not pursue the validation_score idea mentioned above in https://github.com/WayScience/phenotypic_profiling_model/pull/33#issuecomment-1614043372.