So far the differential test on uncertainty metrics hasn't proved very useful. I need to show that this is due to the test itself and not to the uncertainty metric used.
Which uncertainty metrics:
Aim for "reference-free" metrics where possible
[x] Label uncertainty (scArches uncertainty)
[x] Cosine distance to predicted expression profiles
[x] Inference_posterior_distance: distance of sampled z positions to the inferred (mean) z position (~ uncertainty around the cell's position in the latent space)
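The last two metrics reduce to simple array operations once predictions and posterior samples are available. A minimal numpy sketch on synthetic data (in the real pipeline these arrays would come from the scArches/scVI model; `x_pred`, `z_mean`, and `z_samples` are placeholder names, not model API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes, n_latent, n_samples = 5, 20, 10, 30

# Placeholder arrays standing in for model outputs.
x_obs = rng.poisson(2.0, size=(n_cells, n_genes)).astype(float)  # observed expression
x_pred = x_obs + rng.normal(0, 0.5, size=(n_cells, n_genes))     # predicted expression
z_mean = rng.normal(size=(n_cells, n_latent))                    # inferred (mean) z position
z_samples = z_mean + rng.normal(0, 0.3, size=(n_samples, n_cells, n_latent))  # sampled z

# Cosine distance between observed and predicted expression profiles, per cell.
num = (x_obs * x_pred).sum(axis=1)
denom = np.linalg.norm(x_obs, axis=1) * np.linalg.norm(x_pred, axis=1)
cosine_dist = 1.0 - num / denom

# Inference posterior distance: mean Euclidean distance of the sampled z
# positions to the inferred (mean) z position, per cell.
posterior_dist = np.linalg.norm(z_samples - z_mean, axis=2).mean(axis=0)

print(cosine_dist.shape, posterior_dist.shape)  # both (n_cells,)
```

Both metrics return one value per cell, so they slot directly into the per-query-cell output described below.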
Make a simplified/refactored version of https://github.com/emdann/query2reference_uncertainty/blob/master/q2r_uncertainty/uncertainty_metrics.py
Input:
- vae (or celltypist model or KNN classifier)
- X_scVI in obsm
Return:
- values of uncertainty for query cells (not in place)
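One possible shape for the refactored interface, sketched with a stand-in metric (the function name, signature, and the k-NN distance metric are hypothetical; the real version would accept the trained vae / classifier and dispatch to the metrics listed above):

```python
import numpy as np


def query_uncertainty(z_query: np.ndarray, z_ref: np.ndarray, k: int = 15) -> np.ndarray:
    """Return one uncertainty value per query cell as a NEW array
    (the inputs are not modified in place).

    Stand-in metric: mean Euclidean distance to the k nearest
    reference cells in latent space."""
    # Pairwise distances, query x reference, in latent space.
    d = np.linalg.norm(z_query[:, None, :] - z_ref[None, :, :], axis=2)
    # Mean distance to the k nearest reference neighbours.
    knn = np.sort(d, axis=1)[:, :k]
    return knn.mean(axis=1)


rng = np.random.default_rng(1)
z_ref = rng.normal(size=(100, 10))   # e.g. the reference latent space
z_query = rng.normal(size=(8, 10))   # e.g. adata_query.obsm["X_scVI"]
unc = query_uncertainty(z_query, z_ref)
print(unc.shape)  # (8,)
```

Returning a fresh array (rather than writing into the AnnData object) keeps the "not in place" contract and lets callers decide where to store the values.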