@M0hammadL we don't need f(0) to be 0, right? So we don't need to subtract c0 here? https://github.com/facebookresearch/CPA/blob/382ff641c588820a453d801e5d0e5bb56642f282/compert/model.py#L135
No, I don't think so. That was to enforce that control cells in CPA have zero response, but you can leave this out.
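For reference, the trick being discussed looks roughly like this. A minimal sketch only; `beta` and `bias` are hypothetical per-drug parameters, not CPA's actual names:

```python
import torch

def dose_response(dose, beta, bias):
    # sigmoid dose-response curve
    f = torch.sigmoid(beta * dose + bias)
    # subtracting the value at dose = 0 enforces f(0) = 0,
    # i.e. control cells (dose 0) get exactly zero response
    return f - torch.sigmoid(bias)
```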
Here is where the sigmoid (or log-sigmoid) is applied to the dose value and multiplied by the embedding: https://github.com/facebookresearch/CPA/blob/382ff641c588820a453d801e5d0e5bb56642f282/compert/model.py#L109
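In code, that pattern looks something like the following sketch (shapes and names are assumptions for illustration, not the actual CPA variables):

```python
import torch

def scaled_drug_embedding(doses, drug_emb, beta, bias):
    # doses: (batch, n_drugs); drug_emb: (n_drugs, emb_dim)
    # per-drug sigmoid scaling of the raw dose values
    scaled = torch.sigmoid(doses * beta + bias)
    # weight each drug's embedding by its scaled dose and sum over drugs
    return scaled @ drug_emb  # (batch, emb_dim)
```

For the log-sigmoid variant, `torch.sigmoid` would simply be swapped for `torch.nn.functional.logsigmoid`.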
And in the case of mlp (multiple MLP layers, for more complicated dose-response functions):
https://github.com/facebookresearch/CPA/blob/382ff641c588820a453d801e5d0e5bb56642f282/compert/model.py#L213
I used torch.nn.ModuleList(), and it implements a separate MLP for each covariate.
Here you can set the doser_type (in our case it would be cont_cov_encoder), and it can be mlp, sigmoid, logsigmoid, or linear; linear does nothing to the dose and just multiplies it by the embedding. A sketch of the whole pattern follows below.
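Put together, the pattern is roughly the following. This is a sketch under assumed names (the class, arguments, and shapes are hypothetical, not CPA's actual API):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Dosers(nn.Module):
    """One small MLP per drug; `doser_type` picks the scaling function."""

    def __init__(self, n_drugs, hidden=16, doser_type="mlp"):
        super().__init__()
        self.doser_type = doser_type
        # separate MLP for each drug/covariate, held in a ModuleList
        self.dosers = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_drugs)
        )

    def forward(self, doses):
        # doses: (batch, n_drugs)
        if self.doser_type == "mlp":
            # route each drug's dose column through its own MLP
            cols = [mlp(doses[:, i:i + 1]) for i, mlp in enumerate(self.dosers)]
            return torch.cat(cols, dim=1)
        if self.doser_type == "sigmoid":
            return torch.sigmoid(doses)
        if self.doser_type == "logsigmoid":
            return F.logsigmoid(doses)
        # "linear": pass the raw dose through unchanged;
        # it is then multiplied by the embedding downstream
        return doses
```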
Simply copy the code and cite it, like here: https://github.com/theislab/scarches/blob/63a7c2b35a01e55fe7e1dd871add459a86cd27fb/scarches/models/base/_base.py#L16
Reimplementing these things is a pain; I wrote them anyway, just make sure to cite it.