txWang / BERMUDA

BERMUDA (Batch Effect ReMoval Using Deep Autoencoders) is a novel transfer-learning-based method for batch correction in scRNA-seq data.

help in running BERMUDA #5

Open nnks-dev opened 4 years ago

nnks-dev commented 4 years ago

I'm not sure in what order the scripts should be run. Also, which parts of the code should I change if I want to run my own data through BERMUDA? This may be a silly question, but I would appreciate an answer.

txWang commented 4 years ago

Hi, you can modify "main_pancreas.py" or "main_pbmc.py" to suit your own dataset, provided the data has already been pre-processed with Seurat (normalization and cluster identification) and MetaNeighbor (cluster similarity). You can use "R/pre_processing.R" for the Seurat and MetaNeighbor pre-processing steps.
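For reference, the changes are typically confined to the top of the script, where the input files and a few parameters are defined. Below is a minimal, hypothetical sketch; the variable names (`dataset_file_list`, `cluster_similarity_file`, `similarity_threshold`) are illustrative assumptions, not necessarily the exact identifiers used in the repository, so check the actual script for the names it expects.

```python
# Hypothetical excerpt of a modified "main_pancreas.py"-style configuration.
data_folder = 'my_data/'

# CSV files produced by R/pre_processing.R (Seurat normalization + cluster
# labels), one file per batch.
dataset_file_list = [
    data_folder + 'batch1_seurat.csv',
    data_folder + 'batch2_seurat.csv',
]

# Cluster-similarity matrix produced by MetaNeighbor for all clusters
# across the batches listed above.
cluster_similarity_file = data_folder + 'metaneighbor_similarity.csv'

# Threshold on the MetaNeighbor score above which two clusters are treated
# as "similar" and aligned during training.
similarity_threshold = 0.90
```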

nnks-dev commented 4 years ago

Thank you for answering my question. I have a follow-up question: can I do the same if I want to apply it to more than three datasets?

txWang commented 4 years ago

Hi, sure. BERMUDA supports batch correction across multiple datasets. You just need to modify the code accordingly and feed the pre-processed datasets, together with the MetaNeighbor similarity matrix, into the algorithm.
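Continuing the hypothetical configuration sketch from above (variable and file names are again assumptions for illustration), extending to more batches mainly means listing more pre-processed files and supplying one similarity matrix that covers the clusters of every batch:

```python
# Hypothetical extension to five batches; names are illustrative only.
data_folder = 'my_data/'

# One pre-processed (Seurat) CSV per batch; add as many as needed.
dataset_file_list = [data_folder + f'batch{i}_seurat.csv' for i in range(1, 6)]

# A single MetaNeighbor similarity matrix whose rows/columns cover the
# clusters of all batches above, so similar cluster pairs can be found
# across every batch combination.
cluster_similarity_file = data_folder + 'metaneighbor_all_batches.csv'
```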

nnks-dev commented 4 years ago

Thank you for answering. I'm sorry to keep asking questions, but could you tell me how to obtain the MMD value?

nnks-dev commented 4 years ago

What is the difference between mmd and loss_transfer?

txWang commented 4 years ago

Hi, in short, "loss_transfer" is the MMD loss computed between pairs of similar clusters across different batches.
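To make the relationship concrete, here is a minimal sketch of a single-Gaussian-kernel MMD between two sets of embeddings. This is an illustrative assumption of how such a term can be computed, not the repository's exact implementation (which may use a multi-kernel variant); the function name and the `similar_cross_batch_pairs` idea in the comment are hypothetical.

```python
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """Squared MMD between samples x (n, d) and y (m, d) with a Gaussian kernel.

    x and y would be, e.g., autoencoder code vectors of cells from two
    clusters that MetaNeighbor marked as similar in different batches.
    """
    def kernel(a, b):
        # Pairwise squared Euclidean distances, then Gaussian kernel values.
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Conceptually, loss_transfer is then a (similarity-weighted) sum of such
# MMD terms over all similar cluster pairs that come from different batches:
# loss_transfer = sum(similarity[i, j] * gaussian_mmd(code_i, code_j)
#                     for (i, j) in similar_cross_batch_pairs)
```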