dynverse / dynmethods

A collection of 50+ trajectory inference methods within a common interface 📥📤
https://dynverse.org

FateID #81

Closed rcannood closed 5 years ago

rcannood commented 6 years ago

Hello @dgrun

This issue is for discussing the wrapper for your trajectory inference method, FateID, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container (https://github.com/dynverse/dynmethods/tree/master/containers/fateid). The way this container is structured is described in this vignette (https://dynverse.github.io/dynwrap/articles/create_ti_method_docker.html).

We are creating this issue to ensure your method is being evaluated in the way it was designed for. The checklist below contains some important questions for you to have a look at.

The most convenient way for you to test and adapt the wrapper is to install dyno (https://github.com/dynverse/dyno), download and modify these files (https://github.com/dynverse/dynmethods/tree/master/containers/fateid), and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in the vignette linked above. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards, @rcannood and @zouter

dgrun commented 6 years ago

Dear Robrecht,

I don't think it is a good idea to include FateID in the trajectory benchmarking, since it is not designed for the inference of complex trajectories. The goal of this method is a probabilistic quantification of fate bias. Although the package offers limited support for trajectory inference by principal curve fitting in a dimensionality-reduction representation, it is not designed for resolving complex topologies. In the future I will work on methods that exploit fate bias in a more sophisticated way for the purpose of trajectory inference.

A current shortcoming is that trajectories are derived by this rather simple procedure in a dimensionality-reduction representation. In the future I will try to include the FateID probabilities as prior information in the StemID analysis. That would then be more appropriate for benchmarking...

I saw you also included STEMNET, which has a very similar objective to FateID. Do you test the two methods in the same way as all other methods, or are they subject to separate benchmarking?

Thanks!

Best wishes, Dominic


rcannood commented 6 years ago

Hello Dominic,

There are actually quite a few methods like this that are already implemented in dynbenchmark, namely FateID, GPfates, GrandPrix, MFA, SCOUP, and STEMNET. It would indeed be interesting to perform a more in-depth analysis between only these six methods. For now, they are being evaluated just like all the other trajectory inference methods.

Kind regards, Robrecht

dgrun commented 6 years ago

Dear Robrecht,

I'm not sure it is a fair comparison to benchmark these methods in the same way as the other methods, which are more suitable for inferring complex lineage trees. The objective of these methods is quite different, at least for STEMNET and FateID...

Anyway, I would like to let you know that I updated FateID on CRAN (version 0.1.3). I implemented an adaptive learning scheme in the fateBias function (parameter adapt=TRUE). If you test the method, it would be useful to test it in this mode.
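For reference, a minimal sketch of calling fateBias in this mode. The input objects and target-cluster numbers here are placeholders, and the surrounding workflow (filtering, clustering) is assumed to follow the FateID vignette; only the adapt=TRUE parameter is the point of the example:

```r
library(FateID)

# Assumed inputs (placeholders):
#   x   -- filtered expression data frame (genes x cells)
#   y   -- cluster assignment for each cell
#   tar -- numbers of the target (fate) clusters
fb <- fateBias(x, y, tar = c(6, 9), adapt = TRUE)  # adaptive learning scheme (>= 0.1.3)

head(fb$probs)  # per-cell fate-bias probabilities
```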

Thanks a lot!

Best wishes, Dominic


rcannood commented 5 years ago

Hey Dominic,

Sorry, I forgot to follow up on this issue. STEMNET, FateID and others were not removed from our benchmark because we believe they are also a relevant category of methods to compare against.

The large overview figure (https://github.com/dynverse/dynbenchmark_results/blob/master/08-summary/results_suppfig.pdf) shows that these methods work well, except in terms of scalability.

I'm closing this issue for now. If you have made changes to RaceID / StemID or FateID that you believe might improve performance on one of the criteria, feel free to contact us again.

Kind regards, Robrecht

dgrun commented 5 years ago

Dear Robrecht,

Thanks for letting me know. FateID seems to work well indeed.

I do have a question regarding RaceID/StemID. Did you run it with nmode=TRUE in the end?

In case you haven't done so yet, I would like to ask you to rerun it with the RaceID parameter metric="logpearson". With this setting, RaceID first performs a log-transformation before the cell-cell transcriptome correlation is computed. If cell-type differences are driven more by many lowly expressed genes than by a few highly expressed genes, this metric gives much better results. Could you let me know if this improves the predictions?
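As a sketch, the setting lands in the distance-computation step of the RaceID workflow. The input matrix is a placeholder and the surrounding steps are assumed from the standard RaceID pipeline; only metric="logpearson" is the requested change:

```r
library(RaceID)

# x is assumed to be a raw count matrix (genes x cells)
sc <- SCseq(x)
sc <- filterdata(sc)
sc <- compdist(sc, metric = "logpearson")  # log-transform, then Pearson correlation
sc <- clustexp(sc)
```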

Many thanks in advance!

Best wishes, Dominic
