Hi, great work on the dynverse package. I'm really impressed by how everything fits together.
I am benchmarking pseudotime methods right now on a multifurcating tree. Often, one method resolves all of the branches in the scaffold, while another method merges some of those branches and does not resolve the topology. When calculating the F1-score of cell assignments, however, these solutions score very similarly.
I stepped through the code that calculates F1 and found that, unlike the one-to-one class matching used by F1 in classification, dyneval matches branches between the predicted and expected topologies in a one-to-many fashion. As a result, the score is not penalized as aggressively when an entire branch is missing.
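For concreteness, here is a minimal sketch of the kind of one-to-many matching I mean (in Python rather than R, with made-up cell assignments; the names and details are mine, not dyneval's): each branch is matched to its best counterpart by Jaccard overlap independently, and the score is the harmonic mean of recovery and relevance.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of cell ids."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def f1_branches(reference, prediction):
    """One-to-many branch F1 sketch: every reference branch takes its
    best-matching predicted branch (and vice versa), so two reference
    branches may both map onto a single merged predicted branch."""
    recovery = sum(max(jaccard(r, p) for p in prediction.values())
                   for r in reference.values()) / len(reference)
    relevance = sum(max(jaccard(p, r) for r in reference.values())
                    for p in prediction.values()) / len(prediction)
    return 2 * recovery * relevance / (recovery + relevance)

# Reference topology has three branches; "merged" collapses b and c.
reference = {"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
resolved  = {"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}
merged    = {"A": [1, 2, 3], "B": [4, 5, 6, 7, 8, 9]}

print(f1_branches(reference, resolved))  # 1.0
print(f1_branches(reference, merged))    # ~0.71, not heavily penalized
```

With this matching, the merged prediction still scores about 0.71 against a perfect 1.0, which mirrors the behaviour I'm seeing: losing a whole branch costs far less than a one-to-one matching would charge.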
What was the rationale behind this approach, and is there a metric that I can use or adjustments I can make that take into account cell assignments and topology jointly?
Many thanks, AL