rhubal opened 4 months ago
I agree the path count is weird; I was just about to submit a similar issue. If each path that contributes to an inferred path is counted, should the inferred path itself also be counted?
This issue appears to have been resolved in Test: https://ui.test.transltr.io/main/results?l=NGLY1%20(Human)&i=NCBIGene:55768&t=2&r=0&q=aae090b8-93cb-4d89-bf3a-1e97f35b7953
Closing
I think the original question (as I understand it) in this issue still stands: if each supporting path is counted, should the resulting inferred path also be counted? That seems like double-counting to me. So in the updated screenshot above, I would count the total as either one path (the inferred path) or two (the supporting paths).
Here's another example from a recent query on test:
Wouldn't it be more reasonable to call that 7 paths?
@khanspers, from the UI Team's perspective, the user likely will not understand the model well enough to make the same distinction re: inferred vs support paths. In this case they would see 9 rows of paths counted as 7, which I think would be much more confusing.
Personally, I don't think a user will see this as double counting, and even if some do, changing to count only the support paths or only the one-hops would be more confusing than the issue it solves, because the number listed and the number of rows would almost always be inconsistent. Additionally, we would have to bake in exceptions for results that have inferred paths with no support, or results that have only lookups.
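For illustration only, here's a rough sketch of what the two counting conventions could look like; the types and names are hypothetical, not the actual UI code:

```typescript
// Hypothetical shape of a result's path list; not the actual Translator UI types.
interface Path {
  inferred: boolean;      // true if the path was inferred rather than looked up
  supportPaths: Path[];   // supporting paths; empty for lookups (and for some inferred paths)
}

// Assumed current behavior: count every row the user sees,
// i.e. each top-level path plus each of its support paths.
function countRows(paths: Path[]): number {
  return paths.reduce((n, p) => n + 1 + p.supportPaths.length, 0);
}

// Alternative raised in this thread: count only the support paths,
// falling back to the top-level path when it has no support
// (lookups, or inferred paths that come back without support graphs).
function countSupportOnly(paths: Path[]): number {
  return paths.reduce(
    (n, p) => n + (p.supportPaths.length > 0 ? p.supportPaths.length : 1),
    0
  );
}
```

For example, if 2 inferred paths have 7 support paths between them, `countRows` gives 9 while `countSupportOnly` gives 7, which is exactly the kind of mismatch between the listed number and the visible rows described above.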
So, based on what Translator returns for Nelfinavir, is it then reasonable for a researcher to write in a paper that there are 6 paths that may decrease the activity of NGLY1? Ultimately, how would the scientific community perceive such a claim, when in fact there are only two inferred paths that may decrease the activity of NGLY1, each of which is putatively supported by two subgraphs?
Interestingly enough, I think your proposed situation proves my point, @codewarrior2000: a researcher directly lifting the idea of paths (which are an abstraction created by the UI for visualization and cognitive-load purposes) and forcing it into a format appropriate for a research paper doesn't make sense. The knowledge that's shared should be adapted for the format in which it's to be presented.
The same is true of the path counts: the UI should not be forced to adopt the conventions of the data model when adapting that data for visualization; it should be able to prioritize the needs of the user over the conventions of the model. That's my take anyway; I'm happy to be overruled if there's significant support for counting only inferred paths, only support paths, or some other solution.
I agree with @dnsmith124 that researchers are unlikely to directly use the number of paths from Translator in a publication or similar, and that's not really my concern. However, the way the paths are presented in the UI (as an abstraction) is confusing on its own, I think, even without knowing anything about the underlying model. The inferred path and its supporting paths are represented very clearly (shading, inset box, arrows, etc.) to indicate that the inferred path is a "summary" of the supporting graphs. The mouseover information box in the Inferred section label also explains this. My concern is that the path count appears inconsistent in the UI, which might affect the user's confidence in the tool in general.
I admit that I don't represent the typical user, and I would be interested in hearing what SMEs and other actual users have said. Maybe this is a non-issue.
Lastly, I don't understand what's meant by "inferred paths with no support" (does that ever happen? maybe I'm misunderstanding this), or "results that have only lookups" (as far as I can tell lookup paths are counted unambiguously and are not confusing).
@khanspers Totally agree, I think SME & user input is definitely needed here to determine the best path forward! 😄
To address the last bit: we do have a lot of paths that come back marked as "inferred" but don't actually have support graphs attached. I believe they're fairly common in BTE results, but they could be coming from other sources as well. Right now the UI displays them as lookups, but we will likely change this in the future.
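As a rough sketch of that display rule (again, hypothetical names and types, not the actual UI code):

```typescript
// Hypothetical display rule for a single path row.
type DisplayKind = 'lookup' | 'inferred';

interface RawPath {
  inferred: boolean;
  supportPaths: unknown[];
}

// Assumed current behavior: a path marked "inferred" that arrives with no
// support graphs attached is rendered as if it were a lookup.
function displayKind(path: RawPath): DisplayKind {
  return path.inferred && path.supportPaths.length > 0 ? 'inferred' : 'lookup';
}
```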
Anyway, I'm really enjoying the conversation here and appreciate your perspective; the counts being confusing as they are now isn't something I would have considered.
…from the UI Team's perspective the user likely will not have an understanding of the model to the point of making the same distinction re: inferred vs support paths.
If that was your point, then how would "…knowledge that's shared should be adapted for the format in which it's to be presented" apply? By your original point, what's to stop a user from publishing a screenshot of the displayed paths and path count? When it comes to what users can and will do, "never say never" is a golden rule.
@codewarrior2000 I think ideally the user would make a statement based on the knowledge that the paths present, not simply screenshot a path as evidence for their statement. But as you say, "never say never" is the right approach!
On the subject of screenshots, here is a side-by-side comparison of path depictions between Translator and another Translator-like app, bioKDE (https://insilicom.com/), that Chris Mungall shared this week:
TRANSLATOR:
BioKDE:
The claim is that there are six paths (image 01a), but really there are four paths (01b) that can be grouped. Have users commented?