Closed lparsons closed 3 years ago
I think 'experiment' maps to 'study' in terms of the current models.
I have some questions and concerns related to this item. Let me start with questions related to this implementation, and then I will address my overarching concern.
I'm somewhat concerned here about code redundancy and complexity. My overall design idea, based on the project docs so far, is to use a fully featured search interface to combine and filter data, which could then be used to generate pages such as the one described here (including the joins). The output would be a table with specifically selected fields to display. And as I described via Slack, the search output could be used to generate tables/columns of data that feed into analyses, visualizations, or, as in this case, a page displaying data associated with a particular study.
That's not to say we couldn't have a separate script to generate a page like this; it's just that it could use the search functionality to collect the data for display and fill in any other metadata to customize the page. What's more, functionality that lets users exclude outliers or tweak the contents of a search results page before sending the data into an analysis or visualization could also be employed here. In the future, that could include selecting rows, rearranging rows/columns, or any other search results interface improvements we later decide to add. But if we develop table/data pages independently, every such improvement would require custom edits to every page we develop.
My point is: I suggest we develop the search and search results pages first, and re-use those internal mechanisms when developing pages like the one described in this issue.
I threw together a data flow design to conceptualize how I envision a search "method" (separate from web pages) interacting with pages in the web interface. It would execute searches, collect and organize the gathered data (via joins), and return the data as a JSON object. It might take some doing to get it to perform quickly, but I have done similar things with Perl & MySQL by specifying searchable fields from various tables, limiting joins, indexing database fields, and implementing various caching methods. I think it is viable functionality that will make Tracebase very versatile and the code much simpler. The JSON that's passed can carry the search specifics, sorting options, filtering options, result ranges, fields to display/include, the destination page the data is going to, whether the user wants the option to confirm/filter the data, etc.
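To make the idea concrete, here is a minimal sketch of what such a JSON payload might look like. This is purely illustrative: none of these keys or values are an existing Tracebase API, and the field names are made up for the example.

```python
import json

# Hypothetical request the search "method" could accept. Every key and
# value here is an assumption for illustration, not an agreed-upon schema.
search_request = {
    "search": {"study": "example_study"},           # search specifics
    "fields": ["tissue", "compound", "normalized_abundance"],
    "sort": [["tissue", "asc"]],                    # sorting options
    "filters": [["normalized_abundance", ">", 0]],  # filtering options
    "range": {"offset": 0, "limit": 100},           # result ranges (paging)
    "destination": "study_detail_page",             # page the data is going to
    "confirm": True,                                # let the user confirm/filter
}

# The method would run the search (doing any joins internally) and hand
# back the results, plus the request it fulfilled, as one JSON string.
search_response = json.dumps({
    "request": search_request,
    "results": [
        {"tissue": "soleus", "compound": "lactate",
         "normalized_abundance": 0.12},
    ],
})
```

Because the request travels with the response, a receiving page (or a later analysis/visualization step) can re-issue or tweak the same search without custom glue code per page.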
Generate a "consolidated table" for the peaks in a given Experiment. Many of these values should be part of the models (see issues #41, #42, #43). An example table provided by the lab is here: tissueDataProcessed033120_withSoleus. The format of the table should be as follows:
serum_infusate_abundance
- the 'normalized_abundance' for the infusate compound (tracer) in the serum sample from this mouse. If 13C-lactate was infused, this is the value of 'normalized_abundance' for serum lactate.

normalized_fraction
- normalized_abundance / serum_infusate_abundance; the lab usually calls this 'normalized labeling'
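The normalized_fraction column above is a straightforward ratio; a minimal sketch of the computation (the function name and example numbers are made up for illustration):

```python
def normalized_fraction(normalized_abundance, serum_infusate_abundance):
    """normalized_abundance / serum_infusate_abundance --
    what the lab usually calls 'normalized labeling'."""
    return normalized_abundance / serum_infusate_abundance

# e.g. a tissue compound's normalized_abundance of 0.25 against a serum
# infusate (tracer) normalized_abundance of 0.5 (illustrative values):
print(normalized_fraction(0.25, 0.5))  # -> 0.5
```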