neherlab / covid19_scenarios

Models of COVID-19 outbreak trajectories and hospital demand
https://covid19-scenarios.org
MIT License

❗[IMPORTANT] Import scenarios #11

Closed pinanunes closed 4 years ago

pinanunes commented 4 years ago

I believe you might have this on the roadmap already, since there is an option to export. It would be great to name and compare scenarios with simple KPIs.

rneher commented 4 years ago

Yes, import is on the map. But what is KPI?

pinanunes commented 4 years ago

Sorry. Key Performance Indicators, i.e. the main outcomes of the scenarios, such as the total/peak figures.

ivan-aksamentov commented 4 years ago

Hello @pinanunes ,

This is something currently in the works on the branch feat/compare-rebased-2. As the name suggests, the branch has already had a number of rebases, because it's been more than a week and we aren't quite sure yet how exactly to display the results being uploaded/compared.

For your use-case, we appreciate any advice.

pinanunes commented 4 years ago

Hello @ivan-aksamentov ,

I would say we have 3 major outcomes we want to compare between scenarios, either in absolute numbers or relative to a baseline scenario:

1. Number of deaths
2. Number of cases
3. Peak value of critical cases
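For reference, those three outcomes could be reduced to a small per-scenario KPI table. A hypothetical sketch; the record keys (`cases`, `deaths`, `critical`) are assumed names, not the app's actual output columns:

```python
# Hypothetical sketch: computing the three comparison KPIs from a
# simulated trajectory. Each entry is one day's record, with cumulative
# cases/deaths and current critical-care occupancy.

def scenario_kpis(trajectory):
    last = trajectory[-1]
    return {
        "total_cases": last["cases"],    # cumulative at end of run
        "total_deaths": last["deaths"],  # cumulative at end of run
        "peak_critical": max(day["critical"] for day in trajectory),
    }
```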

One option would be to save several scenarios server-side in the session, with an "Add to compare" button that lets you name the scenario. On a separate tab you would then have those values in table format (scenarios as rows, indicators as columns) and a graph with fewer series than on the main page (cases, deaths), using color for the scenario and line style (solid, dashed) for the indicator. Instead of a table of numbers, you could also use a table of value boxes (for example in Shiny: https://rstudio.github.io/shinydashboard/structure.html#valuebox), color-coded by the difference to the first scenario or by scenario number. A dropdown box on top could select the indicator you want to compare.

Another (or complementary) approach would be to have this compare tab and add scenarios by importing the json files (and running each one).

Besides scenario comparison, an import button on the main page would be useful. It could also be an option in the first dropdown box that, on select, opens an overlay window to drag-and-drop or browse local folders for the .json file.

Check Imperial College's scenario exploration here: https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf

It depends a lot on the balance you want between presenting a lot of information on one page and keeping it visually appealing.

Count me in for comments and suggestions.

loehndorf commented 4 years ago

Currently, the Export button produces a JSON file which contains all input parameters (along with some hard-coded assumptions). Conversely, it would be nice to also get the simulated data from the graphs as a JSON file. This would make it possible to run multiple scenarios with different input parameters and compare them offline.

ivan-aksamentov commented 4 years ago

@loehndorf Hi Nils, thanks for the input! Along with the params .json you should also receive a .csv file (which is actually a .tsv, as we use a tab delimiter) with the resulting trajectory. Do you not receive it, or is this file not what you need?
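For offline workflows, that trajectory file can be parsed with the tab delimiter set explicitly, since the ".csv" extension is misleading. A minimal sketch; the file path and column names are placeholders:

```python
# Sketch: loading the exported results file offline.
# The export is tab-delimited despite the .csv extension,
# so the delimiter must be passed explicitly.
import csv

def load_results(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f, delimiter="\t"))
```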


The download of multiple files unfortunately triggers a notification in Chrome, and if that option is disabled, or if you clicked "Block" previously, the download will not happen. We are currently considering what would be best in terms of user experience. Maybe we could generate a .zip archive containing the multiple files? Would that be more convenient for users?
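For illustration, bundling the two files into one archive is straightforward on the scripting side (in the browser the app would need a client-side zip library instead, which is not shown here). A stdlib sketch; the file names are the ones mentioned in this thread:

```python
# Sketch: bundling the exported parameter and result files into a
# single .zip so only one download is triggered.
import io
import zipfile

def bundle(params_json: str, results_tsv: str) -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("params.json", params_json)
        zf.writestr("results.tsv", results_tsv)
    return buf.getvalue()
```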

Feel free to submit a separate issue if you have a moment; I think it is important.

loehndorf commented 4 years ago

Thanks, it works in Firefox!

pinanunes commented 4 years ago

I've managed to download both files, but the zip file would be a good solution! It might also be better to give the file a .txt extension rather than .csv.

gj262 commented 4 years ago

Is the desired feature simply to import previously exported form input and display it? Or are you looking to import and have side by side comparison?

ivan-aksamentov commented 4 years ago

@gj262 Honestly, Gavin, we plan a lot of things. Way too many for the resources available :)

Basically, we want both. We export 2 files: params.json and results.tsv. So we want to import both too.

I was thinking of gathering all the persistence-related functionality in a single "window", and it ended up in this comment: https://github.com/neherlab/covid19_scenarios/issues/111#issuecomment-602388260

Sorta like a google-drive account, except not with files, but with param-result pairs: the ones previously run and saved, the ones in local storage, and the ones uploaded. From this "window" the user could then import, export, get a sharing link, etc. Comparisons could then be made by selecting multiple runs, e.g. with checkboxes.
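As a rough sketch of that idea, the param-result pairs and the checkbox selection could be modeled like this; all names here are hypothetical, not the app's actual code:

```python
# Sketch: a store of saved runs (param-result pairs), with a
# per-run "selected" flag standing in for a comparison checkbox.
from dataclasses import dataclass, field

@dataclass
class Run:
    name: str
    params: dict
    results: list
    selected: bool = False  # checked for comparison

@dataclass
class RunStore:
    runs: list = field(default_factory=list)

    def add(self, run: Run) -> None:
        self.runs.append(run)

    def selected_runs(self) -> list:
        return [r for r in self.runs if r.selected]
```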

gj262 commented 4 years ago

Thanks for the feedback @ivan-aksamentov. This gives me a better understanding of what might be useful FWIW. Also @pinanunes comment and the PDF in https://github.com/neherlab/covid19_scenarios/issues/11#issuecomment-600076600 gives some clue as to how the comparison might be visualized.

The input params are already in the URL, so you have a form of sharing and the ability to do a side-by-side comparison of results in different browser windows. Not ideal, but something.
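A minimal sketch of round-tripping a parameter set through a URL query string; the `q` key and the JSON encoding here are assumptions for illustration, not the site's actual URL scheme:

```python
# Sketch: serializing scenario parameters into a shareable URL
# and recovering them on the other side.
import json
from urllib.parse import urlencode, parse_qs, urlparse

def params_to_url(base: str, params: dict) -> str:
    # One JSON-encoded query value keeps nested params intact.
    return base + "?" + urlencode({"q": json.dumps(params, sort_keys=True)})

def url_to_params(url: str) -> dict:
    return json.loads(parse_qs(urlparse(url).query)["q"][0])
```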

This though does not capture how the results may have changed over model/algorithm changes i.e. with constant params. I suppose this is the intent of exporting results.tsv? Or am I way off there?

ivan-aksamentov commented 4 years ago

@gj262 Gavin, these are all interesting points!

> This though does not capture how the results may have changed over model/algorithm changes i.e. with constant params. I suppose this is the intent of exporting results.tsv? Or am I way off there?

For now we ignore algorithm changes. Until we announce some sort of stable release, one should not assume that the algorithm is stable (in reality, we might currently have breaking changes twice a day). Export is currently mostly for users to be able to open the results in Excel or to otherwise continue with their workflow.

For the future, though, we need to start thinking about a versioning scheme if we want to provide any realistic comparison functionality.

whiver commented 4 years ago

> @gj262 Honestly, Gavin, we plan a lot of things. Way too many for the resources available :)
>
> Basically, we want both. We export 2 files: params.json and results.tsv. So we want to import both too.
>
> * [ ] `params.json` will populate the form (and optionally run the algorithm, once the autorun feature is done)
> * [ ] uploading `results.tsv` should allow for comparison
>
> I was thinking of gathering all the persistence-related functionality in a single "window", and it ended up in this comment: #111 (comment)
>
> Sorta like a google-drive account, except not with files, but with param-result pairs: the ones previously run and saved, the ones in local storage, and the ones uploaded. From this "window" the user could then import, export, get a sharing link, etc. Comparisons could then be made by selecting multiple runs, e.g. with checkboxes.

+ @rneher for visibility

As mentioned in Spectrum, I started working on the import feature (the WIP PR is #381).

I think I'm gonna start simple, with an upload limited to a single file; re-uploading another file will replace the previous one. Does that work for you?

I'm also wondering how to display both the simulation results and the user-uploaded ones in the same graph in a clean way. Do you have ideas?

ivan-aksamentov commented 4 years ago

This is still important

whiver commented 4 years ago

> This is still important

@rneher @ivan-aksamentov Gentle ping on my PR #381. I now have a working version that you can test, but I'm waiting for some feedback to see if I'm heading in the right direction. Thanks in advance :)

ivan-aksamentov commented 4 years ago

Resolved in #635