Open mfiorina opened 1 year ago
P.S. I added test data in the `test_data` folder. It's from the MSFR course, entitled `LWH_FUP2_raw_data.csv`. You can upload it to the platform in the "Upload" tab.
Reviewing the prototype based on the requirements documents here.
I will continue posting feedback in sections.
Checked means the item is functionally working; unchecked means it is missing or not currently functional.
[x] The user will be greeted with a landing page that requests that they upload survey data for use as an input for the HFCs.
I think the information on the landing page could be better communicated graphically.
[ ] They will have an option to use test data pre-loaded onto the platform.
[ ] The page will have a link to a guide with details on what characteristics the dataset needs to have (e.g. accepted file format, required variables, etc.).
The link is there, but it currently goes to the GitHub homepage.
[x] The user will attempt to upload the dataset. The platform at this point will check the dataset for issues that would hinder its use (e.g. wrong format, missing required variables, etc.) and display an error if an issue is found.
The message "Currently, this application only accepts .csv files. Please make sure your file is in the .csv format before uploading." appears in red type. Red denotes an error, so using it before an error has occurred isn't best practice. I also think we could convey this better with the placeholder text: instead of "mydata.csv", you could use "csv files", which prompts the user to the requirement.
The "Upload Complete" loader should disappear once the upload finishes.
Clicking the "Upload HFC Data" label text (and its tooltip) launches the file explorer. Only the Browse button should do that.
I'm not sure what errors should look like here; the datasets I used did not throw any. It would be good to see an example of what an error would look like.
The ability to expand the Explore Dataset and Dataset Names sections might be useful if the user wants to dive in without the smaller space limitation.
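To tie the two upload comments together, here is a minimal Shiny sketch of how the .csv restriction and the error display could work. The input IDs, labels, and messages are hypothetical, not iehfc's actual code:

```r
library(shiny)

ui <- fluidPage(
  # Restricting the picker and hinting via the placeholder avoids a
  # pre-emptive red warning; red is reserved for real errors below.
  fileInput("hfc_data", "Upload HFC Data (.csv)",
            accept = ".csv", placeholder = "csv files"),
  textOutput("upload_status")
)

server <- function(input, output, session) {
  output$upload_status <- renderText({
    req(input$hfc_data)
    # validate()/need() render their message in red only after a bad
    # file has actually been chosen.
    validate(need(tools::file_ext(input$hfc_data$name) == "csv",
                  "Please upload a .csv file."))
    "Upload complete."
  })
}

# shinyApp(ui, server)
```

The `accept` argument also filters the OS file dialog, which answers the placeholder question above at the picker level.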
[x] If no issue is found, the user will then be directed to the Parameter Selection page.
Currently there is no clear feedback prompting the user to move to the next step, beyond what the introduction page says about the sections and tabs. Perhaps we can prompt the user somehow to proceed; I'm not sure how intuitive this is.
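One way to make that transition explicit is a notification plus a tab switch on successful upload. A hedged server-side fragment; the input and tab IDs are assumptions, not iehfc's real ones:

```r
# Runs inside the Shiny server function: after a successful upload,
# tell the user what to do next and optionally move them there.
observeEvent(input$hfc_data, {
  showNotification("Dataset accepted. Continue to Parameter Selection.",
                   type = "message")
  updateTabsetPanel(session, "main_tabs", selected = "Parameter Selection")
})
```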
[x] Once the data is uploaded, the user will be requested to select a set of high-frequency checks to run on the data, and to select apposite parameters for each check.
This functionality now exists in the "Check Selection and Setup" section. Additionally, from what I can see, only duplicate checks are currently available.
I think we need to determine what checks to include for the MVP and remove the empty dropdown rows if the functionality is not populated.
The issue with the dropdown selection should be fixed from a usability perspective.
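Since duplicates are the one check that currently works, here is a rough sketch of the underlying logic for reference. The `hhid` ID column is hypothetical; this is not necessarily how iehfc implements it:

```r
library(dplyr)

# Flag every row whose ID appears more than once in the upload.
raw <- tibble(hhid = c("HH001", "HH002", "HH002", "HH003"))

dup_rows <- raw %>%
  group_by(hhid) %>%
  filter(n() > 1) %>%
  ungroup()

# dup_rows now holds both HH002 rows for the user to inspect.
```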
[ ] The user will at this stage also have the option to upload an HFC metadata file that pre-loads their defined set of checks and parameters.
Is this requirement still relevant @mfiorina?
[ ] For some checks (e.g. enumerator-level checks), the user will be asked to identify one or multiple variable names necessary for the platform to create the check.
Is this requirement still relevant @mfiorina?
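If this requirement stays in scope, the variable-identification step could be a dropdown populated from the uploaded dataset's columns. A sketch with hypothetical IDs and a hypothetical `hfc_data()` reactive:

```r
library(shiny)

# UI fragment: ask the user which column identifies the enumerator.
# Choices start empty and are filled once data is uploaded.
selectInput("enum_var",
            "Which variable identifies the enumerator?",
            choices = character(0))

# Server side, once the upload is available:
# observe(updateSelectInput(session, "enum_var", choices = names(hfc_data())))
```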
[ ] For some checks (e.g. tracking progress), the user may be requested to upload a separate dataset (e.g. a sample dataset with the full list of expected household IDs).
Is this requirement still relevant @mfiorina?
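If this requirement stays, the core of a tracking-progress check is a comparison against the expected sample. A hedged sketch with a hypothetical `hhid` column:

```r
library(dplyr)

# Expected households from the uploaded sample file vs. actual submissions.
sample_ids  <- tibble(hhid = c("HH001", "HH002", "HH003", "HH004"))
submissions <- tibble(hhid = c("HH001", "HH003"))

# Households still to be surveyed: in the sample but not yet submitted.
missing_ids <- anti_join(sample_ids, submissions, by = "hhid")
```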
[ ] Once the user has selected the set of checks and parameters for their HFCs, the Platform will provide the option to download a file containing the HFC’s metadata. This file can be shared and re-used to set up the HFCs in another session or by another user.
Is this requirement still relevant @mfiorina?
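If the metadata export stays in scope, a small serialiser plus a `downloadHandler` would cover it. The schema and names below are assumptions, not iehfc's actual ones:

```r
library(jsonlite)

# Serialise the selected checks and their parameters so the same setup
# can be re-loaded in another session or shared with another user.
hfc_metadata_json <- function(checks, parameters) {
  toJSON(list(checks = checks, parameters = parameters), auto_unbox = TRUE)
}

# In the Shiny server, a downloadHandler could then expose it:
# output$download_metadata <- downloadHandler(
#   filename = function() "hfc_metadata.json",
#   content  = function(file) {
#     writeLines(hfc_metadata_json(input$checks, params()), file)
#   }
# )
```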
[x] Once the user has defined their HFCs and addressed any issues, they will click on a button to prompt the Platform to run the HFCs.
[ ] If there was a critical issue running the HFCs that blocked their completion, the user will be notified of this. The error message will attempt to detail where the error occurred, and suggest a method to fix the error in the survey data or parameters selected. The user will then be able to address the issue.
I wasn't able to effectively test this error; it would be good to walk through this together, @mfiorina, to document how it would look.
[x] If the user has successfully defined the HFCs and addressed any issues that appeared, they will then be directed to the HFC Output page.
This is occurring, but I'm unsure whether we are providing adequate feedback to the user to go to the next step. I can think about how better to communicate this transition through the UI.
[ ] The user will be presented with a display of their HFC process’s outputs. These will be organized in multiple tabs (e.g. enumerator checks, village-level checks, outliers, etc.) and consist of a set of tables, graphs, and text that will help them identify and address issues with their data.
Currently, the outputs only support duplicate checks.
[ ] Each table/graph will have an “export” option (.png, .csv, etc.) that allows for individualized sharing. Ideally, the tables will be filterable and sortable.
I can only see .csv as a download option for the table.
The table is searchable.
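On the filter/sort/export point, DT already provides most of this out of the box. A hedged server fragment; the output IDs and the `dup_data()`/`dup_plot()` reactives are hypothetical:

```r
# Column sorting plus per-column filters come free with DT.
output$dup_table <- DT::renderDataTable(
  DT::datatable(dup_data(), filter = "top")
)

# A second handler could add the missing .png option for graphs.
output$download_plot_png <- downloadHandler(
  filename = function() "hfc_plot.png",
  content  = function(file) ggplot2::ggsave(file, plot = dup_plot())
)
```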
[ ] At this stage, the user will have an option to export the full set of HFC outputs, either into an HTML or PDF file. The user will also have an option to create a shareable link to direct users to this page.
Works great overall, with a few bugs.
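For the full HTML/PDF export, a parameterised R Markdown render is the usual Shiny approach. The template name `hfc_report.Rmd` and the `hfc_results()` reactive are assumptions for illustration:

```r
# Render the full set of HFC outputs into one downloadable document.
output$download_report <- downloadHandler(
  filename = function() "hfc_outputs.html",
  content  = function(file) {
    rmarkdown::render("hfc_report.Rmd",
                      output_file = file,
                      params = list(results = hfc_results()),
                      # A fresh environment keeps the render isolated
                      # from the app's server state.
                      envir = new.env(parent = globalenv()))
  }
)
```

Switching the template's output format to `pdf_document` would cover the PDF variant of the same requirement.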
Notes
These are here to provide context on progress developing the Shiny app.
Remaining functionalities to implement before October testing:
Noted struggles with Shiny coding to resolve:
Purpose
The purpose of this round of testing is to:
- Identify issues/concerns with launching the `iehfc` application.

… `.Rproj` file, instead allowing users to install the package using `remotes::install_github()`. Creating a proper package environment is challenging, so the `.Rproj` method is a good method to allow testing in the meantime.

Comments on user interface
Functionalities
Any other comments?