UBC-MDS / MacroView

A Shiny App designed to help nutrition-conscious individuals track their macronutrient intake.
https://macroview.shinyapps.io/MacroView/
MIT License

Milestone 4 Feedback #77

Open ArmaghanSarvar opened 1 year ago

ArmaghanSarvar commented 1 year ago

Congratulations on finishing milestone 4! We can see you put a lot of work into this project, nice work! Below we list some specific feedback you can use to improve your project.

- The information requested for the submission on Canvas does not include what has been modified, the source, the team member, and the commit(s).
- There is one improvement to the application by each team member, and the changes are not only minor fixes.

samson-bakos commented 1 year ago

Hello @ArmaghanSarvar , to clarify:

For "The main plots take time to load": is that referring to the fact that the plots do not render immediately upon opening, or that once the button is pressed, it takes time for the plots to update?

For the reproducibility feedback: where are you seeing the failed workflows? We had failing workflows in previous builds while we were debugging, but all workflows appear to be currently passing in our repo.

ArmaghanSarvar commented 1 year ago

Hi.

  1. I believe both are true. The plots take a relatively long time to load in general.
  2. I am referring to the following:
samson-bakos commented 1 year ago

Regarding the plots taking a long time to load: that doesn't really have anything to do with our implementation. We're hosting this on a free external server, and that is what makes it slow; the plots render almost instantly on the local version, but there's no way to speed up the shinyapps.io deployment without a paid plan. The plots not immediately displaying is fair; it's a limitation of our original design that we acknowledge in our reflection. But we shouldn't lose marks for the shinyapps.io deployment being slow.
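The render-on-button-press limitation described above is a common Shiny pattern. As a minimal sketch (not the app's actual code; the input names and plot are hypothetical), setting `ignoreNULL = FALSE` on `eventReactive` makes the plot compute once at startup with the default slider values, so a first-time user sees a visualization before ever clicking the button:

```r
library(shiny)

ui <- fluidPage(
  sliderInput("protein", "Protein (g)", min = 0, max = 300, value = 120),
  actionButton("plot_btn", "Plot Sliders"),
  plotOutput("macro_plot")
)

server <- function(input, output, session) {
  # ignoreNULL = FALSE lets this reactive fire once at startup with the
  # default slider values, instead of waiting for the first button click.
  plot_data <- eventReactive(input$plot_btn, {
    data.frame(macro = "Protein", grams = input$protein)
  }, ignoreNULL = FALSE)

  output$macro_plot <- renderPlot({
    barplot(plot_data()$grams, names.arg = plot_data()$macro)
  })
}

shinyApp(ui, server)
```

With the default `ignoreNULL = TRUE`, the reactive would stay empty until the first click, which is exactly the "nothing seems to be loading" behavior discussed in this thread.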

Regarding the workflows: I don't understand what you mean by 'change the files'. We were debugging issues with our workflows and code, which resulted in failed runs in the history, but all workflows pass in the most recent deployment, which is what matters. It doesn't make sense for us to lose marks for having failed runs while debugging, especially when we were eventually able to fix those bugs.

ArmaghanSarvar commented 1 year ago
  1. Yes, I am aware of the server deployment, and the corresponding rubric set by the instructor is based on the assumption of deploying on the server. I have just realized that the user needs to figure out that they must click the "plot sliders" button in order to see the visualizations; before, it seemed like nothing was loading. Given this, I will add back the 4 points deducted for the load time. However, I will unfortunately have to deduct 3 points for user interpretability: when a first-time user opens the app, they will not see any visualization even though some default parameter values have already been selected, and this is not user-friendly.

  2. That's right, but what I mean is your GitHub release. That criterion is what we have considered in the rubric for every group.

samson-bakos commented 1 year ago

Okay, thank you. The -3 for interpretability is very understandable.

I am still confused regarding the workflows, as the GitHub release has the same final commit and workflow history as the current repo state. But if it's rubric-based, I will ask Florencia about it.