jpeters7 opened 1 year ago
@cboettig, Apoorva Joshi, Michael Gerst, and I had a conversation a month or so ago about the EFI dashboard and its goals. Given the types of NEON forecasts, it seemed that one of the best uses of the dashboards would be to learn which models perform better (or worse), for what reasons, and at what spatio-temporal scales. This would promote shared community learning to accelerate the development of forecasts, and also inform "super forecasts" that might use particular approaches at different spatio-temporal scales in order to optimize predictability.
I would love to work with a group of people at the conference who would be interested in rethinking the design of a dashboard to support collaboration and learning... any takers?
@jpeters7 - If I remember correctly, one of the things that @melissakenney, Michael, and I also discussed with @cboettig had to do with making the leaderboard presentation a little more descriptive and adding some narrative to the dashboard - like by using descriptive terms rather than acronyms for models (e.g. ARIK, BIGC, etc.). In its current form, the dashboard is a bit difficult to interpret for someone who's not entirely familiar with or immersed in those specific models or challenge-based terminologies. Addressing this may also help seize the opportunity for developing a shared community learning space and to build interdisciplinary bridges.
As an EFI outsider I can confirm that these are somewhat inscrutable 😉. I'm in the midst of building a similar dashboard of a multi-model-and-ensemble comparison, and would definitely be interested in a discussion of best practices of this.
That would be great - we'd love to hear more! You'll end up hearing me say this a lot but we think about Context, Audience, and Use when designing these kinds of things and it's the reason that the same data may be presented in fundamentally different ways.
Thinking about Context, Audience, and Use is a nice way to frame this. Here is an attempt at applying this framework to the 3 ways, mentioned in the first post, that I envision people using a dashboard with the forecasts. @melissakenney - I would love feedback on my descriptions of the Context, Audience, and Use below. And for anyone interested in this topic, it would be great to hear: 1) Are there other ways you imagine the visualizations being used? and 2) Is there anything you would edit/add to the Context, Audience, and Use for the 3 items below? For example, @noamross, are there additional ways you want to use the visualizations, or do you want something different from the 3 items below?
- Item 1: Some people may want to compare across all forecasts, while others may want to focus on their own forecasts and how they compare to the data.
  - Context: Allowing teams submitting forecasts to compare a) forecast output to observations and b) skill across models, either within their own set of models or across all models submitted.
  - Audience: Primarily teams submitting forecasts to the Challenge, but it would be nice for others who are not submitting to be able to see the results as well.
  - Use: Visual assessment of how well forecasts are doing.
- Item 2: Have the capability to pick different sites and forecasts to make comparisons across sites (say, along a latitudinal gradient). The Shiny dashboard does this, but the Quarto dashboard is more visually appealing.
  - Context: This allows individuals and teams to make comparisons across sites, either for their individual forecasts or across all forecasts submitted.
  - Audience: Primarily teams submitting forecasts to the Challenge, but as with Item 1, it would be nice for others who are not submitting to be able to see the results as well.
  - Use: Visual assessment across sites of how well forecasts are doing.
- Item 3: Make it easy for people to access the submitted forecasts and data to create figures that can be modified for manuscripts.
  - Context: We want the Challenge to allow people to write manuscripts, either for their own submitted models or as part of a broader group of all teams that have submitted models for a certain theme (or across themes). Currently, people who have written manuscripts have worked with @rqthomas to get the code to download the forecast outputs.
  - Audience: Individuals who have participated in the Challenge and want to take the lead on creating manuscripts.
  - Use: Accessing forecast output and creating figures - either the ones on the dashboard, or modified versions as needed for the questions and hypotheses of a manuscript.
@ApoorvaJ-P - yes! Let's talk about design. The ARIK and BIGC labels at the top of the plots are actually abbreviations for the NEON sites. It is an easy shorthand way to list the sites, but it does mean that people need to know the site abbreviations or go to NEON's description of the sites (https://www.neonscience.org/field-sites/explore-field-sites) to find details about the sites.
I think this also gets at the Audience. If this type of dashboard was just for people submitting forecasts, they will most likely be familiar with the different site abbreviations. But if the goal is for the general public to be able to use the dashboard, then providing additional details about the NEON sites will be good.
I think @cboettig and @rqthomas did a great job giving us the start to a couple of dashboards and if we can brainstorm as a group the different Context, Audience, and Uses people are interested in, then we can evaluate what kind of updates to make that are easy vs those that may take more time to implement.
This is really helpful because you can have both primary and secondary audiences. I think some of the uses may get refined during the Unconference -- I'm getting several different suggestions, and the design will likely require tradeoffs and choices. For the context, understanding the spatio-temporal scale is particularly interesting and important. For this we want to think both about how the Challenge works now and how we expect it will evolve over the next few years.
It's OK to have a dashboard that does one thing very well if some of the other things people need to do with the data are best done outside of a dashboard setup. I don't think the NEON Challenge dashboard is best designed for more public audiences -- there is a high degree of technical sophistication required. Public-facing dashboards would be translational products of the forecasts themselves; they should answer questions that people need answered to make some kind of decision, e.g., Do I need to take seasonal allergy meds? What are the current and predicted pollen levels?
Within PEcAn we have a potential Google Summer of Code project that will engage a student in resurrecting the R Shiny dashboard we created for our carbon cycle forecast in the pre-NEON Challenge days (parts of which went into the first EFI dashboard). One of my goals with this project is to harmonize our dashboard with EFI's, so I anticipate that we will be both borrowing and contributing ideas to the EFI dashboard (and potentially, down the line, deprecating our own dashboard and just using a fork of EFI's). Some of the features we'd potentially bring are more focused on the research community, but they do cover a few useful things that aren't in the EFI dashboard yet (e.g., more detailed diagnostics of model performance, animations, cross-site visualizations of covariance/synchrony, automated email notifications).
Interesting! Given the context, audience, and use, I like the idea of thinking about commonalities in these kinds of dashboards: if there is a common core of scientific users, a consistent design reduces the cognitive load of tool switching.
While I would need a bit more time to "grok" the outputs in the quarto and RShiny dashboards, I do really like the idea of thinking about effective dashboard/visualization design. As others have mentioned, I do think that dashboards can be somewhat prone to "tunnel vision" and/or "operation blindness" - where the dashboard and visualizations make perfect sense to those involved in modelling and/or dashboard creation, but are somewhat opaque to outside audiences.
I may open a separate issue about this, but bridging the divide to non-modellers seems to be one of the critical challenges for forecast visualization and presentation. For instance, I'm the primary model/forecast-focused staff member in my department, and often work with individuals involved in boots-on-the-ground land management and natural resource conservation. The needs for visualization for on-the-ground management are often quite different from what we might view as essential for diagnosing model performance, so thinking about ways to integrate stakeholder input seems like a good extension of this.
Super jazzed to work on this project to re-envision the NEON dashboard... we will definitely need some folks who are really good at coding in R Shiny.
The current dashboard is a Quarto website, so it does not require R Shiny. The primary need is knowledge of ggplot2.
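As a rough sketch of the kind of ggplot2 code involved, a forecast-vs-observation panel faceted by NEON site might look something like the following. The data frame and its column names (`site_id`, `lower95`, etc.) are hypothetical stand-ins, not the Challenge's actual schema, and the values are simulated.

```r
library(ggplot2)

# Hypothetical scores table: one row per site/date, with a forecast mean,
# a 95% prediction interval, and the matching observation (simulated here).
set.seed(42)
scores <- data.frame(
  site_id     = rep(c("ARIK", "BIGC"), each = 10),
  datetime    = rep(seq(as.Date("2023-04-01"), by = "day", length.out = 10), 2),
  mean        = rnorm(20, 10),
  lower95     = rnorm(20, 8),
  upper95     = rnorm(20, 12),
  observation = rnorm(20, 10)
)

ggplot(scores, aes(x = datetime)) +
  geom_ribbon(aes(ymin = lower95, ymax = upper95),
              fill = "steelblue", alpha = 0.3) +    # forecast uncertainty band
  geom_line(aes(y = mean), colour = "steelblue") +  # forecast mean
  geom_point(aes(y = observation)) +                # observations
  facet_wrap(~ site_id) +                           # one panel per NEON site
  labs(x = NULL, y = "Forecast variable")
```

In a Quarto dashboard, a chunk like this renders as a static figure on the page, which is why no Shiny server is needed.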
Could part of this development of visualisations for the dashboard also further develop the toolbox of visualisation functions for the broader community? See https://github.com/eco4cast/vis4cast/tree/main, which creates standard visualisation functions based on the standards that have been developed for forecasts and scores.
@rqthomas and @cboettig have done a really great job setting up websites to view the Challenge forecast submissions, both at this Shiny dashboard and this dashboard created using Quarto. This idea was also discussed in the CI/Methods working group call in March. The two dashboards have different purposes: the Shiny dashboard allows teams to confirm that their forecasts were submitted, while the Quarto dashboard visually compares skill scores across forecasts. It would be great, as a group, to brainstorm different ways that visualizing the Challenge forecasts could be useful.
I think this may also connect a bit to @noamross's issue #9 about visualizations and interactive components for forecast uncertainty. I think this may also connect a bit to my issue #12 to create resources for conferences.
A couple of initial thoughts are:
It would also be good to have input from Melissa Kenney and Michael Gerst (I'm not sure they are on GitHub yet), given their experience working with organizations to think about how to create effective visualizations.