lizzieinvancouver / PSPmountrainier


How to organize the two days? #2

Closed lizzieinvancouver closed 7 months ago

lizzieinvancouver commented 1 year ago

What should the precise organization be (always discussion with the entire group, smaller breakout groups, etc)?

lizzieinvancouver commented 1 year ago

@aamd wrote:

I believe strongly in smaller breakout groups, ideally balanced with discussions in the larger group

lizzieinvancouver commented 7 months ago

Here's the plan we had:

Current plan is ...

From Mike's email:

One of the difficulties with short courses is that, well, they are short. There’s only so much that can be productively communicated in a few days. Unfortunately without a strong statistical modeling and inference foundation there are not that many analysis topics that can be reasonably self-contained in that time. Instead most courses default to teaching how to apply a certain tool or technique in a limited context that doesn’t generalize particularly well to actual practice.

An alternative approach is to expose the attendees to a more comprehensive picture of a Bayesian analysis with real data, from initial brainstorming of the data generating process to model development and critique to final analysis summaries. The tradeoff is that we will have to provide much of the statistical expertise and guide the attendees through the implementation of the analysis.

First and foremost this approach would require some real data with well-understood provenance that isn't so complex that we can't iterate through model fits reasonably quickly. Ideally the basic details of the measurement (what is being measured, where it was measured, how it was measured, etc.) would be familiar to all of the attendees so that they can provide as much domain expertise as possible.

After attendees brainstorm the data generating process (together or perhaps broken up into groups), we trainers can lead a discussion about how to translate that brainstorming into an initial model with accompanying model checks, and then implement that model in Stan. At this point we can fit on a single computer together or distribute the code so that attendees can fit on their own before coming back together to discuss model critiques and brainstorm possible improvements. Ideally we would go through at least a few iterations of model development to demonstrate the practical realities of building a useful Bayesian model.
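For concreteness, here is a minimal sketch of what such an initial Stan model could look like, assuming a simple varying-intercept regression; the variable names (y, x, plot) are placeholders for whatever measurement the attendees settle on, not the actual workshop data:

```stan
// Illustrative starting point only: a varying-intercept regression.
// The response y, predictor x, and grouping index plot are placeholders.
data {
  int<lower=1> N;                        // number of observations
  int<lower=1> J;                        // number of groups (e.g. plots)
  array[N] int<lower=1, upper=J> plot;   // group index for each observation
  vector[N] x;                           // single predictor
  vector[N] y;                           // response
}
parameters {
  real alpha;                   // global intercept
  real beta;                    // slope on x
  vector[J] alpha_plot;         // per-group intercept offsets
  real<lower=0> sigma_plot;     // spread of group offsets
  real<lower=0> sigma;          // residual scale
}
model {
  // Weakly informative priors; these would be revisited during model critique
  alpha ~ normal(0, 10);
  beta ~ normal(0, 1);
  alpha_plot ~ normal(0, sigma_plot);
  sigma_plot ~ normal(0, 1);
  sigma ~ normal(0, 1);
  y ~ normal(alpha + alpha_plot[plot] + beta * x, sigma);
}
```

A simple model like this can be fit quickly enough to iterate on during the session, with later iterations adding whatever structure the group's brainstorming suggests.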

Finally we can end with a discussion of how to communicate the inferences to different communities, such as collaborators, journals, and the like.

Some of the open questions that Lizzie mentioned include:

1) What data should we use? What is its provenance?

2) What should the precise organization be (always discussion with the entire group, smaller breakout groups, etc)?

3) Do we want to analyze the data ourselves beforehand or do everything live based on attendee feedback? If we analyze the data beforehand, do we guide attendees toward the choices we've already made in that initial analysis, allowing us, for example, to have code prepared that we can immediately distribute to attendees at various points, or do we just use that initial analysis as preparation and re-implement everything on the fly, mirroring the actual analysis process?