Research at Protocol Labs

Quantitative Risk Assessment for Research Goals and Decision-Making #16

Closed jpeg07 closed 4 years ago

jpeg07 commented 4 years ago

Quantitative Risk Assessment for Research Goals and Decision-Making

PL Research is currently seeking to articulate and implement a quantitatively oriented metric (or metrics) for assessing risk in overall research goals and trajectories, as well as at key decision-tree points along the way. The goal is to encourage reasonable risk-taking that scales, in a yet-to-be-determined way, with anticipated research impact. As such, insight into the related metric of measuring impact is also welcome.

Ideally, researchers should be able to sort through ideas and order (or at least partially order) them, informed by a risk/impact assessment.
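To make the "partial order" idea concrete, here is a minimal sketch (Python; the field names, scales, and example numbers are illustrative assumptions, not a proposed standard) of scoring ideas by subjective impact and risk and extracting the non-dominated frontier of that partial order:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    expected_impact: float  # subjective estimate, e.g. on a 0-10 scale
    risk: float             # subjective probability of failure, 0-1

def dominates(a: Idea, b: Idea) -> bool:
    """a dominates b if it is at least as impactful and no riskier,
    and strictly better on at least one axis."""
    return (a.expected_impact >= b.expected_impact and a.risk <= b.risk
            and (a.expected_impact > b.expected_impact or a.risk < b.risk))

def pareto_front(ideas):
    """Ideas not dominated by any other idea; these form the frontier
    of the partial order induced by (impact, risk)."""
    return [i for i in ideas
            if not any(dominates(j, i) for j in ideas if j is not i)]

ideas = [
    Idea("A", expected_impact=8.0, risk=0.7),
    Idea("B", expected_impact=5.0, risk=0.2),
    Idea("C", expected_impact=4.0, risk=0.6),  # dominated by B
]
print([i.name for i in pareto_front(ideas)])  # ['A', 'B']
```

Ideas on the frontier are incomparable under the partial order (A is higher impact but riskier than B), which matches the "at least partially order" framing: the metric narrows the field without forcing a single total ranking.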

What we are requesting: If you are familiar with literature on metrics for assessing research risk (either on its own or in relationship to expected impact), please share. If you have any successful experience applying a metric like this to guide research decisions, please share.

Within the PROJECT Ecosystem

We are currently pursuing possible metrics, and we want input here as a significant part of that pursuit.

Within the broad Research Ecosystem

Current NIH guidelines for justifying the use of human subjects in research rely on qualitative justification based on the degree of risk, the quality of protections for subjects, the potential benefits to subjects and others, and the importance of the knowledge to be gained. As far as I can tell, there is no satisfactory means of quantifying these categories. I looked at this type of research because it is the most obvious field in which research would need to be evaluated and justified in light of risk.

What is the impact?

Having a metric to evaluate the risk of research choices toward a goal (and of the goal itself) would help minimize wasted time and effort and appropriately order research priorities.

What defines a complete solution?

A metric toward this end would need to be tested in multiple specific cases and over time to see whether the qualitative and quantitative assessments hold up against actual results (in terms of both risk and impact). A successful solution would consistently map subjective expectations to actual results (i.e. quantitative values in this scenario would naturally arise from individuals doing their best to anticipate risk and outcome; an ideal metric would therefore provide ways of stabilizing the variables of uncertainty and human bias).
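One way to test whether subjective expectations "hold up against actual results" over time is a calibration check on recorded forecasts. The sketch below (Python; the data and the 0/1 outcome encoding are hypothetical assumptions, not anything PL has in place) uses a Brier score over ex-ante success estimates versus realized outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted success probability and the
    realized outcome (1 = succeeded, 0 = failed). Lower is better;
    uninformative 50/50 forecasts score about 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical record: ex-ante success estimates for completed projects
# vs. what actually happened.
estimates = [0.8, 0.3, 0.6, 0.9]
results   = [1,   0,   0,   1]
print(brier_score(estimates, results))  # 0.125
```

Tracking a score like this per forecaster (or per project category) is one possible way to surface the uncertainty and bias the draft wants to stabilize, since systematic over- or under-confidence shows up as a persistently poor score.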

jpeg07 commented 4 years ago

@nikkolasg - added a short paragraph above to convey the actual request:

What we are requesting: If you are familiar with literature on metrics for assessing research risk (either on its own or in relationship to expected impact), please share. If you have any successful experience applying a metric like this to guide research decisions, please share.

Research PM is in the process of investigating the available literature, but we wanted to open this up for input in case anyone has pointers. If the issue just sits here for a while, we will know that either there isn't much interest or there isn't much out there beyond what we've already sourced.

jsoares commented 4 years ago

I haven't had the time to look at the references, but there's some literature on sponsored university R&D here: https://files.eric.ed.gov/fulltext/EJ980462.pdf

jsoares commented 4 years ago

I linked this elsewhere but the method for assessing research benefit described in the paper starting on page 58 incorporates risk implicitly: https://www.ncura.edu/Portals/0/Docs/RMR/v16n1.pdf?ver=2015-03-16-163003-000

jpeg07 commented 4 years ago

Thanks @jsoares - the 2nd one has some really helpful and tailorable approaches, and some ways of modeling data that I think might be useful. I will take a look at the first article soon.

jpeg07 commented 4 years ago

Hi All - here is a very much in-process draft document on which your feedback is very welcome. @jsoares has already given some helpful feedback on this, and the 2nd paper linked above (ncura.edu) is very helpful in cross-conversation with this draft.

miyazono commented 4 years ago

I read the draft document and added a few comments. I'm intrigued to read the references, but don't think I should prioritize that now.

Overall, I think this is an interesting direction, but it may be a heavier process than we need right now (though I'm open to persuasion).

davidad commented 4 years ago

I would like to put in my usual plug for Ronald Howard's text "Foundations of Decision Analysis". It is not specific to research, although it originated from Howard consulting with GE about deciding whether to pursue a certain applied research problem. In the context of research, I'm especially concerned with:

  1. "Removing human bias" inevitably trades off against incorporating implicit human knowledge. I have a sense that implicit knowledge may be generally undervalued in the tech sector, and such knowledge only gets more important when considering research initiatives that are further from applications.
  2. Quantitative metrics are often used "wrongly," by which I mean, in a way that's actually inconsistent with Bayesian decision theory. (For example, using internal rate of return to rank or filter investment opportunities.)

I think Howard's processes for analyzing decisions avoid both of these hangups, while also providing ample tools to extract people's internal causal models onto paper (or even code) so they can be reflected upon by oneself and others. If we want to start heavyweight processes in order to make decisions more consistent and calibrated, I would start with that style, which is really about fixing process (what shape do models have, what operations does one perform on a model to refine it, how does one compute an answer from a model, etc) rather than fixing particular models for how to evaluate everything that might come up.
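To make the "compute an answer from a model" step concrete, here is a minimal sketch (Python; all alternatives, probabilities, and values are made-up placeholders, and this is only the expected-value core of a decision model, not Howard's full process): alternatives are ranked by expected value under elicited probabilities rather than by a heuristic like internal rate of return.

```python
# A toy decision model: each alternative has uncertain outcomes with
# elicited probabilities and values; alternatives are ranked by expected
# value. All numbers are illustrative assumptions.

alternatives = {
    "pursue risky research direction": [
        (0.2, 100.0),   # (probability, value if it pans out)
        (0.8, -10.0),   # cost of a failed attempt
    ],
    "incremental improvement": [
        (0.9, 15.0),
        (0.1, 0.0),
    ],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

ranked = sorted(alternatives.items(),
                key=lambda kv: expected_value(kv[1]), reverse=True)
for name, outcomes in ranked:
    print(f"{name}: EV = {expected_value(outcomes):.1f}")
# incremental improvement: EV = 13.5
# pursue risky research direction: EV = 12.0
```

The value of the process argued for above lies less in this final arithmetic than in how the probabilities and values are elicited and refined, which is where the implicit human knowledge from point 1 gets written down instead of discarded.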