ahwagner closed this issue 1 year ago
Ultimately it's the steering committee that takes in the evidence from the study group to make a decision. It's listed in the flowchart as:
I think there needs to be a channel where only the last bit (problem / scope) is put to the SC, and the SC then dictates which of the others they require. It's very hard as a spec maintainer to know what constitutes a substantial change needing approval and what doesn't, but the message I'm hearing here is that we want most things to go through the correct channels, with the assumption that the trivial stuff will flow quickly. That may work, but producing REWS, landscape analysis, etc. for trivial stuff is just wading through treacle.
If there were a pre-assessment by the SC then much of the process could be bypassed, but with this new model it is for them to OK that bypass, not us (the spec maintainers / authors).
@ahwagner and @jkbonfield - thank you both for the thoughtful comments.
@ahwagner, looking at your comment, the topics I'm picking up are a) the potential burden of outreach to set up a study group and b) the ability to use existing work, documentation and expert knowledge (of the maintainers and consortia outside of GA4GH).
As commented on the other issue, I hadn't imagined the study group as necessarily being different from the development group in terms of who is involved, but I do see them as different phases for a new piece of work (including possibly fairly small but new features being added to an existing product; happy to discuss further where that line is drawn).
Considering the outreach element, yes, this does add some work. However, I don't see a way around this for us while still aiming to be the Global Alliance - at a minimum our door needs to be open and we need to make some effort to share with the wider world that we are doing a piece of work. That said, I'd see Secretariat having a role here - combined with domain knowledge from the wider contributor community, possibly to identify groups whose presence would help the work and who Secretariat (or others) could contact. This in part goes to our stated aim of creating standards of "broad utility", the need to understand requirements in different settings and being an open organisation.
For the ability to reuse existing knowledge and documentation, I'd point to item 10 in the principles at the top of the document - "Documentation generated for products during product development and approval should give clarity on the points on which decision making is based. It should not be longer than is essential. Reuse, where appropriate, is encouraged." In other words, if there is an existing report that covers the work, then that not only could but should simply be pointed to. We should not be in the business of reinventing the wheel or doing work for its own sake. If something will be incredibly obvious to, for example, the PRC, don't put together more than they need - keep it short and simple, just sufficient to demonstrate the decision making criteria.
In the VICC example above, assuming that a) the members of the consortium were comfortable with their work being in GA4GH, b) there were no IP issues and c) once in GA4GH the door was open for input from others, I don't see any obvious issues with that consortium and their work forming the basis of work either in (or possibly in collaboration with) GA4GH. By my reading, that fits with the outlined process, but it would be useful to know if that isn't clear. There would also be the path of work being defined by the VICC consortium and then being brought into GA4GH later as an existing product.
@jkbonfield - I think your points go more to the cut-off between trivial changes and what constitutes more substantial work?
As an aside, I'd like to note here that the overhead of REWS and Security review should now be minimal for contributors. This would involve a meeting to run through the questions with representatives for REWS and Security - even for new products.
In terms of what constitutes a substantial change to a product and also where approval is needed, I agree, as on the other issue, that a path for bug fixes and trivial changes needs to be defined (it currently isn't). In terms of deciding if a change is trivial or not, I could see a role for the PRC to provide an oversight mechanism there.
For documentation going to SC (let's say for non-trivial changes), SC are currently being asked to judge a proposal on these criteria:
I'm tempted to argue that, in some cases, the answers to these will be fairly self-evident. Where that's the case, again pointing to item 10 in the principles list, the documentation should be minimal.
For updates to existing products, the reuse point also applies: if the landscape analysis is still fairly recent, don't repeat it.
I think I would want to see a list of potential adopters for new work, including updates. That said, BED is a nice recent example of where the "broad interest and utility" criterion would be unlikely to need such a list - the broad use of BED should already have been clear to SC based on their experience.
I think an important part of the argument is the purpose and strategy of GA4GH: do we build standards for the sake of it, or do we aim to enable genomics and health globally through standards? If it is the latter, then a stronger product mindset is required to ensure that the standards will indeed deliver on that aim. So, very much along the lines of what has been set out above:
Product Market Fit: Is this something people would want to use to sort out a major obstacle/use case? Is that obstacle/use case important enough that it will encourage adoption of the standard, rather than either building in isolation or leaving the specific need unfulfilled? Are the use cases, and the vision of how users would adopt the standard, compelling?
Can We Build It: Do we have the technical competency to design a standard in this area? What is the relationship between the standard and other technologies (for example, to work at the scale envisioned)? Is the standard going to be so complicated that it will not be adopted [reliably]?
Can We Afford Building It / Can Our Stakeholders Support This: Somewhat less clear in this context, but something around the priority of the standard versus better uses of time, or the likelihood of endorsement by the community.
I would definitely like to see some answers to these questions when reviewing a standard as a steering committee member, and not simply trust the work stream or indeed the product review committee.
Thanks @arendon for adding to this. These are interesting points.
My understanding is that, no, our philosophy has not been standards for the sake of it; instead, a few standards of broad use.
If I have understood the direction of your comments, you are looking for a review of the feasibility of a given product (compelling use case, technical capability, good use of time, etc.) to assist in gauging whether a product merits further exploration?
Interested in further feedback here.
Also flagging further conversation on minor updates on #2
I have made some edits to the process reflecting this conversation.
These sit alongside changes made on issue #2 relating to urgent fixes, continued development and making it clearer that the study stage can be the same people but with an explicit focus on scoping work and defining the problem being addressed.
I hope these improve the overall document.
Commit links are immediately above.
In issue #2, there was a comment about the challenges of creating study groups when we are already hard-pressed to get people to develop these specifications. While I believe everyone agrees in principle with the notion that standard development needs to occur with broad community needs in mind, convening a set of experts (beyond those already participating in the GA4GH Work Streams) for any proposed standard development may be overly burdensome to the process. In some cases the needs are largely technical and best addressed by the existing specification maintainers (speaking to the issues raised by @jkbonfield in #2); in other cases there have been large efforts that have already identified needs that could be better represented with a report than by creating an ad hoc study group.
For example, in VICC we are currently working with a large clinical community (representatives from CAP, ACMG, CGC, ClinGen, and VICC) to define the minimal data elements for Gene Fusions, an effort that has taken years of building consensus guidelines, which I expect will ultimately inform the development of a corresponding data exchange model in VRS. The development of that model will then have benefited from a clinical community to drive the data requirements, and a data modeling community (the GKS VR team) to develop the nuts and bolts of the standard. It is unclear how a study group effectively complements this expertise.
Is there any way that existing efforts to build community consensus (such as the above cross-consortia study) can be used as a proposal in lieu of a new ad hoc study group? Perhaps reviewed and approved by Work Stream leads prior to further development work? And leave study group formation as an option to be exercised by those leads if they feel the groundwork for the proposal is insufficient / additional expertise is required?