clingen-data-model / allele

Documentation for data model of ClinGen

CodeableConcept for CriterionAssessment.outcome #175

Closed cbizon closed 3 years ago

cbizon commented 8 years ago

What are the values for this?

Is it something like: Pathogenic Very Strong, Pathogenic Strong, ..., Benign Strong, Not Applicable?

bpow commented 8 years ago

I'll summarize my thoughts from last week's call. I'd welcome input from those who see things differently from me.

There are two pieces of information regarding the outcome of a CriterionAssessment. First, whether the criterion is satisfied or not. Second, if satisfied, the "weight" with which the evaluated criterion would be used in evaluating pathogenicity in the overall framework. The latter is needed within the ACMG interpretation framework because the guidelines allow an analyst to apply a criterion but give it a different weight than usual:

To provide critical flexibility to variant classification, some criteria listed as one weight can be moved to another weight using professional judgment, depending on the evidence collected. For example, rule PM3 could be upgraded to strong if there were multiple observations of detection of the variant in trans (on opposite chromosomes) with other pathogenic variants (see PM3 BP2 cis/trans Testing for further guidance).

(Richards et al 2015)

- One item of discussion was the name of the field, "outcome", which was thought to carry potentially misleading connotations for clinicians. If the two types of information above are split in two, they could be "isSatisfied" and "weight". Or, if considered together, they are the "result" or "value" if you think of CriterionAssessment as a function (which takes many inputs and is applied by an analyst). I think either of these terms would be familiar with respect to other labs (the word "result" gets thrown around for intermediate values and for the final interpretation of a test, but there may be some benefit in being more specific with the two terms; I am thinking of a test like plasma amino acids, where there are a number of individual values but the final result is an interpretation by a lab director).

- The other discussion was whether the two pieces of information should be represented in one field or two in the data model for the CriterionAssessment. My perspective is that if we split them in two, then "isSatisfied" and "weight" are interdependent ("weight" is only valid as non-null if "isSatisfied" is true). Interdependent fields are, to me, a code smell (data model smell?), since they create a requirement for external validation rather than ensuring that data must be valid by the specification of the data model itself. And no one has volunteered to write a validation engine...

@cbizon and I had a subsequent discussion off-line (without reaching a conclusion) about whether there should be some variability in the representation of "not satisfied", for representing the two cases of:

- There is insufficient information to evaluate the criterion (e.g. there is no reasonable estimate of population prevalence)
- There is sufficient info to evaluate, and the criterion is clearly not met (e.g. the condition is very rare and the general population frequency of an allele is some crazy high number like >5%)

So, my proposal would be the following:

- Call this field CriterionAssessment.value
- Represent the weights and whether or not the criterion is satisfied in the same field, using allowable codes like:
  - Pathogenic Very Strong
  - Pathogenic Strong
  - ...
  - Benign Strong
  - Benign Stand-Alone
  - Criterion Not Met
  - Unable to Evaluate

Where "Criterion Not Met" and "Unable to Evaluate" are basically considered "identity" weights (that is, 0 if using an additive analogy, or 1 if using a multiplicative analogy).
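A minimal sketch of the "identity weight" idea under the additive analogy. The specific codes and numeric weights here are illustrative placeholders, not an agreed ClinGen value set:

```python
# Sketch of bpow's proposal: one coded "value" field whose codes carry both
# satisfaction and weight. Numeric weights are hypothetical placeholders.
ADDITIVE_WEIGHT = {
    "Pathogenic Very Strong": 8,
    "Pathogenic Strong": 4,
    "Pathogenic Moderate": 2,
    "Pathogenic Supporting": 1,
    "Criterion Not Met": 0,   # identity weight under addition
    "Unable to Evaluate": 0,  # identity weight under addition
}

def total_pathogenic_score(assessment_values):
    """Sum the weights; 'identity' outcomes contribute nothing to the total."""
    return sum(ADDITIVE_WEIGHT[v] for v in assessment_values)

# "Criterion Not Met" and "Unable to Evaluate" leave the total unchanged:
print(total_pathogenic_score(["Pathogenic Strong", "Criterion Not Met"]))  # 4
```

Under a multiplicative analogy the identity entries would instead map to 1 and the aggregation would be a product.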

rrfreimuth commented 8 years ago

Great write-up. I particularly liked your comments regarding interdependent attributes and “model smell”. We seem to have a lot of that in HL7 and it is sometimes hard to find the balance between model elegance and practicality.

It’s hard to argue against the approach you describe unless more information needs to be captured about why "Criterion not met" and "Unable to evaluate" were selected. Do we need to support Evidence for those options? You gave great examples:

• There is insufficient information to evaluate the criterion (e.g. there is no reasonable estimate of population prevalence)

• There is sufficient info to evaluate, and the criterion is clearly not met (e.g. the condition is very rare and the general population frequency of an allele is some crazy high number like >5%)

If this is the case, one way to approach the model without creating interdependent fields is to define the different types of possible results of a CriterionAssessment: met, not met, unable to evaluate. Those 3 types could be specializations of a more generalized CriterionAssessmentOutcome class, which is associated with CriterionAssessment and has a way of capturing supporting Evidence. The "met" class (only) could have a required association to a "result" class that contains an attribute for the PVS..Benign value set and an attribute for "weight". Any other attributes that might be needed could be added to the respective classes.

I hope this makes sense – I tried to crank this out before my next meeting. Note that I’m not necessarily arguing for this approach, rather I’m offering it as an option if we wanted a bit more flexibility in the model (at the expense of complexity).
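A minimal sketch of the class structure described above, assuming Python dataclasses as a stand-in for the modeling language; for brevity the required "result" association is flattened into category/weight fields on the "met" specialization, and all names beyond those in the thread are illustrative:

```python
# Sketch of the proposed hierarchy: a generalized CriterionAssessmentOutcome
# with three specializations, where only the "met" case carries a result
# (category + weight). Field names beyond the thread's are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CriterionAssessmentOutcome:
    evidence: List[str] = field(default_factory=list)  # supporting Evidence

@dataclass
class MetOutcome(CriterionAssessmentOutcome):
    category: str = "Pathogenic"   # from the PVS..Benign value set
    weight: str = "Moderate"       # possibly modified per the guideline

@dataclass
class NotMetOutcome(CriterionAssessmentOutcome):
    pass  # evidence can explain why the criterion is clearly not met

@dataclass
class UnableToEvaluateOutcome(CriterionAssessmentOutcome):
    pass  # evidence can explain what information was missing

outcome = MetOutcome(evidence=["variant observed in trans with pathogenic variant, x3"],
                     category="Pathogenic", weight="Strong")
```

The point of the specialization is that "weight" simply does not exist on the not-met and unable-to-evaluate classes, so no external validation rule is needed.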

Thanks, Bob


srynobio commented 8 years ago

Sorry to jump into the discussion late, but does anyone know if these values have associated LOINC codes? Free-text values are good for visualization, but codes will be important as well.

rrfreimuth commented 8 years ago

I completely agree that coded values would be best.

As far as I am aware, these terms are not in LOINC. We could talk to LOINC about adding them, but LOINC operates more at the test interpretation/result level; the assessment of individual variants might be too low level for them. That said, they might be interested in creating a value set or two to support ClinGen. ☺ I’d be happy to mediate those interactions – I just did the same for CPIC (PGx interpretations).


rrfreimuth commented 8 years ago

Thanks for the great discussion today. I think we’re in agreement about the concepts in the model (which is always helpful), regardless of whether we represent them one way or another. A slightly updated (and clarified?) diagram is attached (and checked into the VP server).


Restating the question on the table: Should we capture the outcome of an assessed criterion in a single field (e.g., “pathogenic strong”) or in two fields (e.g., “pathogenic”, “strong”)? The way the information is stored and the way it is displayed could be different, of course.

Fundamentally, I think this is a question of terminology management. In general, pre-coordinating terms can result in headaches later as it makes maintenance and semantic inference more complex, but it can simplify small value sets in the short term. Does ACMG anticipate changes to the set of terms used to describe level of evidence? Precoordination would make mappings between “old” and “new” assessments more difficult.

Now that I’ve had some time to step back and clarify my thoughts, I wanted to document my understanding (and give you all a chance to correct me as needed). I am about to restate what is probably common knowledge; you can stop reading now, if you’d like. Going back to this excerpt:

To provide critical flexibility to variant classification, some criteria listed as one weight can be moved to another weight using professional judgment, depending on the evidence collected. For example, rule PM3 could be upgraded to strong if there were multiple observations of detection of the variant in trans (on opposite chromosomes) with other pathogenic variants (see PM3 BP2 cis/trans Testing for further guidance).

My understanding was that PM3 would normally be considered “moderate” evidence for pathogenicity when met (table 3), but in some cases PM3 could be considered a vote for “strong” evidence for pathogenicity. In that case, the criterion (PM3) isn’t changed at all, but its weight when used as part of the evidence aggregation calculation is “strong” rather than “moderate”.

Restated using the example above: Criterion PM3 (recessive disorder; variant detected in trans with a [different, known-to-be-] pathogenic variant) is evaluated for variant 123. The criterion is met, but multiple observations of this variant in trans with other pathogenic variants were documented, so rather than counting as “moderate” evidence of pathogenicity, this criterion is determined to count as “strong” evidence of pathogenicity. Therefore, in table 5 (of the guideline) the outcome of this criterion is not considered an instance of “moderate” but one of “strong”: PM3 => strong evidence for pathogenicity. Therefore, the criteria listed in parentheses in table 5 are not set in stone (since PM3 is not always “moderate”).
Furthermore, it is the resulting level of evidence (“very strong”, “strong”, “moderate”, “supporting”, or “stand-alone”, as determined from tables 3 and 4, and after upgrades or downgrades) that should be used during aggregation to determine pathogenicity, rather than the criteria codes themselves. To assess a given variant (in pseudocode):

# evaluate criteria
For each criterion (right columns of tables 3 and 4)
    If ( criterion met )
        Look up evidence level (left columns of tables 3 and 4)
        Apply modifier (upgrade or downgrade), document evidence for doing so  # this is what I tried to model today
        Store modified evidence level (with parent category of “pathogenic” or “benign”)

# combine evidence from satisfied criteria
Tally number of each modified evidence level (under each category of “pathogenic” or “benign”)
Look up pathogenic classification (pathogenic vs likely path vs path not met) (table 5)
Look up benign classification (benign vs likely benign vs benign not met) (table 5)

# determine pathogenicity
If ( path classification == not met && benign classification == not met )
    then overall classification = “uncertain signif”       # have neither result
Else if ( path classification != not met && benign classification != not met )
    then overall classification = “uncertain signif”       # contradictory evidence
Else if ( path classification != not met )
    then overall classification = path classification      # have path result only
Else
    overall classification = benign classification         # have benign result only

In summary, I think we need to capture the following for each criterion assessment:

- Criterion code (e.g., PM3)
- Assessed evidence
- Assessment outcome (e.g., met, not met, can’t assess) (and reason why chosen?)
- If met, the evidence level (referred to as “weight” during today’s call) assigned, which may have been modified (and rationale for modification?)

Please correct me if my understanding is incorrect.
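The "determine pathogenicity" step of this pseudocode can be made executable. This is only a sketch of the combination logic; the actual table-5 lookups (tallying evidence levels into per-category classifications) are deliberately left out, since they depend on the guideline's counting rules:

```python
# Executable sketch of the final combination step from the pseudocode above.
# Inputs are the already-derived per-category classifications; "not met" is
# used as the sentinel for a category whose table-5 rules were not satisfied.
def combine(path_classification, benign_classification):
    """Combine per-category classifications into an overall classification."""
    if path_classification == "not met" and benign_classification == "not met":
        return "uncertain significance"   # have neither result
    if path_classification != "not met" and benign_classification != "not met":
        return "uncertain significance"   # contradictory evidence
    if path_classification != "not met":
        return path_classification        # have pathogenic result only
    return benign_classification          # have benign result only

print(combine("likely pathogenic", "not met"))        # likely pathogenic
print(combine("likely pathogenic", "likely benign"))  # uncertain significance
```

Note that ordering matters: the "contradictory evidence" case must be checked before the single-category cases, which is why the pseudocode's conditions behave as an if/else-if chain.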
larrybabb commented 8 years ago

I parsed paragraph 2 on page 7 of the guidelines https://www.acmg.net/docs/standards_guidelines_for_the_interpretation_of_sequence_variants.pdf

and got the following:

The ACMG criterion classifications are:

- Pathogenic or Benign
  - Pathogenic is used for (pathogenic or likely pathogenic)
  - Benign is used for (benign or likely benign)

Each Pathogenic criterion is weighted as one of:

- Very Strong (PVS1)
- Strong (PS1-4)
- Moderate (PM1-6)
- Supporting (PP1-5)

Each Benign criterion is weighted as one of:

- Stand-Alone (BA1)
- Strong (BS1-4)
- Supporting (BP1-6)

The numbering is simply for distinct label names, not a reflection of weighting.

Scoring rules use the “satisfied” criteria from above to derive a classification from the five-tier system:

- Pathogenic
- Likely Pathogenic
- Uncertain Significance
- Likely Benign
- Benign

I do agree that we need a “result” or “outcome” coded value as well as an indicator for both “unmet” (or unsatisfied) and “insufficient data to evaluate” (or unable to evaluate).

I am not convinced that it is worthwhile to model the coded values so as to parse out these special indicators and the Pathogenic vs. Benign distinction, etc.

In the end, these “outcome” codes will be very specific to each CriteriaSet or Guideline that gets developed. This ValueSet will be the “ACMG Criteria Assessment Outcome Result”, and the outcomes should be coded as a nominal list of values. I do not see the need for the complexity at this time; modeling it more finely will distract adopters and raise complexity, and it is not clear that the computational benefits would be worth the cost.

My opinion is for the list to be coded as

--- met outcomes ----

Pathogenic Very Strong

Pathogenic Strong

Pathogenic Moderate

Pathogenic Supporting

Benign Stand-alone

Benign Strong

Benign Supporting

---- not met outcome ----

Criterion Not Met

---- unevaluated outcome ----

Unable to Evaluate (or Assess?)

Comments (or reason/justification) for the final coded outcome will be provided; implementers may choose, for example, to require a reason for “Unable to Evaluate”.
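This flat nominal list could be sketched as a simple enumeration. The require-a-reason check below illustrates the implementer option mentioned above; it is one possible policy, not part of the proposed model:

```python
# Sketch of the proposed nominal value set, coded as a flat enumeration.
from enum import Enum

class CriterionAssessmentOutcome(Enum):
    # met outcomes
    PATHOGENIC_VERY_STRONG = "Pathogenic Very Strong"
    PATHOGENIC_STRONG = "Pathogenic Strong"
    PATHOGENIC_MODERATE = "Pathogenic Moderate"
    PATHOGENIC_SUPPORTING = "Pathogenic Supporting"
    BENIGN_STAND_ALONE = "Benign Stand-alone"
    BENIGN_STRONG = "Benign Strong"
    BENIGN_SUPPORTING = "Benign Supporting"
    # not met outcome
    CRITERION_NOT_MET = "Criterion Not Met"
    # unevaluated outcome
    UNABLE_TO_EVALUATE = "Unable to Evaluate"

def validate(outcome, comment=None):
    """Example implementer policy: require a comment for Unable to Evaluate."""
    if outcome is CriterionAssessmentOutcome.UNABLE_TO_EVALUATE and not comment:
        raise ValueError("A reason is required for 'Unable to Evaluate'")
    return True
```

Because the codes are nominal, consumers treat them as opaque labels; any Pathogenic/Benign or weight semantics would live in the guideline, not in the value set itself.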

ronakypatel commented 8 years ago

One thing I would like to add here is that Uncertain Significance can arise in two ways:

1) Uncertain Significance, due to insufficient evidence
2) Uncertain Significance, due to conflicting evidence

I think it would be a good idea to separate these, to distinguish the two types of uncertain significance.

larrybabb commented 8 years ago

I believe that, given the way we’ve modeled the provenance, this should be clear. Uncertain Significance will be the final interpretation. We are proposing an outcome of “unable to evaluate” on the individual assessment of a criterion, and then allowing the assessor to add a comment describing why he/she was unable to evaluate (i.e. due to insufficient evidence, conflicting evidence, or any other reason). We have not decided to code the reasons for “unable to evaluate”.
