ga4gh / refget

GA4GH Refget specifications docs
https://ga4gh.github.io/refget

Discussion on undigested attributes and sorted-name-length-pairs #40

Open nsheff opened 1 year ago

nsheff commented 1 year ago

Related discussions:

History

For years we've debated whether sequence collections would be ordered or unordered. In #17 we determined that the digests will reflect order. However, it is still valuable to have a way to identify that two sequence collections have the same content but in a different order. While the comparison function can do this, it is not as simple as comparing two identical identifiers. After lots of debate, both in person and on GitHub (#5), we never came up with a satisfying way to do this. Here is a proposal, arising from discussion with @sveinugu, that can solve this and other problems. The idea has a few components: 1. Undigested arrays; 2. Relaxing the 1-to-1 constraint imposed on default arrays; 3. A specific recommended undigested array named e.g. "names-lengths". In detail:

1. Undigested attributes

Undigested attributes are attributes of a sequence collection that are not considered when computing the top-level digest. We suggest formalizing this as part of the specification. In the schema that describes the data model for sequence collections, each attribute will be annotated as either "digested" or "undigested". Only attributes marked as "digested" will be used to compute the identifier. Other attributes can be stored, returned, or otherwise used just like the digested attributes; the only difference is that they do not contribute to the construction of the identifier. This is easy to implement: for returning values, you return the complete level 1 structure, but before computing a digest, you first filter down to the digested schema attributes.
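A minimal sketch of that filtering step, assuming a sha512t24u digest (SHA-512 truncated to 24 bytes, base64url-encoded) over canonical JSON; the function names are illustrative, and the sorted-keys/no-whitespace serialization here is only an approximation of RFC 8785 canonicalization, not part of the spec:

```python
import base64
import hashlib
import json

def sha512t24u(data: bytes) -> str:
    """GA4GH sha512t24u: SHA-512 truncated to 24 bytes, base64url-encoded."""
    return base64.urlsafe_b64encode(hashlib.sha512(data).digest()[:24]).decode("ascii")

def top_level_digest(level1: dict, digested_attrs: list) -> str:
    """Digest only the attributes the schema marks as digested; undigested
    attributes are still stored/returned, but are filtered out here."""
    filtered = {k: v for k, v in level1.items() if k in digested_attrs}
    canonical = json.dumps(filtered, separators=(",", ":"), sort_keys=True)
    return sha512t24u(canonical.encode("utf-8"))
```

With this approach, adding or removing an undigested attribute leaves the top-level digest unchanged.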

2. 1-to-1 constraint

All of our original arrays have the constraint that they are 1-to-1 with the other arrays, in order. In other words, the 3rd element of the names array MUST reference the 3rd element of the sequences array, etc. They must have the same number of elements and matching order.

I propose relaxing this constraint for custom arrays/attributes. In fact, this, too, could be specified in the JSON Schema, e.g. as order-length-index: true (or something similar). In other words, arrays that do align 1-to-1 with the primary arrays would be flagged, but not all attributes would have to align. This would be useful when users want to store information about a sequence collection that is not an ordered array with one element per sequence, or for situations that require re-ordering.

3. The names-lengths array/attribute

Next, we'll use both these features for a new proposal. An example of a useful undigested attribute that lacks the 1-to-1 constraint is the names-lengths array. It is constructed like this:

  1. Lump together names and lengths from the primary arrays into an object, like {"length":123,"name":"chr1"}.
  2. Canonicalize according to the seqcol spec.
  3. Digest each name-length pair individually.
  4. Sort the digests lexicographically.
  5. Add as an undigested, non-1-to-1 array to the sequence collection.
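The five steps above could be sketched roughly like this (the helper names are illustrative, and canonicalization is approximated with sorted keys and no whitespace rather than a full RFC 8785 implementation):

```python
import base64
import hashlib
import json

def sha512t24u(data: bytes) -> str:
    """GA4GH sha512t24u: SHA-512 truncated to 24 bytes, base64url-encoded."""
    return base64.urlsafe_b64encode(hashlib.sha512(data).digest()[:24]).decode("ascii")

def names_lengths_array(names, lengths):
    """Digest each {"length": ..., "name": ...} pair, then sort lexicographically."""
    digests = []
    for name, length in zip(names, lengths):
        pair = {"length": length, "name": name}
        # Approximate RFC 8785 canonicalization: sorted keys, no whitespace
        canonical = json.dumps(pair, separators=(",", ":"), sort_keys=True)
        digests.append(sha512t24u(canonical.encode("utf-8")))
    return sorted(digests)
```

Because the final sort discards the original sequence order, two collections that differ only in ordering produce the same array.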

If this array is digested, it provides an identifier for an order-invariant coordinate system for this sequence collection. It doesn't actually affect the identity of the collection, since it is computed from the collection, so there's no point including it in the level 0 digest computation; that's why it should be considered undigested. It also does not depend on the actual sequence content, so it can be consistent across two collections with different sequence content, and therefore different identifiers. It lives under the names-lengths (or something) attribute on the level 1 sequence collection object, but it's not digested.

The term "undigested"

To clarify the digesting: in this case, I am actually proposing to digest this array itself in the same way that all the arrays get digested; but this array would not get included in the level 1 object that digests to form the level 0 digest. So maybe "undigested" is not the right term. Ideas?

Add to spec?

There are lots of other non-digested arrays that could be useful. This particular one is pretty universal and useful, so it seems worth adding formally to the specification as a RECOMMENDED, but not required, undigested array.

sveinugu commented 1 year ago

Thanks, @nsheff for a clear writeup and concrete suggestions for how to incorporate our ideas into the specification in a simple and non-intrusive way! Having spent so much time debating this issue, I think it would be awesome if we were able to add the "names-lengths" recommendation to the v1.0 specification!

I believe I am all for the suggestion as it stands logically/algorithmically. I will think through the naming of the concepts/array and perhaps have some other suggestions for those, but that is in any case a minor issue.

Some comments on the usefulness of this (this is a sort of summary of comments I made earlier across several pages, but hopefully this time my comments are briefer and easier to follow):

At the top of our rolling minutes document there is a motivational sentence that I assume has been there from the beginning: "Let’s make the world slightly more interoperable and end the chr1 vs 1 debacle of bioinformatics!". I do believe the above suggestion brings us much closer to actually following through on this intention. Here's why:

This leads me to what I believe is the main advantage of the solution proposed above:

If implemented in practice by the major adopters of the seqcol standard, the mapping between the two identifier types will be directly available within all sequence collection records, in the relationship between the level 0 digest and the level 1 digest of the names-lengths array!

The alternative will be for third parties to extract seqcol records, calculate the corresponding order-invariant coordinate system digest/identifiers and host the mapping independently, without the foundation of a standard to lean on. In practice, I don't believe this path will end up providing a realistic and universal solution to the issues.

So to conclude where I started:

How can this then help solve the chr1/1 schism?

A very nice property of the names-lengths digest in particular is that all sequence collection variants with differing sequence contents (e.g. different patches) and different orderings of sequences will, as long as they share the same set of names and lengths, end up with the same order-invariant names-lengths digest. There will in this case be only one single digest representing the order-invariant coordinate system of all the "chr1"-type variants, while another single digest will represent the order-invariant coordinate system of all the number-based variants. Consequently, the mapping between these two digest/identifier types will be simple. One can then easily map from any of the "chr1"-type sequence collections to any of the compatible number-based collections by merely extracting the level 1 contents and consulting a simple map that relates the relatively limited set of order-invariant coordinate system identifiers across the schism. These identifier-to-identifier mappings can then be easily represented by third parties in "semantic web"-style knowledge bases, which again are very suitable as input for AI-based methodologies. All this depends on adoption by the major sequence collection deposition repositories.

(The only caveat with the above that I can see now is the issue of subsets of sequences. Coordinate systems are in practice often compatible if one is a subset of the other, or if the only differences appear in the "non-priority" sequences, such as the alternative sequences that are often added in reference genome patches. A variant of the names-lengths array that also filters the sequences according to a priority array could then possibly be the best solution in the context of genome browsers. That would be an example of another useful "undigested" array type made possible by the two conceptual additions suggested here. However, adding a priority-related array or other similar variants to the specification could very easily open up the kind of drawn-out discussions that would further delay the delivery of v1.0 and which we therefore should avoid. Which is why (I assume) @nsheff limited this suggestion to only one extra recommended, undigested array: the names-lengths one.)

My conclusion:

At the very least, the implementation of the names-lengths array is pretty straightforward and will, in the context of the specification text itself, serve as an illustrative example of the usefulness of the suggested additional array properties, should we decide to add them. More importantly, I believe the names-lengths array in itself will provide major benefits to the community if adopted, the main contribution being the possibility of direct mapping between identifiers across the schism (as I have argued above). Including that array as a recommendation will furthermore sketch out a direction for future development of the specification, while at the same time opening up the possibility for third parties to implement custom extensions in the same vein right away, within the bounds of the v1.0 specification. Lastly, we will also get some solid results out of the time we have spent discussing the order issue!

So that's my (more than) two cents!

(Note: I always end up writing too long texts, which is why I am very happy that @nsheff wrote the actual suggestion above! )

nsheff commented 1 year ago

Here are some terminology ideas I had, for what terms the schema would use to describe the attributes:

e.g.


type: object
properties:
  lengths:
    type: array
    description: "Number of elements, such as nucleotides or amino acids, in each sequence."
    items:
      type: integer
  names:
    type: array
    description: "Human-readable identifiers of each sequence, commonly called chromosome names."
    items:
      type: string
...
required:
  - lengths
inherent:
  - lengths
  - names
  - sequences
collated:
  - lengths
  - names
  - sequences
  - topologies

sveinugu commented 1 year ago

Here are some terminology ideas I had, for what terms the schema would use to describe the attributes:

I like this very much. Very precise terms and natural way to specify in the JSON schema.

For what it's worth, as someone with English as a second language, I did not really know the precise meaning of the word "collate" before now; I thought it had a more general meaning, like "collect", "organize" or similar. But that might just be a random hole in my knowledge. So if this is a word that can be assumed understandable to the typical reader/user, I am all for it. It seems very precise.

sveinugu commented 1 year ago

I think "Inherent" is spot on!

nsheff commented 1 year ago

"collate" is typically used when you're printing multiple copies of a multi-page document; non-collated would be like pages 111222333, collated would be 123123123 -- so the collated output is ready for binding. It essentially means "each unit in its proper order".

I spent some time searching for a word to mean the 1-to-1 idea and "collated" just seemed like a match that captures that whole concept in a single word.

nsheff commented 1 year ago

I have a new implementation now that handles inherent attributes and also provides names-lengths arrays. I'm working on a few demos at: https://seqcolapi.databio.org/

nsheff commented 1 year ago

Here's a function that computes the names-lengths array as described above: https://github.com/refgenie/seqcol/blob/46675b669ae07db9da4fc3d113fefa2c1667b1fb/seqcol/seqcol.py#L300-L310

nsheff commented 1 year ago

And here's my code for handling inherent properties:

https://github.com/databio/henge/blob/b53333072a51a0bd532f14d3ed949dd02e67d6be/henge/henge.py#L541-L552

https://github.com/databio/henge/blob/b53333072a51a0bd532f14d3ed949dd02e67d6be/henge/henge.py#L376-L383

tcezard commented 1 year ago

I was wondering about the relationship between the different types of properties: I was assuming that if a property is inherent then it has to be collated, since all non-collated properties I can think of are metadata that do not define the collection. That might not be something the specification wants to enforce, though.

Also, I was thinking that we should specify how collated properties should be handled in the comparison function, i.e. they should be part of the comparison process, similar to what @nsheff has implemented here. But what about the non-collated attributes?

sveinugu commented 1 year ago

On the question of whether the non-collated sorted_names_lengths array (or whatever we end up calling it) should be required, I think we should consider issue #36, which I don't think we have discussed much. Basically, if one seqcol is a reordering of the other, the current comparison endpoint does not detect whether array values have gone out of sync (1-to-1) with the respective values in other collated arrays (e.g. if a sequence ends up with the wrong name). Since the lengths are derived from the sequences, a sorted_names_lengths array will in most cases expose the issue for the core arrays, as the digests per name-length pair will change if the core arrays get out of sync (assuming the sequence collection is still technically valid, with lengths and sequences in sync). The only exception is cases where different sequences in the same seqcol have the same length.

One possible full solution across all arrays would be to define a new collated and non-inherent array which consists of a digest of the full row-wise representation of each element across all other collated arrays (inherent or not). The contents of this array (pre-digest) would, I believe, look very similar to the row-wise seqcol idea we had in the very beginning. However, adding such an array is not on the table now, and even if it gets on the table later, that is something that I believe should happen based on other needs. (Note that such an array is different from an array that provides row-wise identifiers, which I have argued for elsewhere, as an identifier array would be limited to only include values from other arrays that are both collated and inherent!) I just wanted to mention this option to complete the picture.

Given that we want to fix issue #36 (and I do think we should), I think there are now two main options:

a) keep the comparison function as it is but make the sorted-names-lengths required, at least if the comparison function is implemented. This would cover most cases (but only for the core arrays) and it requires very little change. Or

b) add another output in the comparison function that provides a check on whether the array values are in sync or not. This separates fixing issue #36 from the issue of whether sorted_names_lengths should be required. Separating these two issues is definitely cleaner, but will on the other hand require extending the comparison endpoint. I am also not sure how easy it is to implement such an "in-sync" check in practice.

sveinugu commented 1 year ago

While writing the above comment (https://github.com/ga4gh/seqcol-spec/issues/40#issuecomment-1592316772), I think I have myself changed opinion from supporting (a) to (b), as issue #36 is really a technical validity issue for the comparison function and providing a partial fix by adding data content just makes everything more confusing while not fully solving the issue.

So if I leave this issue aside, I see that making the sorted_names_lengths array required could be a nuisance for very simple seqcol implementations (e.g. based on S3 buckets). I do, however, think we should clearly recommend it, and one way of pushing implementers to use it would be to "nickname" the array as the "genome browser compatibility" array or similar, so that it is clearer what this is useful for.

nsheff commented 11 months ago

Should it be sorted_name_length_pairs or sorted-name-length-pairs?

In the past I've been using underscores; but in the compare endpoint we decided to use hyphens for the return values.

tcezard commented 11 months ago

My vote is for underscores. We should also decide on how the sorted_name_length_pairs attribute is computed. I've seen at least 2 different descriptions of the attribute:

  1. {
       "sorted_name_length_pairs": [
         "chr1_103483637",
         "chr2_23873"
       ]
     }

  2. {
       "sorted_name_length_pairs": [
         {"names": "chr1", "lengths": 103483637},
         {"names": "chr2", "lengths": 23873}
       ]
     }

Was there another one? Which one should we go for?

nsheff commented 11 months ago

That's interesting -- I only knew of one suggestion and it was neither of those ;) it's:

{"length":123,"name":"chr1"}.

that's also how I implemented it... https://github.com/refgenie/seqcol/blob/46675b669ae07db9da4fc3d113fefa2c1667b1fb/seqcol/seqcol.py#L300-L310

So, that's my vote. I don't like the underscore-concatenated string format; I like re-using the JSON scheme we've already been using.

sveinugu commented 11 months ago

I second @nsheff's suggestion (and implementation) above. I like that the "name" and "length" keys are included (in singular), which makes the non-digested contents (more or less) self-explanatory; this might prove useful in certain scenarios. The redundancy of repeating "name" and "length" for each sequence should not matter for performance/efficiency, as (in most cases) the end product to be stored is in any case the digests, which have a fixed number of characters (32? Cannot remember right now).

sveinugu commented 11 months ago

All for underscores. Cannot remember if we discussed this for the compare endpoint, but perhaps we could still harmonize this post-ADR? There is also, I suppose, a possibility to harmonize the style across GA4GH services (isn't there a GA4GH style document? If not, shouldn't there be?)

sveinugu commented 11 months ago

An illustrative use case to be added to the specification could be something like this:

  1. A specific instance of a genome browser (for a particular reference genome coordinate system) can be fully defined logically by the level 1 digest of the sorted-name-length-pairs array of the corresponding sequence collection (as the order of the sequences does not matter).

  2. To check whether a data file is compatible with a genome browser instance one can then compare this digest with the similar level 1 digest of the sequence collection used to generate the file (assumed to be attached to the file as metadata somehow).

There are only two possibilities for compatibility:

2A) If the digests are equal, then the data file is directly compatible.

2B) If not, one can check the compare endpoint to see whether the sorted-name-length-pairs array of the sequence collection attached to the data file is a direct subset of the same array in the sequence collection attached to the genome browser instance. If so, the data file is also compatible.

3) For efficiency, if 2B is true one can add the corresponding sorted-name-length-pairs level 1 digest (for the data file) to a list of cached digests that are known to be compatible for a particular genome browser instance. In practice, this list will be short as only a handful of variants will appear in a real-life scenario (e.g. including or excluding chrM and/or alternative sequences, reference genome patches (possibly removing some unplaced sequences) and similar).

Thus, in a production setting, the full compatibility check can be reduced to a lookup into a short, pre-generated list of sorted-name-length-pairs level-1 digests known to be compatible with a specific genome browser instance. One would not even need to contact a seqcol server at all if all data files to be considered for compatibility come pre-annotated with such level-1 digests.
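The lookup logic of steps 2A/2B/3 might be sketched like this; `subset_check` stands in for a call to the compare endpoint, and the function and parameter names are hypothetical, not from the spec:

```python
def is_browser_compatible(file_digest: str,
                          browser_digest: str,
                          compatible_cache: set,
                          subset_check) -> bool:
    """Decide whether a data file is compatible with a genome browser instance,
    comparing the sorted-name-length-pairs level 1 digests attached to each.

    subset_check(a, b) stands in for a compare-endpoint query testing whether
    the pairs behind digest a are a subset of the pairs behind digest b.
    """
    if file_digest == browser_digest:
        return True  # 2A: identical digests, directly compatible
    if file_digest in compatible_cache:
        return True  # 3: previously verified, short-circuit via the cache
    if subset_check(file_digest, browser_digest):
        compatible_cache.add(file_digest)  # remember for next time
        return True  # 2B: subset of the browser's coordinate system
    return False
```

In production, most checks reduce to the digest-equality or cache-membership branches, so the compare endpoint is rarely contacted.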

tcezard commented 10 months ago

I want to confirm another point brought up by @sveinugu earlier: we store the digested version of the object, not the object itself. The algorithm for generating the array is:

  1. For each element:
     a. Construct the object with singular property names: {"length":123,"name":"chr1"}
     b. Normalise with RFC 8785
     c. Digest with sha512t24u
  2. Sort the array lexicographically
  3. Store the resulting array

Is that correct?

Also, for the seqcol schema, the current description I use is:

"Objects composed of length and name of a sequence digested and ordered lexicographically"

But there might be a better one.

tcezard commented 10 months ago

The newly published spec now answers most of my questions above.

I have an additional question about sorted-name-length-pairs: I'm wondering if there are use cases for storing the whole array of digested objects over only storing the level 1 digest as a single value. I haven't fully worked it out, but the only use case I can see is to check for subsets of name-length pairs.

sveinugu commented 10 months ago

I have an additional question about sorted-name-length-pairs: I'm wondering if there are use cases for storing the whole array of digested objects over only storing the level 1 digest as a single value. I haven't fully worked it out but the only use case I can see it to check for subsets of name-length pairs.

I think this is a very useful use case, though!

This also relates to issue #36, which we have yet to discuss and which is bugging me a bit. Storing the array of digests is a partial solution for that issue, but only for the names and lengths arrays.

Another similar use case is simply testing for the existence of a particular name-length pair in a sequence collection, e.g. which chrM coordinate system is included in this hg19 seqcol variant? An alternative would be to find "chrM" in the names array, note the index, and find the corresponding length in the lengths array. Even though this is not very complex, I believe the simplicity of just grepping for a digest will prove very handy for e.g. scripts or one-liners.
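A rough sketch of such an existence check (the helper names and the chrM lengths used below are illustrative; canonicalization is approximated with sorted keys and no whitespace):

```python
import base64
import hashlib
import json

def sha512t24u(data: bytes) -> str:
    """GA4GH sha512t24u: SHA-512 truncated to 24 bytes, base64url-encoded."""
    return base64.urlsafe_b64encode(hashlib.sha512(data).digest()[:24]).decode("ascii")

def pair_digest(name: str, length: int) -> str:
    """Digest a single {"length": ..., "name": ...} pair the same way the
    sorted-name-length-pairs array is built."""
    canonical = json.dumps({"length": length, "name": name},
                           separators=(",", ":"), sort_keys=True)
    return sha512t24u(canonical.encode("utf-8"))

def has_pair(stored_digests, name: str, length: int) -> bool:
    """Membership test: is this name-length pair in the stored digest array?"""
    return pair_digest(name, length) in stored_digests
```

With the digests stored, the check is a single membership lookup (or a grep) with no need to correlate indices across the names and lengths arrays.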