sudo-bmitch opened this issue 3 years ago
From https://github.com/opencontainers/distribution-spec/issues/114#issuecomment-748172637:
It would be useful to get a list of json including metadata
Emphatic yes, but I'd propose returning a list of descriptors instead of just mediaType and digest: https://github.com/opencontainers/distribution-spec/issues/22#issuecomment-470727620
Whether that's in the form of an image index or just a json array or even json-lines, I don't care that much, but I would really love for this to be available. cc @justincormack this is related to what I was saying in the OCI call a couple weeks ago
@jonjohnsonjr seeing the recording of that meeting triggered this for me, went looking back at #22 and realized this point was rather buried. Getting this as a json list of descriptors makes a lot of sense to me. Definitely don't want the list of strings like we have with tag/list today.
Yes, descriptors seems correct.
seeing the recording of that meeting triggered this for me, went looking back at #22 and realized this point was rather buried
Excellent! That's why I brought it up 👍
From my earlier comment:
Whether that's in the form of an image index or just a json array or even json-lines, I don't care that much, but I would really love for this to be available.
I'll campaign a little bit for my image-index-as-the-API idea.
One nice feature of making this format be an actual image index (and not a list of descriptors), would be the ability to reuse code that knows how to deal with an image index already. So if you wanted to e.g. mirror an entire repository from one place to another, I already have code that can deal with pulling and pushing an image index. I can just reuse that.
Continuing the example, if you want to keep your mirror up to date, you might need to poll tag values every so often to make sure they haven't changed. If we could just ask a registry to show us everything in one request, we can just poll once per repo instead of once per tag. Assuming we cache everything aggressively, this shouldn't even be that expensive.
If we wanted, we could even expose the digest of this top-level image index so that clients can essentially ask "hey, has anything in this entire repo changed?" with a HEAD request, without even having to actually look at the entire structure. This could also allow clients to ask for the state of a repo from the past, if registries allow you to GET the list of manifests by digest (though, that might be too expensive to keep around).
If we do expose the digest of this structure, then re-computing the digest on every push/delete can be somewhat expensive for registries, so they may not want to do that. I think that's fine -- perhaps the Docker-Content-Digest header for listing manifests is optional, and we can direct clients to explicitly look for ETags as a cheaper means to do this.
Another downside is that pagination could be a little weird. For enormous repositories, we probably don't want to send the entire list in one response, and paginating the entries of an image index might frighten and confuse some clients (of course, this is something we can just specify, since it's an entirely new API).
Pagination also makes the digest thing weird -- is this the digest of the whole repo? Or just this page? We may just want to punt on "digest of a repo" stuff for now.
Having consistency across all registries for discovering content is a definite need. Removing the _catalog API cleared the way towards an implementation that could work across registries.
Might I suggest we capture the various requirements for the listing API? There are lots of great designs, but only a few that would meet the requirements we outline. For instance, when we were iterating on the Notary v2 discovery prototypes, we needed a way to discover all the artifacts that were dependent upon another artifact. eg: what signatures exist for a given digest.
We played with a few ideas, including what the schema would be. I think we considered image-index, but realized that any listing API needs paging. If you look at the helm index.yaml example, it was fine when there were only a few charts. But it fails when there are lots of charts. Imagine an image-index that has hundreds, or thousands, of manifests. Having an API that supports paging from 0, 1 to infinity means we never break. We'd also likely want to incorporate some sort of sorting (push-date, last-pull-date, ..., annotation value, ...). I could see different registries having different search criteria enabled, but the API should be consistent. I'd actually hope we all supported some basic set of functionality, but I'm a purist with a reality check.
Here's a hacked up Notary v2 prototype-1 version, pre the newly proposed oci.artifact.manifest. This was used to get signatures for an image. You could filter on mediaType, to only get cncf.notary.v2 artifact types. However, we didn't yet add a way to filter further on just the signatures for registry.acme-rockets.io. @aviral26 just made an update for the OCI Artifact manifest, but this is a prototype, based on the Notary v2 and Artifacts requirements.
We need to iterate further for the notary & artifact scenarios, but I'd love to see this evolve (see OCI Artifacts and a View of the Future). So, just suggesting we capture and iterate on a list of requirements for the listing API. Then we're debating which design meets the requirements, vs. which "design is better".
Also note, Docker has transitioned docker/distribution to CNCF. It's now located at distribution/distribution, and we have a new set of active maintainers from GitLab, GitHub, DigitalOcean, VMware, and Docker that want/need to keep the innovations moving forward.
We'd also, likely want to incorporate some sort of sorting (push-date, last-pull-date, ..., annotation value, ...)
Filtering based on annotation values is interesting. I don't love the other examples because they impose additional storage requirements on registries. I worry that waiting for a perfect solution (i.e. requirements for features that don't exist) will block a massive improvement ~indefinitely.
Can we agree on a simple set of requirements with possible extensions?
I think at a minimum we need:

1. Return a list of descriptors
2. Allow that list to be paginated

It would be nice to have:

3. Tag info, either:
   - Map from tags to digests (we could add this to /tags/list, maybe?)
   - List of tags on these descriptors -- would need to add a field?

We should leave open for extension:

4. Sorting
5. Filtering
I really don't want to block on 4 and 5. To borrow your "cloud filesystem" metaphor, I agree that having a SQL-like API for querying the filesystem would be really nice, but as is our filesystem can't even list files or directories... this is bad.
We should keep filtering/sorting in mind so that implementations or specs don't preclude it, but I don't think it's mandatory, and I don't expect all registries to implement this stuff. It should be possible to implement a registry with zero logic, just static files. We should be able to define some optional querystring parameters for this.
Similarly, I don't want to block this on notary concepts that are still prototypes.
a SQL-like API
Oh no, you brought up winfs - yikes :flushed:
I like the extension model or the reserved for future space approach.
Having the full list in mind, allows us to design a multi-phased approach. It's like building the house, knowing you want to add a deck or garage later. If we can get a full list, we can prioritize, while reserving space in the design for known additions.
Might I suggest a PR for manifest-list-requirements.md that also articulates a bit of the scenarios? The PR can be in a manifest-list branch that we iterate upon until we have an actual spec proposal, which we could then review and decide how we incorporate the requirements list back into the spec as needed.
Might I suggest a PR for manifest-list-requirements.md that also articulates a bit of the scenarios?
I'm happy to send a PR, but I'd like some feedback from other registry operators. Ideally, we could talk about this on the dev call. I would personally commit to implementing this for gcr.io and pkg.dev if I could get some kind of consensus and commitment from other registries. If this is missing something that anyone feels is a hard requirement, I'd like to know. If this is too onerous to implement for any registries, I'd like to know.
It would be good enough for me if I could get any two of {Docker Hub, ECR, ACR, Quay} to implement this, but otherwise it's just an additional registry-specific thing that adds no value to end users.
I don't want to shove every possible feature you might want into these APIs. These are meant to be the lowest common denominator, bare minimum things that are inoffensive to implement and maintain, i.e. undifferentiated heavy lifting. If your registry supports some awesome feature that nobody else has, I think it belongs in a proprietary API.
As is, the only way to list repositories is to use _catalog. We told people not to use that, but offer zero alternative, so they use it. For untagged images, it's even worse. There is undefined behavior around what happens -- do images just disappear? Do they stick around forever? How do you know what exists? We really need a standard way to expose this information.
I already proposed these two additional APIs in #22, but I still think they're good places to start. They are simple and aligned with existing APIs, so they should be familiar to users. They expose information that already exists while being open to extensibility.
Reader: I implore you not to bikeshed this. If things can be simplified to make this more likely to be implemented, I'd love to hear it. If you want to attach a use case for a hypothetical future thing that may never exist, please weigh the expected value of that use case against the decreased likelihood of us reaching consensus.
Concretely, some structs:
```go
// ManifestDescriptor describes the content of a given manifest object.
type ManifestDescriptor struct {
	// MediaType is the media type of the object this schema refers to.
	MediaType string `json:"mediaType,omitempty"`

	// Digest is the digest of the targeted content.
	Digest digest.Digest `json:"digest"`

	// Size specifies the size in bytes of the blob.
	Size int64 `json:"size"`

	// Annotations contains arbitrary metadata relating to the targeted content.
	Annotations map[string]string `json:"annotations,omitempty"`

	// Tags contains a list of tags associated with this object.
	Tags []string `json:"tags,omitempty"`
}

// ManifestDescriptorList is a list of manifest descriptors for a given repository.
type ManifestDescriptorList struct {
	// Manifests references manifest objects.
	Manifests []ManifestDescriptor `json:"manifests"`
}
```
This is a trimmed down version of Index and Descriptor that omits anything that doesn't make sense, with Tags added. It might make sense to keep specs.Versioned and Annotations in ManifestDescriptorList. It might also make sense to just use Index directly, but then we'd need to figure out how to do Tags. I've omitted Platform from the descriptor because it's not something that is known at push time.
As an example:
GET /v2/<name>/descriptors/list

```json
{
  "manifests": [
    {
      "digest": "sha256:7a47ccc3bbe8a451b500d2b53104868b46d60ee8f5b35a24b41a86077c650210",
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 2035,
      "tags": ["latest", "v1"],
      "annotations": {
        "org.opencontainers.image.created": "1985-04-12T23:20:50.52Z"
      }
    },
    {
      "digest": "sha256:3093096ee188f8ff4531949b8f6115af4747ec1c58858c091c8cb4579c39cc4e",
      "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
      "size": 943
    },
    {
      "digest": "sha256:703218c0465075f4425e58fac086e09e1de5c340b12976ab9eb8ad26615c3715",
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 1201,
      "tags": ["v2"],
      "annotations": {
        "org.opencontainers.image.created": "2001-04-12T23:20:50.52Z"
      }
    }
  ]
}
```
Note the second manifest in particular. It is untagged. This is completely unreachable today via the registry API. Also note that it doesn't have an org.opencontainers.image.created annotation. This is metadata that is sometimes available within an image's config file, but it doesn't really make sense for a manifest list. If your registry cares enough to index an image's config file, it could expose this here.
If we wanted, we could define more optional annotations for things like push time, pull time, etc. Really anything. Vendor-specific annotations can use their own namespacing. This could also be a place for registries to surface user-specified metadata about artifacts. All of the annotations should be optional.
A registry may choose to populate a ton of annotations, but none should be required for conformance. Mandatory fields are digest, mediaType, size, and tags.
I've excluded blob descriptors from this list, as I don't think it makes much sense. Garbage Collection around blobs is pretty consistent across registries, and all blobs that are in the registry should be reachable through these manifests (ignoring GC strategies). If anyone thinks it makes sense to have a list of blobs accessible, I'd love to hear why, but I think it's out of scope.
Pagination should work identically to tags listing.
This is the same as from _catalog:

```go
type RepositoryList struct {
	Name         string   `json:"name"`
	Repositories []string `json:"repositories"`
}
```

With the added "name" from TagList to make parsing things slightly easier for clients.
The only thing we need to do differently from catalog is make this work for a repository and not for a registry. Perhaps a top-level thing could exist, for certain registries, but it certainly shouldn't be required if it doesn't make sense (e.g. for GCR, where everything is namespaced as gcr.io/<your-project-id>/...).
As an example:
GET /v2/library/repositories/list

```json
{
  "name": "library",
  "repositories": [
    "adminer",
    "aerospike",
    "alpine",
    "alt",
    "amazoncorretto",
    "amazonlinux",
    "arangodb",
    "backdrop",
    "bash",
    "bonita",
    "buildpack-deps",
    "busybox"
  ]
}
```
This should list repositories immediately under "library". For registries that support multiply-nested repositories, like GCR, you should be able to subsequently call GET /v2/library/bash/repositories/list to list any repositories under "library/bash". Non-nested registries could return 404 or an empty list.
Pagination should work identically to tags listing.
The only downside to this that I could see is that there's no place to stick arbitrary metadata like we would have in descriptors/list. Honestly, I think this is fine, but I'm open to feedback if someone disagrees. I don't mind having something like a RepositoryDescriptor, but it feels a bit silly. There's no digest for a repo, really (unless you go down the rabbit hole of treating a repo as an index), so this doesn't map cleanly onto existing concepts. Maybe it makes sense to have a top-level Annotations in a RepositoryList that just exposes the current repository's metadata, but I don't really think that's necessary. As is, this satisfies the requirement of being able to list all the contents of a registry, which is what I'm mostly aiming to do.
I don't think sorting or filtering are nearly as interesting for repositories as they are for descriptors.
Ah, look at the timing: https://www.docker.com/blog/open-sourcing-the-docker-hub-cli-tool/
I worry a bit that Docker will care less about implementing a standard thing for this.
@jonjohnsonjr Big thumbs up from me. I would lean towards having a RepositoryDescriptor rather than a list of strings, both for future-proofing and as a place for vendor extensions to be added without breaking the spec. Types of metadata include things like the star count and pulls in Hub, which would allow them to support the spec in their hub-tool.
Types of metadata include things like the star count and pulls in Hub, which would allow them to support the spec in their hub-tool.
That seems reasonable to me, especially if it unblocks Docker Hub.
I'll read through the detailed post above tomorrow. But I did want to quickly note the Docker Hub CLI. @justincormack and I talked about it a while back. Like all registries hoping for a standard, but needing to provide some tools for their customers, they had to release something to cope with the deletion of content. Basically, the issue behind the throttling and costs conversation. When we talked, this exact point came up: how do we take the time to invest in a common API, so each registry doesn't need to invest in their own, and customers don't have to deal with 6 different registry APIs?
@jonjohnsonjr the reason the Hub CLI tool is currently a standalone binary not built into the Docker CLI is the standardisation issue - I talked about this on the OCI call before Christmas. We want to release something that works across registries but it's a mess now.
the reason the Hub cli tool is currently a standalone binary not built into Docker cli is the standardisation issue - I talked about this on the OCI call before Christmas.
Oh yeah, I think it's completely reasonable -- don't get me wrong. Before, Docker Hub was in an awkward position where there was no way to do this, so my hope was that y'all would be extra motivated to throw weight behind some standardization here. Now, y'all are just in the same boat as the rest of us :) there are good reasons to have a separate CLI, e.g. to expose Docker Hub specific stuff.
We want to release something that works across registries but its a mess now.
Absolutely agree, which is why I'm pushing on this.
There's this tragedy of the commons where all (most?) registries now have a bespoke way to do the same thing, which is often good enough for a customer, but it hurts the ecosystem. If I want to write a tool that lists stuff in a registry (say, spinnaker), my options are:
That sucks for users, and I think OCI exists to solve this kind of problem.
cc @hdonnay @samuelkarp @bainsy88
Do y'all have any interest in fixing this?
I don't love the other examples because they impose additional storage requirements on registries.
While true, we will need a change to support the requirements of Notary v2. We are focused on two dimensions:
at a minimum we need: return a list of descriptors; allow that list to be paginated
This is a great start. I like the paginated list of descriptors, although the notary signature scenario likely calls for a bit more info. I'll capture it in the listing API requirements.
Tag info, either: a map from tags to digests (we could add this to /tags/list, maybe?), or a list of tags on these descriptors -- would need to add a field?
Also great. I think we all struggle with how to provide a history of digests for a given tag. As we move into the gated-mirror scenarios, I expect we'll see customers asking for "rollback" of a tag to a previous digest. When, not if, an update fails.
We should leave open for extension:
Having a list, which we can all prioritize will surely help. I suspect what might be a lower priority to some, might be a higher priority to others. So, hopefully, we can divide and conquer. Both, for the spec and the reference implementation.
our filesystem can't even list files or directories... this is bad.
Yup, hopefully, we finally have enough "pain" to invest in the gain.
I'd like some feedback from other registry operators.
With the newly formed CNCF distribution/distribution group, I've forwarded the links to these discussions. Hopefully, we'll get more engagement and feedback.
I suspect the Notary v2 work, that must land this year, will be enough of a compelling event to provide feedback, and be a compelling customer need that we'll make good progress. It's not just my hope to leverage Notary to achieve these list goals, rather it's a requirement to meet the Notary v2 requirements.
_catalog and untagged manifests...
Let's capture these in the requirements. I think all registries have proprietary APIs to handle these scenarios. I'm hopeful we can take our experience for what we like and don't like to make a spec'd api we can all implement.
I already proposed these two additional APIs in #22...
I'm going to do the PM thing and ask we start with a set of requirements. It's really helped defuse the debates over different designs. Rather than argue which design is better, we can argue which designs meet the prioritized requirements, with the right usability.
BTW, I do like the proposal to return a ManifestDescriptor struct. I'd just like to match it to a set of requirements. :)
@sudo-bmitch
a place for vendor extensions
This is a great point. We should also capture the requirements.
Types of metadata include things like the star count and pulls in Hub, which would allow them to support the spec in their hub-tool.
To balance the boil-the-ocean concern, I'm actually hopeful.
I think OCI exists to solve this kind of problem.
Yuppppppp!!!!!
We can do this…
In the spirit of "no time like the present", and "I've got to run, it's Friday night": Here's a very rough structure to start the conversation: https://github.com/opencontainers/distribution-spec/pull/229
I'm going to do the PM thing and ask we start with a set of requirements.
I don't think requirements are actually useful here. We aren't building a product -- we're trying to achieve consensus on extending a protocol. If that protocol is too burdensome for everyone to implement, we've failed to make any progress. What would be useful is a set of limitations, e.g. what is everyone currently capable of doing or willing to implement? What kind of features would be impossible for some people to implement?
I'm looking for the lowest common denominator that is still useful. Everything I've proposed is already part of the spec, so everyone should be able to implement this unless they've made some very interesting choices in implementation (and I'd really like to hear from them, if so).
I guess we could express some of these limitations as requirements, if that would help you to understand:

- Must not require complex logic, i.e. a static filesystem implementation should be able to implement this.
- Must not require storage of additional data, i.e. we only expose information that is already necessary to implement the distribution spec.
I suspect the Notary v2 work, that must land this year, will be enough of a compelling event to provide feedback, and be a compelling customer need that we'll make good progress. It's not just my hope to leverage Notary to achieve these list goals, rather it's a requirement to meet the Notary v2 requirements.
This doesn't make any sense to me. To borrow your metaphor again, you somehow see a dependency between ls and openssl? How could openssl possibly help us implement ls? I can kind of understand that openssl might depend on ls, but only because it's a fundamental, widely adopted, basic building block of POSIX -- not because the ls authors shoehorned signatures into the filesystem. If notary somehow depends on these listing APIs, that's fine, but these listing APIs need to make sense in a world where notary v2 does not exist, e.g. the current reality.
Here's a very rough structure to start the conversation: #229 -- "Provide filtering by artifactType"
I don't understand how this can be a requirement when it doesn't exist?
@jonjohnsonjr More than happy to be involved in this.
I think I agree that, as a first pass, just having standardisation around the simple operations of listing repositories and listing descriptors per repo is the right starting point. Also completely get the need to open this up for extension.
I guess we could express some of these limitations as requirements, if that would help you to understand:
- Must not require complex logic, i.e. a static filesystem implementation should be able to implement this.
- Must not require storage of additional data, i.e. we only expose information that is already necessary to implement the distribution spec.
I think the above is really important; this should work with a vanilla registry. If you take distribution running on S3, that is really not geared up for performant listing. Having the metadata in object storage really limits how quickly these lists can be produced. It will be challenging to make that performant regardless, so adding additional complexity will add to the pain.
Storing additional data would also introduce the problem that you would need a process that goes and adds all the new information to storage for all existing repos/images.
Having the metadata in object storage really limits how quickly these lists can be produced. It will be challenging to make that performant regardless, so adding additional complexity will add to the pain.
Yes this is exactly what I'm thinking about. If your registry implementation happens to have a backend that can be easily paginated, sorted, or filtered, then of course these additional features would be useful (and likely spare you some cycles), but requiring these features is burdensome on implementations that don't have their metadata store set up for this already. I don't think Docker Hub supports tag pagination, even today.
If we have any requirements that aren't trivially implementable by existing registries, this will either not get implemented or take years to roll out because registries will need to perform migrations, backfills, etc. Another example is quay's support of schema 2 images. As I understand it, adding support wasn't particularly difficult, but the backfill process took a long time.
Do we see edge-level infrastructure and root-level infrastructure implementing the same APIs?
Do we see edge-level infrastructure and root-level infrastructure implementing the same APIs?
I'm not sure what kind of topology you have in mind, but I think it's reasonable to expect anything that implements the tag listing API should implement these APIs as well.
If clients or caches wanted to expose something similar to this, I think it's a nice way to discover content (e.g. it should be compatible with index.json in an image layout), but I'm only thinking about this in the context of content discovery as a registry client.
I'm looking for the lowest common denominator that is still useful
How can we define and agree on useful? We can title the PR anything you'd like. But, if we can agree on what we're trying to solve, we can then have an actionable conversation on how we're solving it.
a set of limitations
We can definitely add this to the PR. This was one of the issues with the _catalog API. Not all registry operators could implement the auth at the root. We did a similar requirement for Notary v2, where the spec must support external key management solutions. We also captured the vendor's ability to extend the registry list APIs. This would/should allow registry operators to move their existing capabilities, which may be unique, to a shared API.
notary v2 - requirements
As registry operators and product owners, there are lots of great features we'd all like to add. For the listing API, we've each added APIs to unblock our customers. So, it's hard to justify a new listing API when we have so many other top-priority requirements. What I'm suggesting is that Notary v2 has strong business justification from most, if not all, registry operators and products. While we might have been able to create a more focused solution, just for signing, we're taking a more generic approach so that we can support the signing of images and all artifact types, including reference types like an SBoM, Singularity, WASM, Helm, Nydus and others. By bundling the listing API, which is a pri-0 requirement for the end-to-end scenarios, we can likely deliver a common solution that meets all our needs. Notary v2 is based on cross-registry integration. Kinda hard to do that if we don't have a common listing API.
Here's a very rough structure to start the conversation: #229 -- "Provide filtering by artifactType"
I don't understand how this can be a requirement when it doesn't exist?
The artifactType = manifest.config.mediaType for image-manifest. For oci.artifact.manifest, we're just proposing lifting it from a buried attribute to a first-class attribute. I'll be making some additional adjustments to the proposal, based on feedback from @dmcgowan and some others, to shim in annotations, providing the additional flexibility to pull a specific signature from the collection of artifactType=cncf.notary.signature.v2
Must not require complex logic, i.e. a static filesystem implementation should be able to implement this.
I totally get this one. Changing data storage, adding indexing is a major change. Somewhere we had a reference to minimizing storage changes. However, the need to support adding multiple signatures without changing the digest and/or tag of the artifact being signed meant we needed to add a reverse lookup (reference) model. But, this is where the business need will drive the priority to get it backlogged.
Now, in comparison, supporting Notary v1 is quite complex, and doesn't meet the needs, so we actually think the net work is smaller than it could be
Do we see edge-level infrastructure and root-level infrastructure implementing the same APIs?
@stevvooe Are you referring to on-prem, or IoT scenarios?
How can we define and agree on useful?
By having this discussion. I can PR my proposal, if you'd rather do it on a PR, but it's all just markdown and comments in the end.
But, if we can agree on what we're trying to solve, we can then have an actionable conversation on how we're solving it.
I think it's somewhat obvious -- there's a big /v2/_catalog-shaped hole in the registry API, and a big ManifestDescriptor-shaped hole next to it. I can spell this out more in a PR, but I'm trying to finish a clearly incomplete thing, not add something novel.
I'm happy to make concessions to support other use cases, but I think it's pretty obvious what needs to be done, and I care very little about the exact implementation details. If you're telling me that ACR will not implement repo or manifest listing APIs unless we merge the artifacts and notary stuff first, I'll be pretty frustrated, but that's exactly the kind of feedback I'm looking for.
By bundling the listing API, which is a pri-0 requirement for the end to end scenarios, we can likely deliver a common solution that meets all our needs.
There doesn't seem to be any mention of listing currently. Are you saying you'd like to add a rider to the notary stuff that registries must implement a listing API to be compliant? Or is this already a P0 requirement that I'm missing?
I think that's fine, as long as we converge on something, but I don't want to block this on notary requirements because this proposal would benefit registries and clients that can't or won't implement notary.
we're just proposing lifting it from a buried attribute to a first class attribute
If it's a requirement, you've now either violated the "no additional storage" requirement (which requires a backfill), or you require implementations to read and parse every single manifest to retrieve this field, which would be a performance nightmare.
... to shim in annotations
Giving registries the option to surface top-level annotations via the ManifestDescriptor as I've proposed should satisfy this use-case, right?
However, the need to support adding multiple signatures without changing the digest and/or tag of the artifact being signed meant we needed to add a reverse lookup (reference) model.
I don't agree with this -- that's just one possible implementation that satisfies the requirements.
Adding a reverse lookup for metadata is an interesting proposal. Adding weak references to the content model is also an interesting proposal. I'd like to see the semantics of both of those things defined and consider the consequences of making a breaking change to the image-spec before passing any judgement. "Notary needs this" is not really a convincing argument for breaking every client and registry on the planet to me, personally.
But, this is where the business need will drive the priority to get it backlogged.
What does this mean?
Now, in comparison, supporting Notary v1 is quite complex, and doesn't meet the needs, so we actually think the net work is smaller than it could be
Based on what you've said above, it seems that Notary v2 is also quite complex?
Much of this might be easier for discussion on the OCI call, to which I see you've added an agenda item. I'll reply here for the breadth of conversation, and those that can't attend the call. The overall gist is we really, really want to land this. We just don't want to ship yet another API that doesn't get implemented because, ...we didn't capture the needs...
I think it's somewhat obvious
Yup, I get it. There are a few requests trying to converge. Rather than run parallel requests on the same thing, I'm just asking that we step back and ensure we're capturing all the known things, so we can get adoption and spec-it, code-it, and have all the registries ship-it.
If you're telling me that ACR will not implement repo or manifest listing APIs unless we merge the artifacts and notary stuff first, I'll be pretty frustrated, but that's exactly the kind of feedback I'm looking for.
There doesn't seem to be any mention of listing currently. Are you saying you'd like to add a rider to the notary stuff that registries must implement a listing API to be compliant? Or is this already a P0 requirement that I'm missing?
It's not that we don't want to incorporate this. I think all registries would like a standard. On the CNCF distribution call, we briefly discussed this as well. The CI/CD vendors are in an even more difficult space, as they need to support multiple registries; each must write different code paths for each registry. Rather than assume we know all the use cases, let's capture them first.

As noted above, we and other vendors feel we must ship a Notary v2 solution in the coming months. It must have a discovery API to support the ability to push, discover, and pull signatures. There's a PR in the staged notary/distribution repo that has a very rough prototype for it. However, I would not look deeply, as we have a newer prototype that uses the OCI Artifact Manifest, which is also work-in-progress and not ready for review as we're still iterating ourselves. We're using the prototypes to validate the specs and scenarios.
I don't want to block this on notary requirements because this proposal would benefit registries and clients that can't or won't implement notary.
The approach we're taking is to explicitly design these as independent capabilities that enable a breadth of scenarios, including Notary. If we've done the design correctly, implementing a few, but important, changes will enable a wide range of scenarios.
To your other point, it would be great to know what would block a registry from wanting to implement these. We believe all registries need an artifact signing solution and having a common spec so content can move within and across all OCI conformant registries is the customer need we must meet. It just so happens we need discovery APIs to complete the experience, so we should get the listing API as a result.
If it's a requirement you've now either violated the "no additional storage" requirement (which requires a backfill) or require implementations to read and parse every single manifest to retrieve this field, which would be a performance nightmare.
Backfill is a huge issue. It will take some more thought, but I believe we have a design that says only new artifacts that reference existing artifacts will use this new storage/indexing requirement. So, no backfill would be required.
Adding a reverse lookup for metadata is an interesting proposal. Adding weak references to the content model is also an interesting proposal. I'd like to see the semantics of both of those things defined and consider the consequences of making a breaking change to the image-spec before passing any judgement. "Notary needs this" is not really a convincing argument for breaking every client and registry on the planet to me, personally.
The design specifically calls out not changing `image.manifest` or `image.index`, so we're being super careful not to break anything. It will be an add, not a break. If the registry or client doesn't have the capability, they simply wouldn't get Notary v2 benefits. This enables the toolchains to opt in over time, based on business needs.
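A small illustration of the "add, not break" claim: a signature attached as a separate referencing artifact leaves the signed manifest's bytes, and therefore its digest and tags, untouched. The field names below (`artifactType`, `subject`) are illustrative only; the proposal's exact schema was still in flux at the time of this thread.

```python
import hashlib
import json

# An existing image manifest already in the registry (content abbreviated).
image_manifest_bytes = json.dumps(
    {"schemaVersion": 2, "config": {}, "layers": []}, sort_keys=True
).encode()
image_digest = "sha256:" + hashlib.sha256(image_manifest_bytes).hexdigest()

# A signature pushed as a *separate* artifact that points back at the image.
# The reference-field name "subject" is illustrative; the point is that the
# link lives only in this new object.
signature_artifact = {
    "artifactType": "application/vnd.example.signature.v1+json",
    "subject": {
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        "digest": image_digest,
    },
}

# The original manifest bytes are untouched, so its digest (and any tags
# resolving to it) are unchanged after the signature is attached.
assert "sha256:" + hashlib.sha256(image_manifest_bytes).hexdigest() == image_digest
print(signature_artifact["subject"]["digest"] == image_digest)
```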
But, this is where the business need will drive the priority to get it backlogged. What does this mean?
As much as we'd like to add better ways to do the same capabilities, we're all over-committed to delivering new or enhanced capabilities to our customers. All registries have proprietary listing APIs. So, technically, we're not blocked. CI/CD vendors have a fun time dealing with all the differences, but they're not blocked either. So, from a business need, it's just difficult to prioritize a refactoring. Particularly if the new API doesn't account for the behavior we all implement in our proprietary APIs. Thus, the need for vendor extensibility. I'll add more to the requirements later today/tomorrow to call this out.
We are blocked in delivering a standard artifact signing solution that spans the various cloud and registry providers. This is something we're all hearing from our customers and it will get prioritized.
Based on what you've said above, it seems that Notary v2 is also quite complex?
There will be a few key changes. But, they won't be unique to Notary, and we believe they will be far easier than implementing Notary v1, which doesn't satisfy the requirements of content signing. So, it's not as complex, and provides far more benefit. But, that's still for us to finish identifying so it's more obvious.
We just don't want to ship yet another API that doesn't get implemented because, ...we didn't capture the needs...
Sure, here's the need, IMO:
All manifests in a registry should be discoverable through the registry API.
I would be satisfied with just a list of digests for manifests and strings for repositories, but we can obviously do better than that.
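As a concrete sketch of the "better than that" option: a response of descriptors rather than bare digest strings. The endpoint and the `manifests` envelope here are assumptions for illustration, not spec; the descriptor fields (`mediaType`, `digest`, `size`, `annotations`) follow the OCI image-spec descriptor definition.

```python
import json

# Hypothetical response body for a manifest-listing endpoint that returns
# OCI descriptors rather than bare digest strings.
response_body = json.dumps({
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "digest": "sha256:" + "a" * 64,
            "size": 1201,
            "annotations": {"org.opencontainers.image.created": "2021-01-15T10:00:00Z"},
        },
        {
            "mediaType": "application/vnd.oci.image.index.v1+json",
            "digest": "sha256:" + "b" * 64,
            "size": 4096,
        },
    ]
})

def manifest_digests(body):
    """Extract the bare digest list, for clients that only need that much."""
    return [d["digest"] for d in json.loads(body)["manifests"]]

print(manifest_digests(response_body))
```

Clients that only want a digest list can ignore the extra fields, so the descriptor form strictly subsumes the minimal one.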
but I believe we have a design that says only new artifacts that reference existing artifacts will use this new storage/indexing requirement. So, no backfill would be required.
This really tunnel-visions on the problem. It is much simpler to make these things optional than to codify implementation details like this.
All registries have proprietary listing APIs. So, technically, we're not blocked. CI/CD vendors have a fun time dealing with all the differences, but they're not blocked either.
How could you possibly know this? It's also not true. I don't think it's fair to ignore registry implementations that aren't from giant cloud vendors.
So, from a business need, it's just difficult to prioritize a refactoring. Particularly if the new API doesn't account for the behavior we all implement in our proprietary APIs. Thus, the need for vendor extensibility. I'll add more to the requirements later today/tomorrow to call this out.
Sure, but we should also consider the difficulty of the implementation. Adding support for your complex notary and artifacts proposals is a huge undertaking, whereas these listing proposals are dead simple and shouldn't (my hope) be difficult for any registry to satisfy, unless we keep attaching pork fat to it.
Revisiting this after OCI-Artifact #29. Over there, we are proposing an API to query artifacts by their connection to existing manifests, including filtering by `artifactType`. It seems like a hole in the spec if we add those APIs for artifacts connected to other digests, but don't have a higher-level manifest list API that supports filtering on `artifactType`.
I'm not sure of the timing between the 1.0 Distribution and Artifact specs, but in my ideal world we'd see both go GA this year, so it would be useful to include the `artifactType` filter in the manifest list API. If it doesn't make sense to add that now, while Artifact is still working through its design process, then a placeholder for where it would go in a future release, plus defined behavior for registries that don't support the filter, would help.
I'm fairly opposed to adding `artifactType` anywhere, but assuming that lands, I think having `artifactType`-specific filtering isn't really the right approach. I'd prefer a generic way to filter by arbitrary properties, such that you could filter by annotations, `artifactType`, etc.
I'd also not want to make filtering mandatory if we can avoid it. It's convenient but also entirely possible to implement client-side.
I'm fairly opposed to adding `artifactType` anywhere, but assuming that lands, I think having `artifactType`-specific filtering isn't really the right approach. I'd prefer a generic way to filter by arbitrary properties, such that you could filter by annotations, `artifactType`, etc.
I like the idea of flexibility, but defer this decision to the registry operators that have to implement this at scale.
I'd also not want to make filtering mandatory if we can avoid it. It's convenient but also entirely possible to implement client-side.
Agreed. That follows along with many of the other registry APIs, like pagination.
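To show why optional server-side filtering is workable: given the (possibly paginated) descriptor list, a client can apply its own predicates. The descriptor shape and the `artifactType` placement below are assumptions for illustration.

```python
# Client-side filtering over a hypothetical descriptor list: if a registry
# doesn't implement server-side filters, the client applies them itself.
def filter_descriptors(descriptors, artifact_type=None, annotations=None):
    annotations = annotations or {}
    out = []
    for d in descriptors:
        # Skip entries that don't match the requested artifactType, if any.
        if artifact_type and d.get("artifactType") != artifact_type:
            continue
        # Every requested annotation key/value must match exactly.
        if any(d.get("annotations", {}).get(k) != v for k, v in annotations.items()):
            continue
        out.append(d)
    return out

sample = [
    {"digest": "sha256:" + "1" * 64, "artifactType": "application/vnd.example.sbom"},
    {"digest": "sha256:" + "2" * 64, "annotations": {"team": "base-images"}},
]
print(filter_descriptors(sample, artifact_type="application/vnd.example.sbom"))
```

The trade-off is bandwidth: the client pulls the full list, which is why aggressive caching and pagination matter if filtering stays client-side.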
Sure, here's the need, IMO: All manifests in a registry should be discoverable through the registry API.
Is this captured well enough here: "3. A user can get a list of manifests within a given registry/namespace."
but I believe we have a design that says only new artifacts that reference existing artifacts will use this new storage/indexing requirement. So, no backfill would be required. This really tunnel-visions on the problem. It is much simpler to make these things optional than to codify implementation details like this.
The clarification above was related to the `oci.artifact.manifest` proposal, for providing a link to an existing artifact in a registry. That particular design doesn't require backfill, as it's a new artifact.
Your comment on optional and codifying an implementation is more relevant to this larger conversation.
There are two approaches:
Approach 1 above is somewhat moot, as we can't agree that push should be core to a registry :(. So we're basically saying most things are optional to an implementation of the distribution-spec. Regardless of what gets added to the distribution-spec, we'll need a definition for non-supported behavior (e.g., HTTP 501 for spec features a registry doesn't support).
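From a client's perspective, the 501 convention might look like the sketch below. The endpoint path and the fallback are assumptions for illustration; `fetch` is a stand-in for whatever HTTP client the tooling uses, returning `(status_code, parsed_json)`.

```python
# Sketch of "HTTP 501 for spec features a registry doesn't support": try the
# hypothetical manifest-listing endpoint, and fall back to per-tag resolution
# when the registry doesn't implement it.
def list_manifest_digests(fetch, repo):
    status, body = fetch(f"/v2/{repo}/manifests/list")  # hypothetical endpoint
    if status == 501:
        # Registry doesn't support listing; resolve each tag individually.
        _, tags = fetch(f"/v2/{repo}/tags/list")
        return sorted(
            fetch(f"/v2/{repo}/manifests/{t}")[1]["digest"]
            for t in tags["tags"]
        )
    return [d["digest"] for d in body["manifests"]]

# Fake registry that implements only the tag APIs; the manifest-by-tag
# response is simplified to a dict carrying the resolved digest.
def fake_fetch(path):
    if path.endswith("/manifests/list"):
        return 501, None
    if path.endswith("/tags/list"):
        return 200, {"tags": ["latest"]}
    return 200, {"digest": "sha256:" + "c" * 64}

print(list_manifest_digests(fake_fetch, "library/busybox"))
```

Note the fallback can only see tagged manifests, which is exactly the gap the listing API is meant to close.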
I've queued up a conversation in our next call to discuss how we proceed with distribution-spec features and extensions.
This is probably not the right place to leave this comment, but I haven't found a better one. Suggestions as to better venues are welcome.
What I would find really useful, as a maintainer of base images that other teams derive from in order to build release artifacts, is the moral equivalent of `docker images --filter "label=com.example.version=1.0"` spanning an entire registry (modulo what's visible to the authenticated account that's asking). Maybe that risks becoming "child of `_catalog`"; but in its absence, I have no good way of tracing a defective base image forward to the artifacts based on it.
I'm aware that there are lots of other things that one might want to index on and filter by. But this is, at least, not an abstract need, nor is it a special case of "find things I can garbage collect to reduce my spend". Labels are, in practice, the user-defined metadata that gets propagated along a sequence of derivation steps. (Assuming you don't clobber them; I think this is where Label Schema and OCI Annotations went wrong, by trying to assign label names that describe the "final" image, and therefore have to be clobbered in every Dockerfile that layers in more tools or configurations. But that's a problem with conventional usage patterns, not with the tooling.) In any case, they're a reasonable thing to want to apply equality / set membership filters to.
One might hope to use the same syntax as Kubernetes label selectors (https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors), which at least one major cloud vendor seems to have decided are the only list filtering mechanism worth implementing at scale (https://cloud.google.com/run/docs/reference/rest/v1/namespaces.revisions/list).
So, yeah. I'd be pretty happy with `GET /v2/digests?labelSelector=foo%3Dbar,baz%3Dquux`. If you want to give me a little bit more than `<registry>/<repository>@sha256:<digest>` for each image, I'd take the creation date and the full set of labels on the image. Anything more than that is gilding the lily.
+1 this one to bubble up in folks' queues :)
This is pulling out some comments made in #22, and a bit related to #114. It would be useful to have some kind of `/v2/<name>/manifests/list` API, similar to what we have for tags today. Returned from that should be all manifest digests within that repository.

This can be useful for building user scripts that implement a GC policy outside of the registry: looking for dangling manifests that no tag points to, and calling the manifest delete API when user-defined criteria are met. For example, a user script could examine the manifest and referenced config, looking for labels indicating the image was part of a nightly build, and remove any manifests pointing to a build two weeks old or older.