jlsherrill opened 2 weeks ago
`pulp rpm content upload` does exactly that.
Concur:
```
$ pulp rpm content upload --help
Usage: pulp rpm content upload [OPTIONS]

  Create a content unit by uploading a file

Options:
  --chunk-size TEXT     Chunk size to break up rpm package into. Defaults to
                        1MB
  --relative-path TEXT  Relative path within a distribution of the entity
  --file FILENAME       An RPM binary  [required]
  --repository TEXT     Repository to add the content to in the form
                        '[[<plugin>:]<resource_type>:]<name>' or by href.
  --help                Show this message and exit.
```
Oh great! I missed that in the docs (maybe I overlooked it?)
Then I guess the only thing missing is the ability to upload multiple RPMs rather than just one?
The big difference between this and just running the command multiple times is that if you were to run this 100 times, you would get 100 new repo versions
if you want to add 100 files as an atomic operation, using a file: repo and syncing them is going to be more efficient. I'm not sure how the REST-API would handle trying to stream, say, 100 files in one enormous request.
Making and removing versions is pretty straightforward - it's publishing that can take time, in pulp_rpm.
Doesn't it use chunking to upload each file? Katello's CLI supports this (also via chunking).
Yes, making/removing versions is very straightforward, but if I need to tell a user how to upload a local set of RPMs to a server, a single command versus running this command 100 times in a bash loop and then deleting 99 repo versions isn't ideal.
This is all about usability.
Hm, ok. So if we had something like `--directory=./foo --cleanup-versions True`, that would (in the CLI process) call the pulp_rpm/restapi.html#tag/Content:-Packages/operation/content_rpm_packages_create API in a loop for all directory contents ending in `.rpm`, remembering the "current version at start" and cleaning up all interim versions for the user - that would do the deed, yeah?
Could add a `--dry-run` as well, to output "this will add the following RPMs, taking up N GB, to repository foo".
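A `--dry-run` like that only needs local filesystem information, so it could run without any server calls. A minimal sketch of the size accounting, where `dry_run` and the output wording are hypothetical (this is not an existing pulp-cli flag):

```shell
#!/bin/sh
# Hypothetical sketch of the proposed --dry-run: enumerate the .rpm files in
# a directory and total their sizes before touching the server at all.
dry_run() {
    dir="$1"; repo="$2"; total=0
    for f in "$dir"/*.rpm; do
        [ -e "$f" ] || continue            # no matches -> glob stays literal, skip
        size=$(wc -c < "$f")
        total=$((total + size))
        echo "would add: $f ($size bytes)"
    done
    echo "this will add the above RPMs, taking up $total bytes, to repository $repo"
}
```

Reporting bytes rather than "N GB" keeps the sketch exact; a real implementation would pretty-print the total.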
Oh, the https://docs.pulpproject.org/pulp_rpm/restapi.html#tag/Content:-Packages/operation/content_rpm_packages_create API doesn't support chunking; that's not good. There are limits to buffer sizes on web servers that prevent RPMs from being uploaded in a lot of cases. On Satellite, for example, you HAVE to use a chunked API in order to upload anything over ~5 MB, which is pretty small.
I don't think that API is the right one to use in this case (or any case, for the actual file upload IMO).
Forgot to add: if you want to create 100 versions and delete 99 of them, that would be okay, but the APIs today support just creating 1 version with the 100 RPMs, I thought, so that seems like the simpler option?
The problem we have here is that "orphan-upload" (i.e. not directly into a repository) collides really badly with RBAC, orphan-cleanup, and "who owns Artifacts" when they are de-duplicated entities. We're going to have to think pretty hard about how to address this.
Upload absolutely supports chunking. (I never remember the versions when something was added...) And the CLI handles the Upload object transparently for the user.
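For reference, the chunking itself is plain client-side file splitting; each piece is then sent to a pulp Upload object so no single request exceeds the web server's body limit. A sketch of just the local split/reassemble round trip with coreutils `split` (`chunk_file` is a made-up helper, not pulp-cli code, and the server interaction is omitted):

```shell
#!/bin/sh
# Sketch of the client side of a chunked upload: cut the package into
# fixed-size pieces. In the real workflow each chunk would be PUT to an
# Upload object with a Content-Range header; here we only show that the
# chunks losslessly reconstruct the original file.
chunk_file() {
    file="$1"; chunk_size="$2"; outdir="$3"
    # -b: bytes per chunk, -d: numeric suffixes (chunk.00, chunk.01, ...)
    split -b "$chunk_size" -d "$file" "$outdir/chunk."
}
```

Numeric suffixes sort lexicographically, so `cat chunk.*` reassembles the chunks in the correct order.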
At this point I'd say: create a temporary repository with `retain_version=1`, upload all the packages there, then move them all to the destination repository in one go and clean up the tmp repo. And that is a process the CLI can do as a high-level workflow.
I don't quite understand the purpose of using a temporary repository to host the units? You can already add them to a repository here: https://pulpproject.org/pulp_rpm/restapi/#tag/Repositories:-Rpm/operation/repositories_rpm_rpm_modify
The benefits are: you only create a single new repository version (on the target repository), while never needing to deal with orphans. The whole idea of users needing to create orphaned content prior to using it is not sane in a multi-user environment.
Could you explain why it's not sane?
Orphans can fall victim to orphan cleanup at basically any time. At the same time, because content is shared (and therefore must be immutable), it cannot be owned by someone. The only way to prevent it from being an orphan is to put it in a repository (usually owned by someone). There have been a lot of discussions in the past; I found at least some here (at different states of implementation):
https://discourse.pulpproject.org/t/multi-tennancy-and-the-question-of-owning-content/237
https://discourse.pulpproject.org/t/pulp-rpm-content-create-as-non-admin-user/1216
https://discourse.pulpproject.org/t/normal-user-got-permission-problem-when-uploading-big-rpm-file-as-artifact/1023
I thought orphan cleanup only cleaned up units that were older than `orphan_protection_time`? As long as the user uses a reasonable `orphan_protection_time`, this shouldn't be an issue?
You can always provide the time with the call and that can be zero. Because "I really need this artifact to leave the system now." But in the end the whole idea of letting users care about orphans could have been avoided, because Pulp should be about "Content in Repositories" instead of "Repositories and Content".
To me that is a big edge case, and one that is a problem with the API/server, not one that should be solved via a CLI workflow. I could see a server-side setting for minimum protection time, but to me this is not the correct place to solve this problem.
This thread reminds me of one of the first ideas I had for pulp-cli: what if we could write a recipe and use it as a parameter for the CLI? Like:
Could something like: "Upload all rpm packages from $PWD" help you somehow @jlsherrill, or do you need this "sugar" on the API level?
I believe that users would benefit from being able to just run

```
pulp rpm content upload --dir $PWD --repository my-repo
```

and have all the RPMs added to the repository in one repository version.
Was playing with the existing functionality today; let me record here for posterity.
```
wget -r -np -R "index.html*" https://fixtures.pulpproject.org/rpm-signed/
cd fixtures.pulpproject.org/rpm-signed
pulp rpm repository create --name uploaded --retain-repo-versions 1
pulp rpm distribution create --name uploaded --repository uploaded --base-path uploaded
for f in *.rpm; do echo $f ; pulp rpm content -t package upload --repository rpm:rpm:uploaded --file "${f}" ; done
pulp rpm publication create --repository uploaded
wget http://localhost:5001/pulp/content/uploaded/
```
We discussed at pulpcore meeting today. Here's my summary of what I learned and what I think the plan is:
Do we want to build this CLI functionality without rethinking any server-side APIs? Yes
Therefore the only viable path is to have the CLI:

- create a temporary repository
- upload each `.rpm` from the `foo` directory to that repo
- use `copy` (rpm only API) or `modify` (all plugins generic API) to copy everything into the destination repo

Also here's some info on the desired timeline and usage: RPM is all we really need for now.

@pulp/core FYI
After more discussion with @jlsherrill, the issue with this plan is that, architecturally, the end user performing the upload can't make calls to create or delete repos. I'm going to schedule us a 30-minute call to talk over some options; hopefully that's helpful.
@pulp/core
Well that's...a shame. Um, I have a working POC - but it absolutely requires the ability to create a repository under the invoking user's credentials.
For reference, here's example output from the current approach:
```
(oci-env) (994_directory_upload) ~/github/Pulp3/pulp-cli $ pulp rpm repository create --name upload
(oci-env) (994_directory_upload) ~/github/Pulp3/pulp-cli $ pulp rpm content upload --directory ./signed/fixtures.pulpproject.org/rpm-signed/ --repository rpm:rpm:upload
About to upload 35 files for upload into tmp-repository uploadtmp_upload_de488023-3e4c-4e4d-878f-fd9d76fc1f72
Started background task /pulp/api/v3/tasks/019050cd-2c15-7b7c-bf9a-6d985fd7827b/
Done.
Uploaded ./signed/fixtures.pulpproject.org/rpm-signed/crow-0.8-1.noarch.rpm...
Started background task /pulp/api/v3/tasks/019050cd-303a-73df-8324-adb8b96891a1/
Done.
Uploaded ./signed/fixtures.pulpproject.org/rpm-signed/cheetah-1.25.3-5.noarch.rpm...
[lots of uploads happen...]
Started background task /pulp/api/v3/tasks/019050cd-d0f0-75a9-813e-11d4787b2ab0/
Done.
Uploaded ./signed/fixtures.pulpproject.org/rpm-signed/camel-0.1-1.noarch.rpm...
Creating new version of repository upload
Started background task /pulp/api/v3/tasks/019050cd-d653-7bde-a5f5-b5be967a6b7d/
Done.
Created new version in upload: /pulp/api/v3/repositories/rpm/rpm/019050cd-1490-7e65-94e3-616225428889/versions/1/
Removing tmp-repository uploadtmp_upload_de488023-3e4c-4e4d-878f-fd9d76fc1f72
Started background task /pulp/api/v3/tasks/019050cd-d90f-7b9f-b2c5-f4e08be8fe27/
Done.
(oci-env) (994_directory_upload) ~/github/Pulp3/pulp-cli $
```
With the exception of the detail that the CLI is responsible for doing these things, we had this same discussion with COPR and came to the same conclusion - that if you're constructing a new release, you want to upload your RPMs into some temporary holding area (a repository) and then transfer all of them into the main repository in one operation later on.
@dralley Yeah, the current state of the PR is "pretty close" to what COPR is asking for. However, there's no (current) way to get here starting from "a user who can upload content but is not allowed to create repositories". If there existed a subclass of RpmRepository, something like RpmTmpRepository, that could have its own set of RBAC controls, that might work? An RpmTmpRepository would be a limited version of RpmRepository - no remote allowed, can't be sync'd, retain-versions==1, refuses to be published or distributed (however that would be implemented). Would be great if it could be automatically reaped, eventually? Is it obvious I am making this up?
That is, of course, a significantly new kind-of Thing, and A LOT more work than #1000 is. And getting it working, tested, and released in a few weeks, with a major holiday between now and then, would be...exciting.
Then give the user repo-creation... Would this be such a bad idea? Possible solutions (server side):
Why would it be content agnostic? In many cases (including COPR's specifically) the temporary repository should be publishable on its own so that it would be possible to test the packages before making them live.
That requires that it be a standard typed repository, and in any case an untyped repository would be something completely new and different from what currently exists.
Proposal:
More discussion incoming tomorrow, will add minutes here.
> Why would it be content agnostic? In many cases (including COPR's specifically) the temporary repository should be publishable on its own so that it would be possible to test the packages before making them live.
> That requires that it be a standard typed repository, and in any case an untyped repository would be something completely new and different from what currently exists.
What you describe here is a completely different workflow, because here the "temporary repository" would not be an intermediary asset of the single CLI call which it will delete in the end, but a created artifact you want to use. So in that case, you'd need to build the workflow on a higher level. Creating the temporary repository, filling it, and then moving its content to the final destination is no longer an atomic step.
More discussion happened today. Some notes:
Today's conclusions:
"ggainey to provide list of REST calls" - see https://github.com/pulp/pulp-cli/pull/1003#issuecomment-2197191093
One more thought: pulp_container implemented pending_blobs and pending_manifests in order to allow `podman push` to first push a bunch of blobs, then push a manifest needing them, and (optionally) combine manifests into a manifest list after that. Only the last object gets a tag and is added (with all its dependents) to the repository. Maybe we can do something similar here.
This is basically content that belongs to a repository (which solves the RBAC side) but is not part of it yet (don't collect garbage; create a single new version on an event).
Summary
Today with pulp-cli, if a user wants to upload 3 rpms, they have to run:
```
pulp artifact upload --file "$PKG1"
pulp artifact upload --file "$PKG2"
pulp artifact upload --file "$PKG3"
```
Then, for each of those, you have to grab the artifact href and fetch the sha256:
```
pulp show --href "${ARTIFACT_HREF_1}" | jq -r '.sha256'
pulp show --href "${ARTIFACT_HREF_2}" | jq -r '.sha256'
pulp show --href "${ARTIFACT_HREF_3}" | jq -r '.sha256'
```
Then for each of those, they have to use the sha256 to create a content unit and grab the unit href:
```
PACKAGE_HREF1=$(pulp rpm content create --sha256 "${ARTIFACT_SHA256}" | jq -r '.pulp_href')
PACKAGE_HREF2=$(pulp rpm content create --sha256 "${ARTIFACT_SHA256}" | jq -r '.pulp_href')
PACKAGE_HREF3=$(pulp rpm content create --sha256 "${ARTIFACT_SHA256}" | jq -r '.pulp_href')
```
Finally you can add these to a repository:
```
TASK_HREF=$(pulp rpm repository content modify \
  --repository "${REPO_NAME}" \
  --add-content "[{\"pulp_href\": \"${PACKAGE_HREF1}\"}, {\"pulp_href\": \"${PACKAGE_HREF2}\"}, {\"pulp_href\": \"${PACKAGE_HREF3}\"}]" \
  2>&1 >/dev/null | awk '{print $4}')
```
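A single wrapper command could do all of that bookkeeping locally. A rough sketch of just the hashing/payload step for a directory of packages, where `build_add_content` is a hypothetical helper (not an existing pulp-cli command) and the payload shape mirrors the per-package sha256 used in the create step above:

```shell
#!/bin/sh
# Sketch: hash every .rpm in a directory and assemble, in one pass, the JSON
# list that the manual steps above build by hand, one entry per package.
build_add_content() {
    dir="$1"; entries=""
    for f in "$dir"/*.rpm; do
        [ -e "$f" ] || continue            # no matches -> skip the literal glob
        sha=$(sha256sum "$f" | awk '{print $1}')
        entries="${entries:+$entries, }{\"sha256\": \"$sha\"}"
    done
    echo "[$entries]"
}
```

The wrapper would then feed that list through the create/modify calls and surface a single repo version, which is exactly the usability win being asked for.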
This is a LOT of commands for a user to run, and there isn't an easy way for me to give a user a couple of commands to upload an arbitrary list of RPMs. Having a single command that handled everything, took in a list of RPMs, and output a repo version would be much simpler.
An example of what I'm thinking of:
This command would need to handle things such as:
Examples