openshift / oc-mirror

Lifecycle manager for internet-disconnected OpenShift environments
Apache License 2.0

oc-mirror unexpectedly deletes images from the registry (or from the generated index if we use the --skip-pruning option to avoid deletion of images from the registry) #693

Open sdesousa86 opened 1 year ago

sdesousa86 commented 1 year ago

Version

$ oc-mirror version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"4.13.0-202307242035.p0.gf11a900.assembly.stream-f11a900", GitCommit:"f11a9001caad8fe146c73baf2acc38ddcf3642b5", GitTreeState:"clean", BuildDate:"2023-07-24T21:25:46Z", GoVersion:"go1.19.10 X:strictfipsruntime", Compiler:"gc", Platform:"linux/amd64"}

What happened?

Hello,

We are using oc-mirror to create two .tar archives containing a set of operators (each pinned to a well-defined version), and we are facing an issue when trying to upload the second archive to our private registry.

imageset-config-01.txt imageset-config-02.txt

Both imageset files are almost identical, apart from the addition of the openshift-gitops-operator operator in the second one.
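
For illustration, a minimal sketch of what the operators section of the two imageset files might look like (the package names and versions below are placeholders, not the exact contents of the attached files; only the openshift-gitops-operator entry differs between the two):

# imageset-config-01.yaml (sketch; package names and versions are placeholders)
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  local:
    path: /local/oc-mirror/working-dir
mirror:
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
      packages:
        - name: kubernetes-nmstate-operator
          minVersion: "4.13.0"
          maxVersion: "4.13.0"

# imageset-config-02.yaml (sketch) is identical, with one extra package entry:
#       - name: openshift-gitops-operator
#         minVersion: "1.9.1"
#         maxVersion: "1.9.1"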

Generating the archives works without issue, and the first archive uploads correctly: we are able to install and configure our OCP cluster with the mirrored images. However, when we try to upload the second archive (and thereby add the openshift-gitops-operator operator to our cluster OperatorHub), the oc-mirror command raises multiple VIOLATE_FOREIGN_KEY_CONSTRAINT: the deleting artifact is referenced by others and NOT_FOUND: artifact... errors and crashes.

N.B.: We also observed that the oc-mirror command prunes some images before raising these errors.

oc-mirror-error-on-upload-when-skip-pruning-not-used.log

We performed a second test, using the --skip-pruning option for the second upload command. This time the oc-mirror command did not crash and we were able to add the openshift-gitops-operator operator to our OCP OperatorHub. Unfortunately, while comparing the image indexes of the redhat-operator-index, we observed that images which were not deleted from the registry (thanks to the --skip-pruning option) had nevertheless been removed from the index.

indexes-dump-1.log indexes-dump-2.log
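
For reference, one way to produce such an index dump and compare the two states is to render the mirrored catalog image with opm and list its bundle names (the catalog path and tag below are assumptions about what oc-mirror created in our registry; adjust to your actual reference):

# List bundle names in the mirrored catalog (catalog path/tag is an assumption)
$ opm render harbor.dtitls2.dsna.cloud/oc-mirror/redhat/redhat-operator-index:v4.13 \
    | jq -r 'select(.schema == "olm.bundle") | .name' | sort > indexes-dump-2.log

# Diff against the dump taken after the first upload
$ diff indexes-dump-1.log indexes-dump-2.log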

What did you expect to happen?

Since we define a specific version for each operator in our imageset files and are simply adding a new operator to the list of operators to mirror (there is no version update for the already listed operators), oc-mirror should not prune any images from the registry; it should only add the images required by the newly added operator, and it should not crash.

Furthermore, no images should be removed from the index (unlike what we observed when using the --skip-pruning option to work around the problems encountered in the first test).

How to reproduce it (as minimally and precisely as possible)?

Generate the .tar archives from the imageset configurations:

# Generate first .tar file
$ oc mirror --config=imageset-config-01.yaml file:///local/oc-mirror/working-dir

# Generate second .tar file
$ oc mirror --config=imageset-config-02.yaml file:///local/oc-mirror/working-dir

First test (without --skip-pruning option)

# Upload the first .tar file to the private registry
$ oc mirror --from=./mirror_seq1_000000.tar docker://harbor.dtitls2.dsna.cloud/oc-mirror

# Try to upload the second .tar file to the private registry
$ oc mirror --from=./mirror_seq2_000000.tar docker://harbor.dtitls2.dsna.cloud/oc-mirror

=> ERROR

Second test (with --skip-pruning option)

# Upload the first .tar file to the private registry
$ oc mirror --from=./mirror_seq1_000000.tar docker://harbor.dtitls2.dsna.cloud/oc-mirror

# Try to upload the second .tar file to the private registry
$ oc mirror --from=./mirror_seq2_000000.tar docker://harbor.dtitls2.dsna.cloud/oc-mirror --skip-pruning

=> Command successful, but images were removed from the index.
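
A quick way to confirm that an image is still present in the registry even though it no longer appears in the index is to inspect it directly; a sketch, assuming skopeo is available and using a placeholder image reference:

# Inspect a mirrored image directly in the registry (reference is a placeholder)
$ skopeo inspect docker://harbor.dtitls2.dsna.cloud/oc-mirror/<namespace>/<image>:<tag>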

Anything else we need to know?

N/A

References

N/A

dadav commented 12 months ago

We also have this problem. Our graph-data images get deleted, even though we use --skip-pruning in our automation script.
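
For context, the graph-data image is generated when the ImageSetConfiguration enables the update graph for the OpenShift Update Service; a minimal sketch of such a setup (the channel name and registry are placeholders, not the exact configuration used here):

# ImageSetConfiguration excerpt (sketch): graph: true produces the graph-data image
mirror:
  platform:
    graph: true
    channels:
      - name: stable-4.13

# Automation step that should publish without pruning (registry is a placeholder)
$ oc mirror --from=./mirror_seq2_000000.tar docker://registry.example.com/oc-mirror --skip-pruning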

openshift-bot commented 9 months ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

dadav commented 8 months ago

/remove-lifecycle stale

openshift-bot commented 5 months ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 4 months ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

dadav commented 4 months ago

/remove-lifecycle rotten

openshift-bot commented 1 month ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 3 weeks ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale