goharbor / harbor

An open source trusted cloud native registry project that stores, signs, and scans content.
https://goharbor.io
Apache License 2.0

Migrate Harbor instance from Filesystem/Block storage to Object Storage #18843

Open lusu007 opened 1 year ago

lusu007 commented 1 year ago

I currently have a Harbor instance running in my Kubernetes cluster, and I'm interested in transitioning the storage backend from block storage to object storage (specifically, S3). Is there a straightforward way, or a specific feature within Harbor, to facilitate this migration?

stonezdj commented 1 year ago

You could do it by replication.
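For context, replication in Harbor is driven by registry endpoints and replication policies, both manageable over the REST API. A rough sketch of the calls involved (hosts and credentials here are hypothetical; payload fields follow the Harbor v2 API as I understand it, so verify against your version):

```shell
# Hypothetical source instance for illustration only.
OLD_HARBOR="https://old-harbor.example.com"

# 1) Register the target instance as a registry endpoint on the source:
endpoint_payload=$(cat <<'EOF'
{
  "name": "new-harbor",
  "type": "harbor",
  "url": "https://new-harbor.example.com",
  "credential": {"type": "basic", "access_key": "admin", "access_secret": "<password>"}
}
EOF
)
#   curl -u "admin:$PASS" -H 'Content-Type: application/json' \
#        -d "$endpoint_payload" "$OLD_HARBOR/api/v2.0/registries"
#
# 2) Create a push-mode replication policy referencing that endpoint
#    via POST /api/v2.0/replication/policies, then
# 3) trigger it with POST /api/v2.0/replication/executions and {"policy_id": <id>}.
```

Note this only copies artifacts; as discussed below, project configuration, retention policies, and robot accounts are not carried over.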

CoderTH commented 1 year ago

> You could do it by replication.

We have the same problem. If replication is used, configuration related to image retention is lost and has to be reconfigured manually (and that is just one example). What I want to do is migrate the entire Harbor instance to another cluster and switch the storage to S3. Can replication solve this as well?

lusu007 commented 1 year ago

Luckily, I successfully replicated the images from my previous registry, allowing me to fully redeploy Harbor and replicate all the data once more.

Implementing an automated migration to different storage types would greatly enhance the practicality of the process.

CoderTH commented 1 year ago

> Luckily, I successfully replicated the images from my previous registry, allowing me to fully redeploy Harbor and replicate all the data once more.
>
> Implementing an automated migration to different storage types would greatly enhance the practicality of the process.

May I ask how you did this? Can data other than the images be migrated as well?

lusu007 commented 1 year ago

> May I ask how you did this? Can data other than the images be migrated as well?

I don't know if I understand your question correctly, but I'll try to answer it.

I deployed Harbor via ArgoCD into my production cluster using the Bitnami Helm Chart. A migration from Harbor directly would only be possible if I were to deploy another instance of Harbor at the same time. Then I could utilize the replication feature of Harbor.

However, I believe a storage migration should be possible without deploying a second Harbor instance...
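For reference, the registry component stores blobs through a Docker Distribution storage driver, so the chart itself can point directly at S3. With the official goharbor/harbor-helm chart the values look roughly like the sketch below (the Bitnami chart uses a different key layout; bucket and endpoint are hypothetical). Note that changing the driver alone does not move existing blobs; the existing blob tree has to be copied into the bucket separately (e.g. with `rclone` or `aws s3 sync`), ideally while Harbor is stopped.

```yaml
# Sketch for the official goharbor/harbor-helm chart; values are placeholders.
persistence:
  imageChartStorage:
    type: s3
    s3:
      region: us-east-1
      bucket: harbor-registry                 # pre-created bucket
      accesskey: <access-key>
      secretkey: <secret-key>
      regionendpoint: https://s3.example.com  # only needed for non-AWS S3-compatible stores
```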

stonezdj commented 1 year ago

> > You could do it by replication.
>
> We have the same problem. If replication is used, configuration related to image retention is lost and has to be reconfigured manually (and that is just one example). What I want to do is migrate the entire Harbor instance to another cluster and switch the storage to S3. Can replication solve this as well?

The configuration and retention policies are not replicated; you need to set them up manually.

CoderTH commented 1 year ago

> > > You could do it by replication.
> >
> > We have the same problem. If replication is used, configuration related to image retention is lost and has to be reconfigured manually (and that is just one example). What I want to do is migrate the entire Harbor instance to another cluster and switch the storage to S3. Can replication solve this as well?
>
> The configuration and retention policies are not replicated; you need to set them up manually.

Image-retention settings can be reconfigured by hand, but some configuration items are much harder. For example, we have many robot accounts used by a large number of CI/CD pipelines; with this kind of migration, all of those accounts would have to be recreated manually, which is a significant cost.
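Robot accounts can at least be enumerated and recreated over the REST API rather than by hand. A rough sketch (hostnames are hypothetical, and payload fields are from memory, so check them against your API version; note that Harbor generates a fresh secret for every robot it creates, so pipelines need new credentials either way):

```shell
# harbor_api: build a Harbor v2 API URL from a base URL and a path.
harbor_api() {
  echo "${1%/}/api/v2.0/$2"
}

# Hypothetical instances for illustration only.
SRC="https://old-harbor.example.com"
DST="https://new-harbor.example.com"

# Export robot accounts from the old instance (admin credentials required):
#   curl -sf -u "admin:$PASS" "$(harbor_api "$SRC" 'robots?page_size=100')" > robots.json
#
# Recreate each on the new instance; the response to each POST contains the
# newly generated secret, which the corresponding pipeline must be given:
#   jq -c '.[] | {name, duration, level, permissions, description}' robots.json |
#   while read -r robot; do
#     curl -sf -u "admin:$PASS" -X POST -H 'Content-Type: application/json' \
#       -d "$robot" "$(harbor_api "$DST" robots)"
#   done
```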

CoderTH commented 1 year ago

> > > You could do it by replication.
> >
> > We have the same problem. If replication is used, configuration related to image retention is lost and has to be reconfigured manually (and that is just one example). What I want to do is migrate the entire Harbor instance to another cluster and switch the storage to S3. Can replication solve this as well?
>
> The configuration and retention policies are not replicated; you need to set them up manually.

Is there a solution for migrating a Harbor instance completely? Has anyone done this before?

pkalemba commented 12 months ago

We have all projects / replications / registry / retention configs in Terraform code, so changing instances is not a big pain.
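As a concrete illustration of that approach, here is a minimal sketch using the community `goharbor/harbor` Terraform provider (resource and attribute names are from memory, and the endpoint and names are hypothetical; treat this as a starting point rather than a verified configuration):

```hcl
terraform {
  required_providers {
    harbor = {
      source = "goharbor/harbor"
    }
  }
}

provider "harbor" {
  url      = "https://harbor.example.com"  # hypothetical endpoint
  username = "admin"
  password = var.harbor_password
}

variable "harbor_password" {
  type      = string
  sensitive = true
}

# Projects, registries, replications, and retention policies can all be
# declared like this and re-applied against a fresh instance.
resource "harbor_project" "example" {
  name   = "my-project"
  public = false
}
```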

lusu007 commented 12 months ago

> We have all projects / replications / registry / retention configs in Terraform code, so changing instances is not a big pain.

I have effectively configured my settings using Terraform. However, the challenge lies not in the configuration migration, but rather in the migration of existing images. Currently, it is not feasible to perform a storage migration without creating a new instance and replicating the existing one. Nevertheless, I believe there should be a way to switch storage without necessitating the creation of a new instance.

github-actions[bot] commented 10 months ago

This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.

lusu007 commented 10 months ago

This isn't stale.

github-actions[bot] commented 8 months ago

This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.

lusu007 commented 8 months ago

This isn't stale.

github-actions[bot] commented 6 months ago

This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.

lusu007 commented 6 months ago

This issue isn't stale. Is it possible to add a label to prevent further stale markings? @stonezdj

github-actions[bot] commented 3 months ago

This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.

lusu007 commented 3 months ago

This issue isn't stale. Is it possible to add a label to prevent further stale markings? @stonezdj

github-actions[bot] commented 1 month ago

This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.

lusu007 commented 1 month ago

This issue isn't stale. Is it possible to add a label to prevent further stale markings? @stonezdj