kmai opened 4 years ago
Could you please open issue in https://github.com/goharbor/harbor-helm ?
This doesn't have to do with helm. This is the binary not supporting this CredentialProvider.
@kmai Had the same issue here using AWS IRSA. As far as I could tell, the Harbor helm chart makes the Registry service use an internal credential to authenticate to AWS, which causes an access denied error: https://github.com/goharbor/harbor-helm/blob/master/values.yaml#L480
We ended up using an access key/secret key pair to authenticate against AWS, even though that doesn't solve the underlying issue. Hope this helps others avoid getting stuck on the same problem.
Unfortunately, AWS users are currently forced to create an IAM user and inject the secret and access key instead of using the preferred approach of IRSA. The Harbor registry does not support AssumeRoleWithWebIdentity at the moment.
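For anyone forced into that workaround, the static-credential S3 configuration in the harbor-helm chart looks roughly like this (field layout per the chart's `values.yaml`; region, bucket, and key values here are placeholders):

```yaml
persistence:
  imageChartStorage:
    type: s3
    s3:
      region: us-east-1            # placeholder
      bucket: my-harbor-bucket     # placeholder
      accesskey: AKIA...           # static IAM user key, injected instead of using IRSA
      secretkey: "<secret>"        # placeholder
```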
Is there any chance to reopen this or should I create a new one? @stonezdj
Here is also an issue about it: https://github.com/goharbor/harbor-helm/issues/725
How did you solve this? I have the same problem reported by @rsilva-nk. In values.yaml, do I add the config block under Registry and use the AWS access key and secret key as credentials?
Like:
```yaml
registry:
  serviceAccountName: ""
  registry:
    ...
    ...
    ...
  credentials:
    username: "accesskey"
    password: "secretkey"
```
Hello. It was my mistake. My Harbor user policy in AWS was incorrect: I was granting permission to the object, not to the bucket.
Why was this closed? It sounds like there is an issue in the code, not the helm chart.
@stonezdj This had nothing to do with the Helm chart, could you re-open this?
If you annotate the registry pods with a Service Account that has an IRSA role attached to it, registry (at least) does not use it. This still appears to be the case in version 2.2.1 of Harbor.
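For reference, the setup that fails is roughly this: a ServiceAccount annotated for IRSA, which the chart then assigns to the registry pods (the role ARN and names below are placeholders):

```yaml
# ServiceAccount annotated for IRSA (role ARN is a placeholder)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: harbor-registry
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/harbor-s3
```

With `registry.serviceAccountName: harbor-registry` set in the chart values, the pods carry the annotation, but the registry binary still does not pick up the web identity credentials.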
The Go SDK got support for IRSA a while ago, so I'm not sure why it is still failing. I do see something from Harbor using my IRSA IAM Role, but docker pulls against an S3 registry backend fail with permission denied errors. We've had to revert to our old method of providing credentials to Harbor by attaching the permissions to the Harbor EC2 node's IAM policies (not static keys, but a similar/older AWS auth method).
maybe related, unresolved issue: distribution/distribution#3275
upstream distribution is what provides the s3 backend for harbor
Also slightly related: we're not running into this exact issue, but we use kiam to assume a role. If the permissions on your role are slightly off, it throws a very unhelpful error. Linking in case others land here with the same issue. https://github.com/goharbor/harbor/issues/14792#issuecomment-832756994
I hope they can resolve IRSA issues, we plan to move to it soon as well.
This issue should be re-opened; I ran into the same problem today. I'm trying to run Harbor on Fargate, where I don't have the option of applying the IAM policies to an EC2 node, so I have to revert all the way back to an IAM access/secret key.
Seeing the same issue in EC2, it's the last thing we have that can't move to IRSA
We are experiencing the same issue. Can this please be re-opened, since it appears to have been closed in error? Thanks
harbor-core and docker distribution both access S3.
Reopening this issue would only make sense if docker distribution supported IRSA.
I'm sorry, but this should not have been closed. This is something that is not supported (yet) and should still be tracked.
I won't use harbor if I have to hardcode credentials everywhere, it's just dumb.
Opened this issue in docker's distribution/distribution project (which is what Harbor uses as its registry service), having confirmed that rebuilding the harbor/registry image with the `registry` binary compiled from the `main` branch (or `edge` tag) of that project resolves the problems we were having with IRSA, which were the same problems as those shared in this issue and elsewhere. To be absolutely clear, these are the errors we were seeing prior to rebuilding the `goharbor/registry` image with the very latest `registry` binary from `distribution`:
```
time="2022-10-25T20:55:40.080713951Z" level=error msg="response completed with error" auth.user.name="harbor_registry_user" err.code=unknown err.detail="s3aws: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors" err.message="unknown error" go.version=go1.17.7 http.request.host=<REDACTED> http.request.id=721d9cad-4893-4fa1-8704-db965a992ead http.request.method=HEAD http.request.remoteaddr=<REDACTED> http.request.uri="/v2/thanos/thanos/blobs/sha256:9ca801fd774b0039343d2b85a7f628cbdb23dfdb90c4d390cdc3230f5e3af836" http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/3.10.0-1160.11.1.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.17 \(linux\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=137.472283ms http.response.status=500 http.response.written=104 vars.digest="sha256:9ca801fd774b0039343d2b85a7f628cbdb23dfdb90c4d390cdc3230f5e3af836" vars.name="thanos/thanos"
```
Could someone confirm if this could be related to this issue?
It looks like it is simply impossible to specify an AWS IAM role to assume, nor is there any default role that gets assumed.
Semi-serious question, can distribution/distribution be migrated away from? The last v2.8.1 release is from March 2022 and there's still no sign of a v3 release so the project is now stuck in this state of limbo.
I've just come back from KubeCon EU 2023 itching to try out Harbor and now I find out I've got to go back to configuring access keys for S3 instead of just using IRSA like everything else in my cluster. That seems a pretty backward step.
@bodgit you can always build and run your own docker/distribution fork. It is not complicated.
At Harbor, we would only consider applying upstream patches if they are merged but not yet released. So this can only be done once those two are applied:
You can all help accelerate the approval process, by testing in your environments and verifying the correctness.
> You can all help accelerate the approval process, by testing in your environments and verifying the correctness.
If you mean the approval process in the distribution PR, the maintainers have already stated they're not interested in bumping the AWS SDK in the 2.8.x branch, hence we're in this situation waiting for the 3.x release.
We already apply unreleased patches from upstream docker distribution, but only those that are merged but just not yet released. We can do the same in this case.
My statement should therefore serve as an encouragement to support the people of Docker Distribution.
> We already apply unreleased patches from upstream docker distribution, but only those that are merged but just not yet released. We can do the same in this case.
That's awesome @Vad1mo! Slightly unrelated, but I just wanted to point out that at least one patch is currently applied to Harbor's registry artifact that was not accepted/merged upstream.
The redis sentinel patch: https://github.com/goharbor/harbor/blob/main/make/photon/registry/redis.patch
This was the upstream PR: https://github.com/distribution/distribution/pull/2886 but it looks like it was not accepted.
I came across this recently while building a custom registry image to test out the Concurrent tag lookup & Updated GCS library PRs.
These changes, which will hopefully be accepted/merged, improve GC performance dramatically!
> My statement should therefore serve as an encouragement to support the people of Docker Distribution.
👍🏼 ❤️
My PR https://github.com/goharbor/harbor/pull/18686 solves using Harbor with IRSA.
The same issue has been raised here: https://github.com/goharbor/harbor/issues/18699. Closed as a duplicate.
Hello Team,
Given that the AWS SDK supports assuming a role, pods running in EKS/GKE with AWS S3 as the storage target should be able to assume a role to connect to the S3 buckets.
An example/brief can be found here: https://confluence.eng.vmware.com/display/public/AEAV/Service+User+Model
Versions: Please specify the versions of the following systems.
- harbor version: 2.3.3 (via helm chart)
- kubernetes: 1.20.6
- Cluster: GKE
- Storage: AWS S3
Expected behavior and actual behavior:
Expected: Pods using ServiceAccounts annotated to assume a role should have access to (or be denied) resources as specified in the policies attached to the role. The assume-role credentials generate a session token valid for 12 hours, so a mechanism is needed to re-establish the connection with AWS before the session token expires.
Actual: Since the pods are not assuming the role, one cannot, for instance, use S3 as a storage backend for the registry. Harbor currently doesn't support this kind of AWS connectivity, and it is a blocker for us to get on-boarded with VMware CloudGate.
It would be great if this feature request could be prioritised.
BLOCKER: This is currently a hard blocker preventing our TanzuNet Production AWS accounts from being on-boarded with VMware CloudGate. Having this feature implemented resolves our blocker.
Let me know if any further details are required.
Thanks,
To add weight: TanzuNetwork Production has a hard dependency on the assume-role feature, driven by organisational legal requirements. There is currently zero tolerance for non-compliance, and it is important for the team to keep the production platform compliant. https://github.com/goharbor/harbor/issues/18699 has been closed as a duplicate and is tracked here.
@Tejuvmware @GowriRegistry Other than using a patched version of the registry like in my PR, I'd like to suggest allocating resources to the upstream distribution/distribution project so that the next release (v3) can be cut soon. The current release branch will not have IRSA fixed, but it is already fixed on the main branch of distribution/distribution.
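For anyone running a patched registry in the meantime, the harbor-helm chart allows pointing the registry component at a custom image. A minimal sketch, assuming you have pushed your own build somewhere (repository and tag below are placeholders):

```yaml
registry:
  registry:
    image:
      repository: my-org/registry-photon   # custom build with the updated AWS SDK (placeholder)
      tag: custom-irsa                     # placeholder tag
```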
Update: there is now a new alpha release of distribution that includes the updated Go SDK. It is an alpha release, though. What is the appetite for using the alpha release in the Makefile?
There's also v3.0.0-beta.1 now: https://github.com/distribution/distribution/releases/tag/v3.0.0-beta.1
The Harbor team is keeping an eye on the upstream releases, and we plan to do the integration testing & validation. But please be aware that since distribution is a crucial component of Harbor, this bump may take longer than others.
Given that the AWS SDK supports assuming a role through a WebIdentity, pods running in EKS should be able to assume a role with a web identity as documented here
Expected behavior and actual behavior:
Expected: Pods using ServiceAccounts annotated to assume a role with a web identity should have access to (or be denied) resources as specified in the policies attached to the role.
Actual: Since pods are not assuming the role, one cannot, for instance, use s3 as a storage backend for the registry.
Steps to reproduce the problem: On EKS, annotate the serviceAccount and cycle the pods; you will see the environment variables the AWS SDK needs, but the role is not assumed. Instead, the EC2 instance role is used.
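Concretely, after annotating the ServiceAccount and restarting the pods, EKS's pod identity webhook injects environment variables like these into the container spec (values below are placeholders), yet the registry still falls back to the instance role:

```yaml
# Environment injected automatically by the EKS pod identity webhook
# (role ARN is a placeholder)
env:
  - name: AWS_ROLE_ARN
    value: arn:aws:iam::123456789012:role/harbor-s3
  - name: AWS_WEB_IDENTITY_TOKEN_FILE
    value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
```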
Versions: Please specify the versions of the following systems.
Additional context: