Closed. Mauraza closed this issue 5 months ago.
/remove-kind bug
node@node:~$ helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
"external-dns" has been added to your repositories
node@node:~$ helm upgrade --install external-dns external-dns/external-dns
Release "external-dns" does not exist. Installing it now.
NAME: external-dns
LAST DEPLOYED: Thu Jun 15 12:36:46 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
* External DNS *
***********************************************************************
Chart version: 1.13.0
App version: 0.13.5
Image tag: registry.k8s.io/external-dns/external-dns:v0.13.5
***********************************************************************
node@node:~$
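A quick way to confirm the rollout after an install like the one above (a sketch; it assumes the default release name in the default namespace and the chart's standard app.kubernetes.io/name label):
$ kubectl rollout status deployment/external-dns
$ kubectl get pods -l app.kubernetes.io/name=external-dns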
@Mauraza please share all the steps you followed.
/kind support
/assign
Hi @kundan2707,
These are the steps I followed:
$ helm upgrade --install external-dns external-dns/external-dns
Release "external-dns" does not exist. Installing it now.
NAME: external-dns
LAST DEPLOYED: Thu Jun 15 10:11:01 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
* External DNS *
***********************************************************************
Chart version: 1.13.0
App version: 0.13.5
Image tag: registry.k8s.io/external-dns/external-dns:v0.13.5
***********************************************************************
but after the pod gets ready, the status changes to Error:
$ k get po -w
NAME READY STATUS RESTARTS AGE
external-dns-6897b97bd8-z58bv 0/1 Running 0 8s
external-dns-6897b97bd8-z58bv 1/1 Running 0 20s
external-dns-6897b97bd8-z58bv 0/1 Error 0 32s
external-dns-6897b97bd8-z58bv 0/1 Running 1 (1s ago) 33s
external-dns-6897b97bd8-z58bv 1/1 Running 1 (8s ago) 40s
external-dns-6897b97bd8-z58bv 0/1 Error 1 (27s ago) 59s
external-dns-6897b97bd8-z58bv 0/1 CrashLoopBackOff 1 (2s ago) 60s
external-dns-6897b97bd8-z58bv 0/1 Running 2 (16s ago) 74s
external-dns-6897b97bd8-z58bv 1/1 Running 2 (22s ago) 80s
external-dns-6897b97bd8-z58bv 0/1 Error 2 (42s ago) 100s
external-dns-6897b97bd8-z58bv 0/1 CrashLoopBackOff 2 (2s ago) 101s
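When a container is in CrashLoopBackOff, the output of the last failed run is still retrievable with kubectl's standard --previous flag (shown here with the pod name from the watch output above):
$ kubectl logs --previous external-dns-6897b97bd8-z58bv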
If I check the logs, this appears:
$ k logs -f external-dns-6897b97bd8-z58bv
time="2023-06-15T08:13:10Z" level=info msg="config: {APIServerURL: KubeConfig: RequestTimeout:30s DefaultTargets:[] ContourLoadBalancerService:heptio-contour/contour GlooNamespace:gloo-system SkipperRouteGroupVersion:zalando.org/v1 Sources:[service ingress] Namespace: AnnotationFilter: LabelFilter: IngressClassNames:[] FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false IgnoreIngressTLSSpec:false IgnoreIngressRulesSpec:false GatewayNamespace: GatewayLabelFilter: Compatibility: PublishInternal:false PublishHostIP:false AlwaysPublishNotReadyAddresses:false ConnectorSourceServer:localhost:8080 Provider:aws GoogleProject: GoogleBatchChangeSize:1000 GoogleBatchChangeInterval:1s GoogleZoneVisibility: DomainFilter:[] ExcludeDomains:[] RegexDomainFilter: RegexDomainExclusion: ZoneNameFilter:[] ZoneIDFilter:[] TargetNetFilter:[] ExcludeTargetNets:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSAssumeRoleExternalID: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AWSPreferCNAME:false AWSZoneCacheDuration:0s AWSSDServiceCleanup:false AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: AzureSubscriptionID: AzureUserAssignedIdentityClientID: BluecatDNSConfiguration: BluecatConfigFile:/etc/kubernetes/bluecat.json BluecatDNSView: BluecatGatewayHost: BluecatRootZone: BluecatDNSServerName: BluecatDNSDeployType:no-deploy BluecatSkipTLSVerify:false CloudflareProxied:false CloudflareDNSRecordsPerPage:100 CoreDNSPrefix:/skydns/ RcodezeroTXTEncrypt:false AkamaiServiceConsumerDomain: AkamaiClientToken: AkamaiClientSecret: AkamaiAccessToken: AkamaiEdgercPath: AkamaiEdgercSection: InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 InfobloxFQDNRegEx: InfobloxNameRegEx: InfobloxCreatePTR:false InfobloxCacheDuration:0 DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml OCICompartmentOCID: OCIAuthInstancePrincipal:false InMemoryZones:[] OVHEndpoint:ovh-eu OVHApiRateLimit:20 PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:upsert-only Registry:txt TXTOwnerID:default TXTPrefix: TXTSuffix: TXTEncryptEnabled:false TXTEncryptAESKey: Interval:1m0s MinEventSyncInterval:5s Once:false DryRun:false UpdateEvents:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s TXTWildcardReplacement: ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: ResolveServiceLoadBalancerHostname:false RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136GSSTSIG:false RFC2136KerberosRealm: RFC2136KerberosUsername: RFC2136KerberosPassword: RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false RFC2136MinTTL:0s RFC2136BatchChangeSize:50 NS1Endpoint: NS1IgnoreSSL:false NS1MinTTLSeconds:0 TransIPAccountName: TransIPPrivateKeyFile: DigitalOceanAPIPageSize:50 ManagedDNSRecordTypes:[A AAAA CNAME] GoDaddyAPIKey: GoDaddySecretKey: GoDaddyTTL:0 GoDaddyOTE:false OCPRouterName: IBMCloudProxied:false IBMCloudConfigFile:/etc/kubernetes/ibmcloud.json TencentCloudConfigFile:/etc/kubernetes/tencent-cloud.json TencentCloudZoneType: PiholeServer: 
PiholePassword: PiholeTLSInsecureSkipVerify:false PluralCluster: PluralProvider:}"
time="2023-06-15T08:13:10Z" level=info msg="Instantiating new Kubernetes client"
time="2023-06-15T08:13:10Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2023-06-15T08:13:10Z" level=info msg="Created Kubernetes client https://10.96.0.1:443"
time="2023-06-15T08:13:36Z" level=fatal msg="records retrieval failed: failed to list hosted zones: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors"
Thanks for your help.
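For context: Provider:aws in the config dump above is the chart's default, and the NoCredentialProviders fatal error is the AWS SDK's default credential chain giving up after checking environment variables, the shared credentials file, and instance metadata in turn, none of which exist on a plain Minikube node. One way to confirm which provider the release is actually running with (a sketch; it assumes the default release name and that the chart passes the provider as a container argument, which is how the upstream chart wires it):
$ kubectl get deployment external-dns -o jsonpath='{.spec.template.spec.containers[0].args}'
If this prints --provider=aws with no credential configuration alongside it, the crash loop above is expected.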
Hi @kundan2707,
Could you reproduce the issue?
@Mauraza
The issue was not reproducible for me.
Hi @kundan2707,
I followed these steps https://github.com/kubernetes-sigs/external-dns/issues/3680#issuecomment-1592487877 in Minikube. Are there any settings I'm missing?
Could you tell me why this error appears?
time="2023-07-05T08:30:30Z" level=fatal msg="records retrieval failed: failed to list hosted zones: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors"
Hi @Mauraza ,
I don't know your exact setup, but I encountered the very same issue while deploying external-dns using the Helm chart.
After hours of digging, I found that an environment variable was missing (it's also not documented at all in the chart).
So here are the Helm values that made it work for me:
namespaced: true
triggerLoopOnEvent: true
env:
  - name: AWS_SHARED_CREDENTIALS_FILE
    value: /.aws/credentials
secretConfiguration:
  enabled: true
  mountPath: /.aws
  data:
    credentials: |
      [default]
      aws_access_key_id = *********
      aws_secret_access_key = *************
The missing part was the value under the env key, which tells the external-dns AWS provider where to find the credentials.
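For reference, with those values saved to a file (values.yaml below is just an illustrative name), the existing release can be updated in place and the logs re-checked:
$ helm upgrade --install external-dns external-dns/external-dns -f values.yaml
$ kubectl logs -f deployment/external-dns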
Have a good day!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The AWS_SHARED_CREDENTIALS_FILE solution above works for me as well; unfortunately, the documentation could be better for this case.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened: I'm trying to deploy the latest version of external-dns with the default values, and the following error appears:
What you expected to happen: The install goes well and the pod doesn't end up in CrashLoopBackOff status.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
External-DNS version (use external-dns --version): v20230529-v0.13.5