awx-fuyuanchu opened this issue 1 year ago
I have the same issue
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
bump. any thoughts?
/remove-lifecycle stale
This would mean that we would have state, which is bad in general. I think we do something like this in the AWS provider to work around the problem. I'm not sure anymore, but I think we split the change in half and try both halves: one will succeed and the other will fail. The next iteration then splits the failing quarter again, so we converge to a good state. People have tried multiple times to fall back to single-entry changes, but that quickly consumes all API quotas and you end up being rate limited. I think we should do a binary-search-style apply in general. I can review a PR if someone creates a change.
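A minimal sketch of what that binary-search-style apply could look like; `Change` and `submit` here are hypothetical stand-ins for the provider's batch call, not the real external-dns types:

```go
package main

import "fmt"

// Change is a placeholder for one DNS record change in a batch.
type Change struct {
	Name string // record name, e.g. "a.example."
	Type string // record type, e.g. "TXT"
}

// applyBisect tries the whole batch first; on failure it splits the batch in
// half and retries each half, so one bad entry costs O(log n) extra API calls
// instead of the n single-entry calls that would burn through API quota.
// It returns the entries that still fail on their own.
func applyBisect(batch []Change, submit func([]Change) error) []Change {
	if len(batch) == 0 {
		return nil
	}
	if err := submit(batch); err == nil {
		return nil // whole batch applied
	} else if len(batch) == 1 {
		return batch // a single entry still fails: report it as invalid
	}
	mid := len(batch) / 2
	failed := applyBisect(batch[:mid], submit)
	return append(failed, applyBisect(batch[mid:], submit)...)
}

func main() {
	batch := []Change{{"a.example.", "TXT"}, {"dup.example.", "TXT"}, {"b.example.", "TXT"}}
	// Toy submit that rejects any batch containing the "already exists" record.
	submit := func(cs []Change) error {
		for _, c := range cs {
			if c.Name == "dup.example." {
				return fmt.Errorf("googleapi: Error 409: '%s' already exists", c.Name)
			}
		}
		return nil
	}
	for _, c := range applyBisect(batch, submit) {
		fmt.Printf("skipping invalid entry %s (%s)\n", c.Name, c.Type)
	}
}
```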
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
What would you like to be added: I'm using the Google provider and have an issue while updating DNS records via external-dns. After some investigation, we found that googleapi returned errors telling us it had failed to update a record:

time="2023-01-11T06:29:22Z" level=error msg="googleapi: Error 409: The resource 'entity.change.additions[2]' named 'xxxxxxxxxxx. (TXT)' already exists, alreadyExists"

However, changes are split into batches before being sent to the provider, and if one entry in a batch is invalid, the whole batch fails to apply. Here is the log we found on GCP showing that the request containing the batched changes failed.

So I'm requesting a feature: external-dns could identify the entries that break the request and remove them in the next loop, perhaps handling the invalid entries in a separate loop.

Why is this needed: With this feature, an invalid record won't block the other records.
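One hedged way to do this for the Google provider would be to parse the failing index out of the 409 error and drop that entry before the next loop. A minimal sketch, assuming the error string keeps the `entity.change.additions[N]` format shown in the log above; the regexp and `dropFailing` helper are illustrative, not existing external-dns code:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// additionIndex matches "entity.change.additions[2]" style resource paths
// as they appear in the googleapi 409 error message.
var additionIndex = regexp.MustCompile(`entity\.change\.additions\[(\d+)\]`)

// dropFailing removes the addition named in the error so the remaining
// records can be retried without the invalid one blocking the whole batch.
func dropFailing(additions []string, err error) []string {
	m := additionIndex.FindStringSubmatch(err.Error())
	if m == nil {
		return additions // error did not identify a specific entry
	}
	i, convErr := strconv.Atoi(m[1])
	if convErr != nil || i >= len(additions) {
		return additions
	}
	fmt.Printf("skipping invalid addition %q\n", additions[i])
	return append(additions[:i:i], additions[i+1:]...)
}

func main() {
	adds := []string{"a.example. TXT", "b.example. TXT", "dup.example. TXT"}
	err := fmt.Errorf("googleapi: Error 409: The resource 'entity.change.additions[2]' named 'dup.example. (TXT)' already exists, alreadyExists")
	fmt.Println("retrying with:", dropFailing(adds, err))
}
```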