Open maxbog opened 3 months ago
Hello, interesting, and you're probably right. WDYT, @ThaSami?
I believe I am seeing this issue as well. Definitely paying attention to that PR.
@JorTurFer any chance for a review and, hopefully, merge of the attached PR?
@JorTurFer bumping for review on the PR
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity.
Can this be re-opened until the next release occurs?
Report
Hello, I have two deployments using ScaledObjects based on SQS queues in different regions (say, eu-central-1 and us-east-1), and I want to authenticate to AWS using pod identity. The first ScaledObject (the one for eu-central-1) authenticates correctly, and the resulting AWS config (region included) is stored in the config cache. The second ScaledObject then fails to start: the operator tries to connect to a queue in us-east-1, but the cached config still carries the region of the first queue (eu-central-1). If I understand the code correctly, the getCacheKey function here: https://github.com/kedacore/keda/blob/85d4dca17f9e2e58bdc91f046e6dbe8e6235e78f/pkg/scalers/aws/aws_config_cache.go#L71 needs to include the region in the returned string so that configs are cached per region.
Expected Behavior
Both ScaledObjects report as Ready
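The cache-key change proposed in the report could be sketched as below. This is a minimal illustration, not KEDA's actual code: the type and field names (authorizationMetadata, roleArn, region) are placeholders, and the real cache stores aws.Config values and hashes more fields into the key.

```go
package main

import "fmt"

// authorizationMetadata loosely mirrors the identity fields KEDA's AWS
// scalers carry; the names here are illustrative, not KEDA's actual types.
type authorizationMetadata struct {
	roleArn string
	region  string // the field the report argues must be part of the key
}

// getCacheKey builds the config-cache key. Including the region means a
// ScaledObject targeting us-east-1 no longer reuses a cached config that
// was built for eu-central-1 under the same identity.
func getCacheKey(a authorizationMetadata) string {
	return fmt.Sprintf("%s@%s", a.roleArn, a.region)
}

func main() {
	eu := authorizationMetadata{roleArn: "arn:aws:iam::123:role/app", region: "eu-central-1"}
	us := authorizationMetadata{roleArn: "arn:aws:iam::123:role/app", region: "us-east-1"}
	// Same identity, different regions -> distinct cache entries.
	fmt.Println(getCacheKey(eu))
	fmt.Println(getCacheKey(us))
}
```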
Actual Behavior
Only the first ScaledObject becomes ready; the second one never authenticates successfully.
Steps to Reproduce the Problem
1. Create two queues in different regions
2. Create ScaledObjects for them using pod identity as the auth mechanism
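A reproduction along the lines of the steps above might look like the following manifests (queue URLs, account ID, and workload names are placeholders; only one of the two ScaledObjects is shown in full):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: aws-pod-identity
spec:
  podIdentity:
    provider: aws
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-eu
spec:
  scaleTargetRef:
    name: worker-eu
  triggers:
    - type: aws-sqs-queue
      authenticationRef:
        name: aws-pod-identity
      metadata:
        queueURL: https://sqs.eu-central-1.amazonaws.com/123456789012/queue-eu
        awsRegion: eu-central-1
        queueLength: "5"
---
# A second ScaledObject, identical except that it points at a queue in
# us-east-1 with awsRegion: us-east-1, reproduces the failure: it reuses
# the eu-central-1 config cached for the same pod identity.
```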
Logs from KEDA operator
KEDA Version
2.15.0
Kubernetes Version
1.30
Platform
Amazon Web Services
Scaler Details
AWS SQS
Anything else?
No response