enkuru opened 1 year ago
I do have the very same problem, did you find a workaround?
using .autoStartup(enabled) worked for me
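For anyone else landing here, a minimal sketch of that workaround, assuming the container options builder exposes an `autoStartup(...)` setting as the comment above implies (availability may depend on the spring-cloud-aws version in use). Disabling auto-start keeps the listener containers from polling until they are started manually, once the queue and credentials are known to be valid:

```java
import io.awspring.cloud.sqs.config.SqsMessageListenerContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.services.sqs.SqsAsyncClient;

@Configuration
public class SqsListenerStartupConfig {

    @Bean
    SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory(SqsAsyncClient sqsAsyncClient) {
        return SqsMessageListenerContainerFactory
                .builder()
                .sqsAsyncClient(sqsAsyncClient)
                // keep containers stopped at startup; start them manually later
                // (assumes autoStartup is available on the options builder)
                .configure(options -> options.autoStartup(false))
                .build();
    }
}
```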
Same infinite polling loop with authorization issues:
io.awspring.cloud.sqs.listener.source.AbstractPollingMessageSource - Error polling for messages in queue https://sqs.eu-central-1.amazonaws.com/.../queue.fifo
java.util.concurrent.CompletionException: software.amazon.awssdk.services.sqs.model.SqsException: The security token included in the request is invalid. (Service: Sqs, Status Code: 403, Request ID: 6f564cea-...)
at software.amazon.awssdk.utils.CompletableFutureUtils.errorAsCompletionException(CompletableFutureUtils.java:65)
or
io.awspring.cloud.sqs.listener.source.AbstractPollingMessageSource - Error polling for messages in queue https://sqs.eu-central-1.amazonaws.com/.../queue.fifo
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to load credentials from any of the providers in the chain AwsCredentialsProviderChain(credentialsProviders=[SystemPropertyCredentialsProvider(), EnvironmentVariableCredentialsProvider(), WebIdentityTokenCredentialsProvider(), ProfileCredentialsProvider(profileName=default, profileFile=ProfileFile(sections=[])), ContainerCredentialsProvider
We also found no way to configure a back-off for these polling errors.
I have the same issue. Any workarounds? Thanks in advance.
The log entries above were repeated nearly 1 million times within a 25-minute period after the related queue was accidentally deleted. This maxed out the CPU on the servers (on Elastic Beanstalk), which then stopped receiving incoming requests.
Our message-consuming class is:
and our configuration class is:
As you can see, we have a simple configuration. We tried to find a way to slow down the polling rate on failure, but we did not succeed.
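(The listener and configuration classes referenced above are not reproduced in this thread; purely as an illustration, the kind of minimal setup being described looks roughly like the sketch below, with hypothetical class and queue names.)

```java
import io.awspring.cloud.sqs.annotation.SqsListener;
import io.awspring.cloud.sqs.config.SqsMessageListenerContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
import software.amazon.awssdk.services.sqs.SqsAsyncClient;

@Component
class OrderMessageListener {

    // hypothetical queue name; the real one is a FIFO queue, as the logs show
    @SqsListener("queue.fifo")
    void onMessage(String payload) {
        // handle the message
    }
}

@Configuration
class SqsConfig {

    @Bean
    SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory(SqsAsyncClient sqsAsyncClient) {
        return SqsMessageListenerContainerFactory
                .builder()
                .sqsAsyncClient(sqsAsyncClient)
                .build();
    }
}
```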
I think the issue is in this code, as it does not account for situations like these and keeps polling indefinitely:
AbstractPollingMessageSource.java::191
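From the behaviour above, that code appears to reschedule the next poll immediately even after a failure, so a deleted or unauthorized queue turns into a tight error loop. Purely as an illustration (not the library's actual source), the kind of back-off several of us are asking for would look something like this, with the SQS receive call stubbed out:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: a polling loop that backs off exponentially after
// consecutive failures instead of re-polling immediately.
public class BackingOffPoller {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final AtomicInteger consecutiveFailures = new AtomicInteger();

    public void start() {
        pollAndReschedule();
    }

    private void pollAndReschedule() {
        poll().whenComplete((messages, error) -> {
            long delayMillis = 0;
            if (error != null) {
                // grow the delay: 500ms, 1s, 2s, 4s, ... capped at 30s
                int failures = consecutiveFailures.incrementAndGet();
                delayMillis = Math.min(30_000L, 500L << Math.min(failures - 1, 6));
            } else {
                consecutiveFailures.set(0);
            }
            scheduler.schedule(this::pollAndReschedule, delayMillis, TimeUnit.MILLISECONDS);
        });
    }

    // Stand-in for the actual SQS receive call.
    private CompletableFuture<List<String>> poll() {
        return CompletableFuture.completedFuture(List.of());
    }
}
```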