From the AWS documentation, the following throttling limits apply to using Security Hub API operations:
BatchEnableStandards - RateLimit of 1 request per second, BurstLimit of 1 request per second.
GetFindings - RateLimit of 3 requests per second, BurstLimit of 6 requests per second.
BatchImportFindings - RateLimit of 10 requests per second, BurstLimit of 30 requests per second.
BatchUpdateFindings - RateLimit of 10 requests per second, BurstLimit of 30 requests per second.
UpdateStandardsControl - RateLimit of 1 request per second, BurstLimit of 5 requests per second.
All other operations - RateLimit of 10 requests per second, BurstLimit of 30 requests per second.
There isn't a lot of headroom here. Boto3 already retries throttled requests, but its retry budget is exhausted after a few attempts. On its own, the energy labeler usually runs fine, but once other tools that also talk to the Security Hub API are added to the mix, problems start to appear: both this tool and the others will crash if they don't expect rate limiting to happen.
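For reference, boto3's built-in retry behavior can also be tuned before rolling anything custom. A minimal sketch using the standard botocore config options; the client call and the numbers are just illustrative:

```python
# Raise boto3's retry budget and switch to adaptive mode, so throttled
# Security Hub calls are retried (and client-side rate limited) for longer
# before giving up. The values here are examples, not recommendations.
import boto3
from botocore.config import Config

retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

securityhub = boto3.client("securityhub", config=retry_config)

# Example call that benefits from the extended retry behavior.
findings = securityhub.get_findings(MaxResults=100)
```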
I don't really have a definitive solution for this, but a couple of ideas:
Catch rate-limit exceptions/errors and do some retrying of our own (a rough backoff sketch follows below).
Add an option/hook to intentionally slow down the rate of outgoing API requests (see the second sketch below). This would be handy when you know in advance that other tools will be talking to the Security Hub API at the same time.
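A rough sketch of the first idea, wrapping Security Hub calls and retrying on throttling errors ourselves with exponential backoff; the error codes and limits here are assumptions, not a definitive list:

```python
# Catch throttling errors from boto3 and retry with exponential backoff
# plus jitter. THROTTLE_CODES is an assumed set of error codes, not an
# exhaustive one.
import random
import time

import boto3
from botocore.exceptions import ClientError

THROTTLE_CODES = {"TooManyRequestsException", "ThrottlingException", "Throttling"}

def call_with_backoff(func, *args, max_attempts=8, base_delay=1.0, **kwargs):
    """Call func, retrying on throttling errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return func(*args, **kwargs)
        except ClientError as err:
            code = err.response.get("Error", {}).get("Code", "")
            if code not in THROTTLE_CODES or attempt == max_attempts - 1:
                raise
            # Back off 1s, 2s, 4s, ... plus up to 1s of jitter.
            time.sleep(base_delay * (2 ** attempt) + random.random())

securityhub = boto3.client("securityhub")
findings = call_with_backoff(securityhub.get_findings, MaxResults=100)
```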
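And a rough sketch of the second idea, an opt-in client-side throttle that spaces out outgoing calls; the class name and the requests-per-second budget are made up for illustration:

```python
# Enforce a minimum interval between outgoing API calls across threads,
# so the process stays well under the Security Hub limits even when other
# tools share the quota. The budget would be a user-facing option.
import threading
import time

class RequestThrottle:
    """Enforce a minimum interval between calls, shared across threads."""

    def __init__(self, max_requests_per_second: float):
        self._interval = 1.0 / max_requests_per_second
        self._lock = threading.Lock()
        self._next_slot = 0.0

    def wait(self):
        # Holding the lock while sleeping serializes callers, which is the
        # point: only one request is released per interval.
        with self._lock:
            now = time.monotonic()
            if now < self._next_slot:
                time.sleep(self._next_slot - now)
                now = self._next_slot
            self._next_slot = now + self._interval

# Example: stay under the 3 requests/second GetFindings limit.
throttle = RequestThrottle(max_requests_per_second=2)

def throttled_get_findings(client, **kwargs):
    throttle.wait()
    return client.get_findings(**kwargs)
```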