Closed · jcjp closed this issue 1 year ago
Dyngoose uses the DynamoDB defaults for new tables, which are very minimal and not well suited to production. By default, it uses provisioned capacity with 5 read and 5 write capacity units.
I recommend switching the billing mode to per-request, which is available as a property on the @Table decorator. It will prevent the issue you mentioned, works great for most tables in production, and is especially well suited to development. If you have a table with predictable usage, I would recommend auto-scaling the capacity and pre-paying for capacity units to save money. I don't think there is a common use case for provisioned capacity without auto scaling.
If you stay with provisioned capacity, you can also configure the units and enable auto scaling from the table decorator properties (auto scaling only works with CDK, not plain CloudFormation templates).
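For illustration, a sketch of what switching a Dyngoose table to per-request billing might look like. The table name, attribute, and especially the option names (`billingMode` and its value) are assumptions here, not confirmed Dyngoose API — check the `$Table` decorator's typings in your installed version for the exact property names:

```typescript
import { Dyngoose } from 'dyngoose'

// Sketch only: the exact option name and enum for per-request billing
// are assumptions — verify against Dyngoose's $Table property typings.
@Dyngoose.$Table({
  name: 'Orders', // hypothetical table name
  billingMode: Dyngoose.BillingMode.PayPerRequest, // on-demand: no 5 RCU/WCU ceiling to exceed
})
class Order extends Dyngoose.Table {
  @Dyngoose.Attribute.String()
  public id: string

  @Dyngoose.$PrimaryKey('id')
  static readonly primaryKey: Dyngoose.Query.PrimaryKey<Order, string>
}
```

With on-demand billing there are no provisioned units to throttle against under normal burst traffic, which is why it sidesteps the error below.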
Hopefully that helps.
Thanks, that's very helpful. Since we don't use Dyngoose to configure our AWS resources, we will have to do that in the AWS console itself or via our infrastructure code.
Description:
I am encountering an error when interfacing with AWS DynamoDB: updating consecutive items throws a throttle error.
The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
I found this on the AWS Knowledge Center and was wondering what the default configuration of Dyngoose is, and whether we can manually configure exponential backoff, which might solve the issue: Why is my Amazon DynamoDB table being throttled?
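In case it's useful while investigating: exponential backoff can also be applied at the application level by wrapping the update calls. The sketch below is a generic, hypothetical helper (not part of Dyngoose) that retries an async operation with exponential backoff and full jitter when a throttling error is thrown; the error name matches DynamoDB's `ProvisionedThroughputExceededException`:

```typescript
// Hypothetical helper: retries an async operation with exponential
// backoff plus full jitter when DynamoDB signals throttling.
async function withBackoff<T>(
  op: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 50,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op()
    } catch (err: any) {
      // Only retry throttling errors; rethrow everything else immediately.
      const throttled = err?.name === 'ProvisionedThroughputExceededException'
      if (!throttled || attempt >= maxRetries) throw err
      // Full jitter: sleep a random duration up to baseDelayMs * 2^attempt.
      const delay = Math.random() * baseDelayMs * 2 ** attempt
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
}
```

A wrapped call would look like `await withBackoff(() => record.save())`, assuming `record.save()` is the throttled operation.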
Wanted to ask before I look into the other solutions, like increasing the capacity. Thanks!