Right now, it looks like the library uses the maximum possible number of map tasks when calculating the target throughput per task: https://github.com/awslabs/emr-dynamodb-connector/blob/ee52fdfbd26567eb644470a69ec54919f2cb990b/emr-dynamodb-hadoop/src/main/java/org/apache/hadoop/dynamodb/write/WriteIopsCalculator.java#L82

```java
maxParallelTasks = Math.min(calculateMaxMapTasks(totalMapTasks), totalMapTasks);
```
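To make the effect concrete, here is a small standalone sketch of how that divisor plays out for our scenario (the numbers and names below are our own assumptions for illustration, not values or code taken from the connector):

```java
// Standalone illustration; values and names are assumptions, not the connector's code.
public class CurrentPerTaskIops {
  public static void main(String[] args) {
    double tableWriteCapacity = 100.0; // provisioned WCU (the table's autoscaling minimum)
    int maxParallelTasks = 48;         // maximum map tasks the connector assumes the cluster can run

    // Current behavior: per-task write rate is derived from the theoretical maximum task count.
    double perTaskIops = tableWriteCapacity / maxParallelTasks; // ~2 writes/sec per task

    // If the job only produces 8 splits, aggregate consumption is ~16 writes/sec,
    // well under the 100 WCU provisioned, so utilization-based autoscaling never fires.
    int actualTasks = 8;
    System.out.printf("per-task: %.2f writes/sec, aggregate: %.2f of %.0f WCU%n",
        perTaskIops, perTaskIops * actualTasks, tableWriteCapacity);
  }
}
```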
But for provisioned mode with autoscaling and a low table minimum throughput (say, 100 WCU), this ends up significantly underestimating the per-task throughput for smaller inputs. That works out fine when the input is large enough that the actual number of mappers is close to the maximum, but for smaller inputs the job can get stuck at a low write rate and never trigger autoscaling.
We've seen larger inputs trigger autoscaling all the way up to 500k, which is what we want. But for a smaller input we saw a case where the job kept writing at 1 task/minute, since there were only 8 tasks instead of the 48 the library assumes, so we never hit the autoscaling threshold.
Is it possible to look at the actual number of splits instead?
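As a rough illustration of what we're asking for (a sketch under assumed names, not a patch against the actual WriteIopsCalculator internals), the divisor could be capped by the real split count for the job:

```java
// Hypothetical sketch: cap the parallelism divisor at the job's actual split count.
// 'actualSplits' would come from the input format at submission time; all names here are assumptions.
public final class SplitAwareIops {
  private SplitAwareIops() {}

  static double perTaskWriteIops(double permittedWriteThroughput,
                                 int assumedMaxParallelTasks,
                                 int actualSplits) {
    // Use the smaller of "what the cluster could run" and "what this job will actually run",
    // but never divide by less than one task.
    int divisor = Math.max(1, Math.min(assumedMaxParallelTasks, actualSplits));
    return permittedWriteThroughput / divisor;
  }

  public static void main(String[] args) {
    // With 8 real splits on a 100-WCU table, each task would target ~12 writes/sec instead of ~2,
    // so consumed capacity approaches provisioned capacity and autoscaling can kick in.
    System.out.println(perTaskWriteIops(100.0, 48, 8));
  }
}
```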