Esadruhn opened this issue 2 years ago
@AurelienGasser you said that size_of_grpc_packet = const_value * number_of_samples_per_task * number_of_tasks_per_batch.

Is the max value of size_of_grpc_packet always the same, or does it depend on a deployment configuration?

If the max value is always the same, and we assume that all tasks have the same number of data samples, then, from the error shown in the description, in this formula:

size_of_grpc_packet = const_value * number_of_samples_per_task * number_of_tasks_per_batch
const_value = 6228668 / (451 * 400) ≈ 34

so the max value of number_of_samples_per_task * number_of_tasks_per_batch should be 4194304 / 34 ≈ 123362, i.e. roughly 120000.

So is it correct to say that batch_size = math.floor(120000 / number_of_samples_per_task) would be a good approximation?
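
A minimal sketch of that estimate, assuming the 4194304-byte gRPC limit and the ~34-byte constant derived above (both numbers come from the error in this thread, not from the backend configuration):

    import math

    # Figures taken from the error in the description; these are assumptions,
    # not values read from the deployment.
    GRPC_MAX_MESSAGE_SIZE = 4194304         # 4 MiB gRPC message size limit
    CONST_VALUE = 6228668 / (451 * 400)     # ≈ 34, derived from the failing batch

    # Maximum value of number_of_samples_per_task * number_of_tasks_per_batch
    MAX_SAMPLES_TIMES_TASKS = GRPC_MAX_MESSAGE_SIZE / CONST_VALUE  # ≈ 120000

    def estimate_batch_size(number_of_samples_per_task: int) -> int:
        # Rough upper bound on the batch size for a given number of samples per task.
        return math.floor(MAX_SAMPLES_TIMES_TASKS / number_of_samples_per_task)

    # With 400 samples per task the estimate is ~300 tasks per batch, which is
    # consistent with the 451-task batch in the error being too large.
    print(estimate_batch_size(400))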
There is no "one size fits all" batch size. It depends on both the number of tasks and the number of inputs.
Would it be feasible to catch this error at the SDK level and lower batch size before retry?
There is no "one size fits all" batch size. It depends on both the number of tasks and the number of inputs.
But if we can have a rule of thumb on which batch size works, we can indicate to the user what value to use. The default batch size used to be 20 and it was very slow, so I think it's good to give an idea of how high the batch size can be for a particular use case.
Would it be feasible to catch this error at the SDK level and lower batch size before retry?
Sure, we can try that; I would do that on top of documenting the "optimal batch size". If we retry automatically, is there a risk that the backend/orchestrator is still busy with the previous call and the retry fails?
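
A rough sketch of what catching the error at the SDK level and retrying with a smaller batch could look like; submit_batch and is_too_large are hypothetical placeholders for the real SDK submission call and error check, not existing substra functions:

    from typing import Callable, Sequence

    def submit_with_autobatching(
        tasks: Sequence,
        submit_batch: Callable[[Sequence], None],   # placeholder for the real SDK call
        is_too_large: Callable[[Exception], bool],  # placeholder check for the gRPC size error
        batch_size: int = 500,
    ) -> None:
        # Submit tasks in batches, halving the batch size whenever the backend
        # rejects a batch because the gRPC message is too large.
        i = 0
        while i < len(tasks):
            batch = tasks[i : i + batch_size]
            try:
                submit_batch(batch)
                i += len(batch)
            except Exception as err:
                if batch_size > 1 and is_too_large(err):
                    # Retry the same tasks with a smaller batch.
                    batch_size = max(1, batch_size // 2)
                else:
                    raise

This sketch does not address the concern above about the backend still being busy; adding a short back-off before each retry would be one way to mitigate that.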
We should also expose the batch size in substrafl; today only the autobatching argument is exposed, so when it is True, the default batch size is used.
Do we have data on how much slower it is to have a small batch size vs a big batch size?
@tanguy-marchand from what you said, 15 rounds, 136 tuples, with 1257 data samples took 5min to submit?
(1257 samples per task or in total? The number that we are interested in is the number of samples per task)
Is the max value of size_of_grpc_packet always the same, or does it depend on a deployment configuration?
It's always the same. We could change it but have chosen not to so far. The limit serves the purpose of limiting the load on the server and avoiding resource starvation.
@tanguy-marchand from what you said, 15 rounds, 136 tuples, with 1257 data samples took 5min to submit?
A CP using 2 centers (with respectively 1257 and 999 samples) and 30 rounds (overall 512 tuples) takes 17 minutes to submit.
OK, so as a first fix what we can do is document the rule of thumb:

batch_size = math.floor(120000 / number_of_samples_per_task)

and then discuss a better solution: I think the best would be if we were able to calculate the batch size automatically, but I am worried it would slow down the execution, and we'll want to keep being able to override the value if for any reason the calculation is wrong.
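
As a worked example, applying this rule of thumb to the sample counts reported above (this is just the formula evaluated on the numbers from this thread, not a measured limit):

    import math

    # Rule-of-thumb batch sizes for the two centers mentioned in this thread.
    for number_of_samples_per_task in (1257, 999):
        batch_size = math.floor(120000 / number_of_samples_per_task)
        print(number_of_samples_per_task, batch_size)  # 1257 -> 95, 999 -> 120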
Summary
When we add a compute plan with N tasks, we can set autobatching to True and set the batch size. This submits the tasks to the backend in batches of size batch_size. The fastest option is to increase the batch size as much as possible without getting backend errors. The default batch size is 500; the question here is: how to find the maximal batch size we can use?
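
For context, a hedged illustration of where these settings plug in; the argument names follow the wording of this issue and may not match the exact substra SDK signature for a given version, and client and compute_plan_spec are assumed to be defined elsewhere:

    # Sketch only: argument names taken from this issue, not from the SDK reference.
    # `client` is an already-configured substra.Client and `compute_plan_spec` holds
    # the compute plan with N tasks.
    compute_plan = client.add_compute_plan(
        compute_plan_spec,
        auto_batching=True,   # submit the tasks to the backend in batches
        batch_size=500,       # current default; higher is faster, until the backend rejects it
    )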
What happens when the batch size is too big
When the batch size is too big (451 tasks * 400 data samples per task), we get the following error