Substra / substra

Low-level Python library used to interact with a Substra network
https://docs.substra.org
Apache License 2.0

`add_compute_plan` - which batch size to use #282

Open Esadruhn opened 2 years ago

Esadruhn commented 2 years ago

Summary

When we add a compute plan with N tasks, we can set autobatching to True and set the batch size. This submits the tasks to the backend by batches of size batch_size. The fastest option is to increase the batch size as much as possible without getting backend errors.

The default batch size is 500, the question here is: how to find the maximal batch size we can use?
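For context, a minimal sketch of the call this issue is about. The client setup and the exact parameter names (`auto_batching`, `batch_size`) are assumptions based on this discussion; check the `Client.add_compute_plan` signature of your SDK version.

```python
import substra

# Placeholder connection details, not a real deployment
client = substra.Client(url="https://my-org.example.com", token="***")

compute_plan_spec = ...  # placeholder: the spec describing the N tasks to register

compute_plan = client.add_compute_plan(
    compute_plan_spec,
    auto_batching=True,  # submit the tasks to the backend in batches rather than all at once
    batch_size=500,      # default discussed here; the question is how high this can safely go
)
```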

What happens when the batch size is too big

When the batch size is too big (451 tasks * 400 data samples per task), we get the following error:

Requests error status 429: {"message":"grpc: received message larger than max (6228668 vs. 4194304)"}
Traceback (most recent call last):
  File "HIDDEN/substra/sdk/backends/remote/rest_client.py", line 114, in __request
    r.raise_for_status()
  File "HIDDEN/requests/models.py", line 960, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: HIDDEN/task/bulk_create/
Esadruhn commented 2 years ago

@AurelienGasser you said that size_of_grpc_packet = const_value * number_of_samples_per_task * number_of_tasks_per_batch

Rule of thumb - optimal batch size

If the max value is always the same, and if we assume that all tasks have the same number of data samples, then from the error shown in the description we can estimate the constant in the formula `size_of_grpc_packet = const_value * number_of_samples_per_task * number_of_tasks_per_batch`:

`const_value = 6228668 / (451 * 400) ≈ 34`, so the maximum value of `number_of_samples_per_task * number_of_tasks_per_batch` should be about `4194304 / 34 = 123361 ≈ 120000`.

So is it correct to say that `batch_size = math.floor(120000 / number_of_samples_per_task)` would be a good approximation?
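Written out as a small helper, the rule of thumb above would look something like this. The constant `120_000` comes directly from the arithmetic derived from the error message; it is an estimate, not a value exposed by the backend.

```python
import math

# Derived from the 429 error above (an estimate, not a backend-provided value):
# ~34 bytes per (task, sample) pair, gRPC limit of 4194304 bytes,
# hence 4194304 / 34 ≈ 123361, rounded down to 120000 for safety.
MAX_TASKS_TIMES_SAMPLES = 120_000

def rule_of_thumb_batch_size(samples_per_task: int) -> int:
    """Approximate the largest batch size that stays under the gRPC message limit."""
    return max(1, math.floor(MAX_TASKS_TIMES_SAMPLES / samples_per_task))

# Example: with 400 samples per task, this suggests batches of ~300 tasks.
print(rule_of_thumb_batch_size(400))  # 300
```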

mblottiere commented 2 years ago

There is no "one size fits all" batch size. It depends on both the number of tasks and the number of inputs.

Would it be feasible to catch this error at the SDK level and lower batch size before retry?
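For illustration only, a sketch of what such an SDK-level fallback could look like. Where the 429 surfaces, the exception type, and the halving strategy are all assumptions; this is not current SDK behaviour.

```python
import requests

def submit_with_adaptive_batching(client, compute_plan_spec, batch_size=500, min_batch_size=1):
    """Retry compute plan submission with a smaller batch size when the gRPC payload is too large."""
    while batch_size >= min_batch_size:
        try:
            return client.add_compute_plan(
                compute_plan_spec,
                auto_batching=True,
                batch_size=batch_size,
            )
        except requests.exceptions.HTTPError as exc:
            if exc.response is not None and exc.response.status_code == 429:
                batch_size //= 2  # halve the batch so each request carries a smaller payload
            else:
                raise
    raise RuntimeError("Could not submit the compute plan even with the minimum batch size")
```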

Esadruhn commented 2 years ago

> There is no "one size fits all" batch size. It depends on both the number of tasks and the number of inputs.

But if we can have a rule of thumb on which batch size works, we can tell the user what value to use. The default batch size used to be 20 and it was very slow, so I think it's good to give an idea of how high the batch size can be for a particular use case.

> Would it be feasible to catch this error at the SDK level and lower batch size before retry?

Sure, we can try that; I would do that on top of documenting the "optimal batch size". If we retry automatically, is there a risk that the backend/orchestrator is still busy because of the previous call and fails again?

We should also expose the batch size in substrafl; today only the autobatching argument is exposed, so when it is set to True, the default batch size is used.

RomainGoussault commented 2 years ago

Do we have data on how much slower it is to have a small batch size vs a big batch size?

Esadruhn commented 2 years ago

@tanguy-marchand from what you said, 15 rounds, 136 tuples, with 1257 data samples took 5min to submit?

(1257 samples per task or in total? The number that we are interested in is the number of samples per task)

AurelienGasser commented 2 years ago

> Is the max value of `size_of_grpc_packet` always the same, or does it depend on a deployment configuration?

It's always the same. We could change it but have chosen not to so far. The limit serves the purpose of limiting the load on the server and avoiding resource starvation.

tanguy-marchand commented 2 years ago

> @tanguy-marchand from what you said, 15 rounds, 136 tuples, with 1257 data samples took 5min to submit?

A CP using 2 centers (with 1257 and 999 samples respectively) and 30 rounds (512 tuples overall) takes 17 minutes to submit.

Esadruhn commented 2 years ago

OK, so as a first fix what we can do is:

then discuss a better solution:

I think the best would be if we were able to calculate it automatically, but I am worried it would slow down the execution, and we'll want to keep being able to override it in case the calculation is wrong for any reason.
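One possible way to "calculate it" without a round trip to the backend would be to measure the serialized size of the task payloads locally and scale against the gRPC limit. This is only a sketch under the assumption that the JSON-serialized size is a reasonable proxy for the actual gRPC message size; the safety factor is arbitrary.

```python
import json
import math

GRPC_MAX_MESSAGE_BYTES = 4_194_304  # limit reported in the 429 error above
SAFETY_FACTOR = 0.8                 # headroom for protobuf/envelope overhead (assumption)

def estimate_batch_size(task_payloads: list[dict]) -> int:
    """Estimate how many tasks fit in one gRPC message, based on their serialized size."""
    avg_task_bytes = sum(len(json.dumps(t)) for t in task_payloads) / len(task_payloads)
    return max(1, math.floor(GRPC_MAX_MESSAGE_BYTES * SAFETY_FACTOR / avg_task_bytes))
```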