Closed: spotlightgit closed this issue 4 years ago
There is no documentation outside of this repo. Chunk size is the number of iterations sent to each worker to run at a time. If each iteration is fast, a larger chunk size reduces the amount of communication with the workers.
The message you're reporting is fine. It just means that the time for one iteration exceeds the threshold, so the chunk size is set to 1. This is expected behaviour.
You can set the chunk size to 1 in the call to batch_job_submit by setting the chunk_minmax argument to [1 1]. This will save running that check and remove the message.
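As a sketch of the suggestion above (the argument order here is an assumption, not checked against the toolbox source — consult the help text of batch_job_submit for the real signature):

```matlab
% Forcing both the minimum and maximum chunk size to 1 skips the
% chunk-size estimation step, which also suppresses the
% "Failed to estimate chunk size. Setting to 1." message.
chunk_minmax = [1 1];  % [min_chunk_size max_chunk_size]

% Hypothetical call shape; job_dir, @slow_func, input and timeout
% stand in for whatever you already pass to batch_job_submit.
s = batch_job_submit(job_dir, @slow_func, input, timeout, chunk_minmax);
```

With both bounds fixed at 1 there is nothing to estimate, so the toolbox simply sends one iteration per chunk.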
Hey, I just tried to adapt the example to get used to this toolbox. I want to parallelize using "batch_job_distrib". With the example, everything works fine. However, after replacing "slow_func" with my own function (an m-file script which evaluates a Simulink model), I got this command-line output: "Failed to estimate chunk size. Setting to 1." Increasing the waiting time (in batch_job_submit, line 144) does not change anything. Any idea why this is happening? I use MATLAB 2016b on Windows 10. Is there any literature (maybe a paper) which gives a deeper description of this toolbox? What is a chunk size? Why is it calculated?