This is local search, so optimality is lost anyway. It is much more efficient to decompose the whole problem logically (time windows, geographical data...) than to let the solver do it. You can always have sequences of sub-models that overlap.
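A minimal sketch of what that kind of decomposition might look like in Python, assuming you already have one distance matrix per geographic cluster (the clustering itself, the helper names, the vehicle counts and the time limits are placeholders, not anything built into OR-Tools). Each cluster becomes an ordinary sub-VRP, and the sub-VRPs can then be solved on separate cores:

```python
# Sketch only: split a large VRP into geographic clusters and solve each
# cluster as an independent sub-problem, one process per cluster.
from concurrent.futures import ProcessPoolExecutor

from ortools.constraint_solver import pywrapcp, routing_enums_pb2


def solve_sub_vrp(sub_matrix, num_vehicles, depot=0, time_limit_s=10):
    """Solves one cluster of nodes as an ordinary VRP."""
    manager = pywrapcp.RoutingIndexManager(len(sub_matrix), num_vehicles, depot)
    routing = pywrapcp.RoutingModel(manager)

    def distance_callback(from_index, to_index):
        return sub_matrix[manager.IndexToNode(from_index)][
            manager.IndexToNode(to_index)]

    transit = routing.RegisterTransitCallback(distance_callback)
    routing.SetArcCostEvaluatorOfAllVehicles(transit)

    params = pywrapcp.DefaultRoutingSearchParameters()
    params.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
    params.time_limit.seconds = time_limit_s

    solution = routing.SolveWithParameters(params)
    routes = []
    if solution:
        for v in range(num_vehicles):
            index = routing.Start(v)
            route = []
            while not routing.IsEnd(index):
                route.append(manager.IndexToNode(index))
                index = solution.Value(routing.NextVar(index))
            routes.append(route)
    return routes


def solve_decomposed(sub_matrices, vehicles_per_cluster):
    """Solves every cluster in its own process so all cores do useful work,
    trading global optimality for wall-clock time."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(solve_sub_vrp, sub_matrices, vehicles_per_cluster))
```

This is exactly the trade-off discussed in this thread: each sub-model is solved independently, so the combined result is not globally optimal, but every core is kept busy.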
Short answer, no progress. Long answer, you are asking the solver to decompose anyway :-)
Laurent Perron | Operations Research | lperron@google.com | (33) 1 42 68 53 00
On Tue, Apr 23, 2019 at 18:01, Guy notifications@github.com wrote:
Hello, I added the following comment to #778 https://github.com/google/or-tools/issues/778 a few days back, but maybe it's hidden, as that issue was closed.
>> Has there been any progress on this front?
Running a VRP (with many nodes) on a powerful server, different CPU cores participate during different 'time slices', but most of the time the cores are idle. The result is no real improvement over a single-core CPU.
Is there any way to structure the model so that this inherent parallelism actually becomes effective?
Of course, decomposing the problem and solving separately is possible but affects optimality.
Guy
My concern is that 'slicing' the problem gives the solver a smaller set of possible combinations to choose from and optimize over, so the result may be worse. Of course, it also lowers the computation requirements...
I was wondering whether there is a way to multithread low-level tasks, for example checking the set of constraints for a given node.
Finally, for "You can always have sequences of sub-models that overlap." - can you elaborate?
Hello Laurent
You can always have sequences of sub-models that overlap.
Can you explain what you mean by this?
Take nurse scheduling, for example: you would split 60 nurses into batches of 5 and solve these sub-models in sequence. But how exactly would you let these sub-models "overlap"?
Thank you for your explanation
Adi
Do 2 weeks at a time: weeks 0 and 1, then 1 and 2, then 2 and 3...
Laurent Perron | Operations Research | lperron@google.com | (33) 1 42 68 53 00
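A rough sketch of that rolling, overlapping-window idea for the nurse example, using CP-SAT (the variables, the single toy constraint and the solve_window helper are invented purely for illustration; the point is only the "solve weeks k and k+1, then freeze week k" pattern):

```python
# Sketch of a rolling horizon: solve weeks (k, k+1) together, freeze week k's
# assignments, then slide the window forward by one week.
from ortools.sat.python import cp_model

NUM_NURSES, NUM_WEEKS, DAYS, SHIFTS = 60, 4, 7, 3


def solve_window(first_week, fixed):
    """Builds and solves a two-week sub-model; `fixed` holds the decisions
    frozen by earlier windows as {(nurse, week, day, shift): 0 or 1}."""
    model = cp_model.CpModel()
    weeks = (first_week, first_week + 1)
    x = {}
    for n in range(NUM_NURSES):
        for w in weeks:
            for d in range(DAYS):
                for s in range(SHIFTS):
                    x[n, w, d, s] = model.NewBoolVar(f'x_{n}_{w}_{d}_{s}')
                    if (n, w, d, s) in fixed:  # honour earlier windows
                        model.Add(x[n, w, d, s] == fixed[n, w, d, s])
    # Toy constraint: at most one shift per nurse per day.
    for n in range(NUM_NURSES):
        for w in weeks:
            for d in range(DAYS):
                model.Add(sum(x[n, w, d, s] for s in range(SHIFTS)) <= 1)
    solver = cp_model.CpSolver()
    solver.parameters.max_time_in_seconds = 30
    solver.Solve(model)
    return {key: int(solver.Value(var)) for key, var in x.items()}


fixed = {}
for week in range(NUM_WEEKS - 1):  # windows (0,1), (1,2), (2,3), ...
    values = solve_window(week, fixed)
    # Freeze only the first week of the window; the second, overlapping week
    # stays free so the next window can still adjust it.
    fixed.update({key: v for key, v in values.items() if key[1] == week})
```

The overlapping week is presumably what keeps consecutive windows consistent with each other, as opposed to simply splitting the horizon into disjoint batches.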
??