google / or-tools

Google's Operations Research tools:
https://developers.google.com/optimization/
Apache License 2.0

OR-Tools, Increase Resource Consumption #561

Closed Migaelh closed 6 years ago

Migaelh commented 6 years ago

This is not an issue but rather a question.

I am using OR-Tools (the C# version) with a CVRPTW solution.

It is a fantastic library! However, I am running the solution with 1,400-2,000 locations, which translates to roughly 1,960,000-4,000,000 possible distances between the locations, so the processing can take a while.

Is there a way to give the application more resources to speed up this processing? At the moment it uses only 18%-20% of my CPU and less than 8% of memory. I want to increase these usages substantially, as I have a dedicated server for this solution.

My current specs are: CPU: Intel i7 (7th gen), quad-core 2.8 GHz; RAM: 16 GB DDR3; Disk: 500 GB SSD.
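The quadratic growth behind those numbers is easy to verify: with n locations there are n × n ordered origin-destination pairs. A quick sanity check (plain Python, not OR-Tools code):

```python
# The distance matrix grows quadratically with the number of locations:
# n locations give n * n ordered origin-destination pairs.
for n in (1400, 2000):
    print(f"{n} locations -> {n * n:,} distance entries")
# 1400 locations -> 1,960,000 distance entries
# 2000 locations -> 4,000,000 distance entries
```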

Mizux commented 6 years ago

Hi, which solver do you use?

I guess some solvers are not multi-threaded, which could explain the CPU usage. AFAIK, free OSS solvers are usually single-threaded, while commercial ones may be multi-threaded.

note: Cbc (CoinUtils) checks for pthread, but I don't know why...

Migaelh commented 6 years ago

I am using the ConstraintSolver (as this was used in the examples).

I am also using a 2D array for the distances; would a hashmap improve performance instead? (There are about 2,000,000 elements in the 2D array.)


furnon commented 6 years ago

A 2D matrix should be the fastest option. There may be some overhead because the distance function is in C# while the solver runs in C++. That overhead can be reduced (at the cost of memory consumption) by telling the solver to cache distances (and any other callback, for that matter). To do this, pass a RoutingModelParameters (built from DefaultModelParameters()) when building your RoutingModel, and set max_callback_cache_size to a value greater than the number of nodes in the problem.
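The memory-for-speed trade-off described here (cache each callback result so the expensive cross-language call happens at most once per pair) can be illustrated with a plain-Python sketch. This is not the OR-Tools API; the `distance` function and its cost factor are hypothetical stand-ins:

```python
from functools import lru_cache

# Hypothetical "expensive" distance callback: in the real setting the cost is
# not the arithmetic itself but the C# <-> C++ transition on every call.
def distance(i, j):
    return abs(i - j) * 7  # stand-in for a real distance computation

# Caching wrapper: trades memory (one entry per (i, j) pair ever requested)
# for speed (each pair is computed at most once) -- the same trade-off
# max_callback_cache_size controls in the solver.
cached_distance = lru_cache(maxsize=None)(distance)

print(cached_distance(3, 10))             # computed, then cached -> 49
print(cached_distance(3, 10))             # served from the cache  -> 49
print(cached_distance.cache_info().hits)  # -> 1
```

With ~2,000,000 pairs, an unbounded cache like this is exactly why memory consumption jumps: every distinct pair the solver ever asks about stays resident.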


Migaelh commented 6 years ago

Thank you, I will try it.


Migaelh commented 6 years ago

I have tried @furnon's suggestion; it did use more memory (up from 17 MB to 1.3 GB), and I managed to solve a 1,400-node instance in 16 minutes.

Thanks for the help