Closed: taox142 closed this issue 7 months ago.
Has anyone had the same issue before? It is quite tricky, as this issue is not raised as an error or exception; otherwise it could be handled by try/except logic in Python.
Dear @taox142, sorry for the late reply.
There is probably a bug in the code. It would be tricky to debug, as it appears after 2000 seconds of runtime. What is the smallest instance for which you encountered this issue? I would need a JSON file for that smallest instance.
One workaround is to deactivate dynamic bucket graph regeneration through advanced parameterisation (by specifying a config_file). I need my other computer to look up the exact parameter to change; I will do it later.
Dear Ruslan (@rrsadykov),
Thank you for looking into this issue. I've attached a smaller case with 286 nodes. This instance will trigger the issue very quickly.
Thanks again. I would love to learn more about the workaround solution.
I have managed to reproduce the bug, thank you for spotting it! I will debug when I have time.
Meanwhile, bc.cfg.txt is the configuration file to give to VRPSolverEasy, through config_file property to workaround the bug.
I have resolved the bug; it is tolerance-related, but I do not know how long it will take to propagate this code change to VRPSolverEasy. Meanwhile, I think you can avoid the bug if you round your time window bounds to fewer than 6 digits after the decimal point.
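If you want to try the rounding workaround, here is a minimal sketch, assuming your time window bounds are plain (begin, end) float pairs (the data values below are made up for illustration; adapt this to however you build your VRPSolverEasy model):

```python
# Hypothetical time window bounds with very high floating-point precision.
time_windows = [(0.0, 1234.5678901), (150.1234567, 2000.9999999)]

# Round every bound to 3 digits after the decimal point (fewer than 6)
# before passing the values to the model.
rounded = [(round(begin, 3), round(end, 3)) for begin, end in time_windows]
print(rounded)  # → [(0.0, 1234.568), (150.123, 2001.0)]
```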
@najibprog, I have updated the RCSP solver code to version 0.6.11. Does it take long to update the BaPCod binaries with it? We should also include this advice in the documentation: avoid using very high precision floating point numbers with VRPSolverEasy.
Hi Ruslan (@rrsadykov),
Thank you for your help. Changing the config does solve the issue.
But the second solution you suggested didn't work for me. I tried rounding the time window bounds to 5, 4, 3, and even 1 digit after the decimal point, but none of these solved the issue.
Below please find the model JSON file where I rounded the numbers to 3 digits.
Regards, Tao
One more question, what feature do we lose when switching off the dynamic adjustment bucket steps? Will the solver be slower when we do that?
Thank you for testing the rounding and for the JSON file. I will investigate this issue further.
We can lose some performance on some instances, but it is not dramatic. You can read about this technique in this paper: https://inria.hal.science/hal-02378624/document; search for "dynamic adjustment of parameter ξ". Figure 6(a) gives the performance profile for this technique. Parameter RCSPnumberOfBucketsPerVertex in the config file corresponds to ξ.
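For reference, these config files are plain text with one `NAME = value` pair per line. A sketch of generating such a file from Python; the parameter value below is illustrative only, not the contents of the actual bc.cfg.txt:

```python
# Write a minimal BaPCod-style config file: one "NAME = value" per line.
# The value 25 is purely illustrative; check your solver's defaults.
params = {"RCSPnumberOfBucketsPerVertex": 25}

with open("bc.cfg", "w") as f:
    for name, value in params.items():
        f.write(f"{name} = {value}\n")
```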
Thank you. I will check out the paper to learn more details.
I have checked. It seems this bug cannot be worked around by modifying the data, only by switching off dynamic bucket graph regeneration in the config file. The bug is fixed in the code, but I do not know when the new version will be released.
Hi @rrsadykov, thanks for fixing the bug in the RCSP solver. The new version can be published as soon as I return, i.e. in the week of December 4th.
Best regards, Najib
Hi Ruslan (@rrsadykov),
Need your help again. I ran into this instance: if I run without bc.cfg.txt, it returns a feasible solution, but if I set the parameters with the config file, it generates a solution which violates the vehicle capacity constraint.
Please see the instance attached. It is not large, so you should be able to reproduce the issue in seconds. capacity_violation.json
Thanks again, Tao
Dear @taox142,
I was not able to reproduce this issue so far. Can you please add DEFAULTPRINTLEVEL = 0
to the config file, run again, and send the output of VRPSolverEasy? Is the infeasibility detected by VRPSolverEasy, or did you detect it yourself? If you have the infeasible solution, please share it too.
@najibprog is back to office next week so I think VRPSolverEasy will be updated very soon to the new version which addresses the initial bug.
Ruslan
Dear @rrsadykov ,
Sorry for the late reply. Let me clarify the issue.
The model is the same, but running with and without the config file generates two different solutions. I checked: the one generated with the config file violates the capacity constraints.
Running with config.txt, the model output is in "with config file output.txt"; cost = 48688.6.
Running without the config file (using the default solver parameters), the model output is in "without config file output.txt"; cost = 49188.2.
In the 48688.6 solution, I used the following code to check the capacity feasibility:
for index, route in enumerate(model.solution.routes):
    print(f"Route No.: {index}.")
    print(f"Ids : {route.point_ids}.")
    print(f"Points demand: {[model.points[point].demand for point in route.point_ids]}, "
          f"route total demand = {sum(model.points[point].demand for point in route.point_ids)}.")
    print(f"Load : {route.cap_consumption}.\n")
And I found a few routes with total demand larger than 200 (which is my vehicle capacity). For example:
Route No.: 68.
Ids : [0, 176, 177, 178, 179, 180, 169, 0].
Points demand: [0, 60, 60, 20, 20, 30, 30, 0], route total demand = 220.
Load : [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0].
Also notice that the cap_consumption values are all 0; based on my experience running other instances, they should be the demands of the nodes in the route.
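The check above can be condensed into a small self-contained helper (plain lists and dicts stand in for the VRPSolverEasy objects; the data is the violating route from the log):

```python
def find_capacity_violations(routes, demands, capacity):
    """Return (route_index, total_demand) for each route whose demand exceeds capacity."""
    violations = []
    for index, point_ids in enumerate(routes):
        total = sum(demands[p] for p in point_ids)
        if total > capacity:
            violations.append((index, total))
    return violations

# Route 68 from the log above: total demand 220 > vehicle capacity 200.
routes = [[0, 176, 177, 178, 179, 180, 169, 0]]
demands = {0: 0, 176: 60, 177: 60, 178: 20, 179: 20, 180: 30, 169: 30}
print(find_capacity_violations(routes, demands, capacity=200))  # → [(0, 220)]
```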
Great that the new version will come out soon. But I would really appreciate it if you could help look into this issue; it has been a great learning experience for me.
Thanks, Tao
Dear Tao,
Thank you for taking the time to test our package. We have released a new version of BaPCod, so you can fill out the form here to get the new package binaries. I hope this version will solve your issue.
Best regards, Najib
Dear Najib,
Thank you for releasing the new version. I can't wait to test it out. I will close the issue later if the new version solves this issue.
Best, Tao
Dear Najib and Ruslan,
I have tested the new version and confirm that copying over the new libbapcod-shared.so file to VRPSolverEasy solves the original issue.
I will close this issue now. Thank you both for your help.
Best regards, Tao
Hello,
I am running a pretty large VRPTW instance, and from time to time I come across this issue: the model seems to stop running after showing a message such as "Bucket graph for G_1 is regenerated as bucket steps were adjusted for 861 vertices".
What does this message mean? Does it mean the graph is too large? The full log is attached below.
Thank you and have a good time! Tao