cbhua opened 3 months ago
After some experiments with the original GLOP implementation, I found that none of the current discrepancies (maximum-number-of-vehicles constraint; polar-coordinate embedding; input-graph sparsification; ...) explains the training failure on CVRP100. It's weird @Furffico
Could you push the latest version of GLOP?
We were experimenting with the debug version, which incorporates part of the code from the official GLOP implementation. The "pure RL4CO" version still fails to learn and requires further debugging.
I see. Given that it's still in RL4CO (just not fully refactored), I'd suggest merging this now, and merging the pure RL4CO version once it's ready.
What do you think?
Cc: @cbhua
Since GLOP worked in the submission version, we will clean up this branch and then push a clean, final implementation.
Description
This PR adds an implementation of Global and Local Optimization Policies (GLOP), together with an implementation of the Shortest Hamiltonian Path Problem (SHPP) environment.
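For context, SHPP asks for the shortest path that visits every node exactly once with fixed start and end nodes (unlike TSP, which closes a cycle). A minimal brute-force reference solver, sketched below, can serve as a ground-truth check when testing the SHPP environment on tiny instances; it is not part of this PR, and the function name is illustrative.

```python
from itertools import permutations
import math

def shpp_bruteforce(coords, start=0, end=None):
    """Brute-force SHPP: visit all nodes exactly once, starting at
    `start` and ending at `end`. `coords` is a list of (x, y) tuples.
    Exponential in n, so only useful as a ground-truth oracle on
    tiny instances (n <= ~10).
    """
    n = len(coords)
    if end is None:
        end = n - 1
    inner = [i for i in range(n) if i not in (start, end)]

    def dist(a, b):
        return math.dist(coords[a], coords[b])

    best_len, best_path = float("inf"), None
    for perm in permutations(inner):
        path = (start, *perm, end)
        length = sum(dist(path[i], path[i + 1]) for i in range(n - 1))
        if length < best_len:
            best_len, best_path = length, path
    return best_path, best_len

# Sanity check: four collinear points, start at node 0, end at node 3.
# The optimal path just walks along the line: 0 -> 1 -> 2 -> 3, length 3.0.
coords = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
path, length = shpp_bruteforce(coords, start=0, end=3)
```

Comparing the environment's reward for a rolled-out tour against this oracle on small random instances is a quick way to catch sign or masking bugs in the SHPP environment.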
Motivation and Context
GLOP is an important non-autoregressive (NAR) model for routing problems. For more details, please refer to the original paper.
Types of changes
Checklist
⚠️ Working on Debugging
The current implementation of GLOP is runnable, but it cannot learn.
I added a test notebook at examples/other/3-glop.ipynb. It includes tests for the SHPP environment, a greedy rollout of the untrained GLOP policy (with visualizations for better understanding), and a training launch for GLOP. Please play with it and have a look. Compared with the original GLOP, the following components are not implemented yet:
I will add these missing parts soon. Here are some ideas that may help reproduce the results:
If @henry-yeh @Furffico have time, could you take a look at the implementation? We need to closely reproduce GLOP's results soon.