ai4co / rl4co

A PyTorch library for all things Reinforcement Learning (RL) for Combinatorial Optimization (CO)
https://rl4.co
MIT License

[Feat] Updating the evaluation script #188

Closed cbhua closed 3 months ago

cbhua commented 3 months ago

Description

Updating rl4co/tasks/eval.py to work with the latest version of the library. I created this quick PR to write down the usage tutorial.


Tutorial for the evaluation

Step 1. Prepare your pre-trained model checkpoint and the test instance data file, and put them wherever you prefer. For example, we will test the AttentionModel on TSP50 (a quick sanity check of the data file is sketched after the directory tree below):

.
├── rl4co/
│   └── ...
├── checkpoints/
│   └── am-tsp50.ckpt
└── data/
    └── tsp/
        └── tsp50_test_seed1234.npz
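Before running the evaluation, you can optionally verify that the data file loads as expected. Below is a minimal sketch, assuming the layout above and that the test instances are stored as a standard NumPy .npz archive (the exact key names depend on how the file was generated):

import numpy as np

# Path follows the layout above; adjust it to your setup.
data_path = "data/tsp/tsp50_test_seed1234.npz"

# List the stored arrays and their shapes; for TSP50 we would expect
# node coordinates shaped roughly [num_instances, 50, 2].
data = np.load(data_path)
for key in data.files:
    print(key, data[key].shape)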

Step 2. Run eval.py with your customized settings. For example, let's use the sampling method with a top_p=0.95 sampling strategy:

python rl4co/tasks/eval.py --problem tsp --data_path data/tsp/tsp50_test_seed1234.npz --model AttentionModel --ckpt_path checkpoints/am-tsp50.ckpt --method sampling --top_p 0.95
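Before a long run, it can also help to check that the checkpoint restores cleanly in your environment. Below is a minimal sketch, assuming the checkpoint was produced by rl4co's AttentionModel (a LightningModule) with its hyperparameters saved alongside, so the standard Lightning load_from_checkpoint call can rebuild it:

from rl4co.models import AttentionModel

# Restore the model on CPU just to confirm the checkpoint is compatible
# with the installed rl4co version. This assumes the hyperparameters were
# stored in the checkpoint; otherwise they must be passed explicitly.
model = AttentionModel.load_from_checkpoint(
    "checkpoints/am-tsp50.ckpt", map_location="cpu"
)
print(type(model.policy).__name__)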

You can check rl4co/tasks/eval.py for the full list of supported parameters and their hints.

Step 3. If you want to launch several evaluations with various parameters, you may refer to the following example:
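For instance, here is a minimal sketch that loops over a few decoding settings and launches the CLI shown above via subprocess. Only sampling with top_p=0.95 is demonstrated in this tutorial; the greedy method and the second top_p value are illustrative assumptions, so check eval.py for the options your version actually supports:

import subprocess

# Illustrative settings: only "sampling" with top_p=0.95 is confirmed above;
# the other entries are assumptions used to show the looping pattern.
settings = [
    {"method": "greedy"},
    {"method": "sampling", "top_p": 0.95},
    {"method": "sampling", "top_p": 0.99},
]

for s in settings:
    cmd = [
        "python", "rl4co/tasks/eval.py",
        "--problem", "tsp",
        "--data_path", "data/tsp/tsp50_test_seed1234.npz",
        "--model", "AttentionModel",
        "--ckpt_path", "checkpoints/am-tsp50.ckpt",
        "--method", s["method"],
    ]
    if "top_p" in s:
        cmd += ["--top_p", str(s["top_p"])]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)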

🙌 I will add a notebook for loading the results and computing some statistics soon.