FeiLiu36 / MTNCO

Multi-task learning for routing problems
MIT License

Bug: shape misalignment of `route_open` #1

Open lzr123 opened 2 months ago

lzr123 commented 2 months ago

This code has a bug: it raises an error when the POMO size and the problem size are not equal. Here is the error message:

RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 50 but got size 100 for tensor number 4 in the list.

The error is raised by the following line:

input_cat = torch.cat([encoded_last_node, load[:, :, None], time[:, :, None],
                               length[:, :, None], route_open[:, :, None]], dim=2)

at line 244 of VRPModel.py. The cause appears to be that the shape of `route_open` cannot be aligned with the other tensors in `torch.cat`: `route_open` has shape `(batch_size, problem_size)`, while the other tensors have shape `(batch_size, POMO_size)`. The mismatch is likely caused by forgetting to reset `route_open` in `VRPEnv.reset()`:

    def reset(self):
        self.selected_count = 0
        self.current_node = None
        # shape: (batch, pomo)
        self.selected_node_list = torch.zeros((self.batch_size, self.pomo_size, 0), dtype=torch.long)
        # shape: (batch, pomo, 0~)

        self.at_the_depot = torch.ones(size=(self.batch_size, self.pomo_size), dtype=torch.bool)
        # shape: (batch, pomo)
        self.load = torch.ones(size=(self.batch_size, self.pomo_size))
        # shape: (batch, pomo)
        self.time = torch.zeros(size=(self.batch_size, self.pomo_size))
        # shape: (batch, pomo)
        self.length = 3.0 * torch.ones(size=(self.batch_size, self.pomo_size))
        # shape: (batch, pomo)
        self.visited_ninf_flag = torch.zeros(size=(self.batch_size, self.pomo_size, self.problem_size + 1))
        # shape: (batch, pomo, problem+1)
        self.ninf_mask = torch.zeros(size=(self.batch_size, self.pomo_size, self.problem_size + 1))
        # shape: (batch, pomo, problem+1)
        self.finished = torch.zeros(size=(self.batch_size, self.pomo_size), dtype=torch.bool)
        # shape: (batch, pomo)

        # BUG: self.route_open is never re-initialized to shape (batch, pomo) here
        reward = None
        done = False
        return self.reset_state, reward, done
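A minimal sketch of the proposed fix, assuming `route_open` should follow the same `(batch, pomo)` convention as `load`, `time`, and `length`. The class below is a simplified stand-in for `VRPEnv`, and the initial value of `route_open` (ones) is a placeholder; the actual default in the repository may differ:

```python
import torch

class VRPEnvSketch:
    """Simplified stand-in for VRPEnv, illustrating the proposed reset() fix."""

    def __init__(self, batch_size, pomo_size, problem_size):
        self.batch_size = batch_size
        self.pomo_size = pomo_size
        self.problem_size = problem_size

    def reset(self):
        # Existing per-rollout state, all shaped (batch, pomo).
        self.load = torch.ones(self.batch_size, self.pomo_size)
        self.time = torch.zeros(self.batch_size, self.pomo_size)
        self.length = 3.0 * torch.ones(self.batch_size, self.pomo_size)
        # Proposed fix: allocate route_open per POMO rollout, not per
        # problem node. Initial value of 1.0 is an assumption.
        self.route_open = torch.ones(self.batch_size, self.pomo_size)

env = VRPEnvSketch(batch_size=4, pomo_size=50, problem_size=100)
env.reset()
# With matching shapes, the torch.cat at VRPModel.py line 244 succeeds:
cat = torch.cat([env.load[:, :, None], env.time[:, :, None],
                 env.length[:, :, None], env.route_open[:, :, None]], dim=2)
print(cat.shape)  # torch.Size([4, 50, 4])
```

With `route_open` left at shape `(batch, problem)` (here, 100 instead of 50 along dim 1), the same `torch.cat` call reproduces the reported `RuntimeError`.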

I sincerely hope the author fixes this bug so that other people can follow this work more easily.

FeiLiu36 commented 1 month ago

Thank you for bringing this to our attention!

We have updated the revision and it can now accommodate various POMO sizes.