derenlei / KG-RuleGuider

Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning (EMNLP 2020)
MIT License

IndexError: tensors used as indices must be long, byte or bool tensors #3

Closed by davidlvxin 3 years ago

davidlvxin commented 3 years ago

When I pre-train the relation agent using top rules with the following command:

./experiment-pretrain.sh configs/<dataset>-rs.sh --train <gpu-ID> <rule-path> --model point.rs.<embedding-model>

I get the following error:

File "RuleGuider/src/rl/graph_search/pn.py", line 213, in <listcomp> 
    new_tuple = tuple([_x[:, offset, :] for _x in x])
IndexError: tensors used as indices must be long, byte or bool tensors

It seems that `action_beam_offset` in the `top_k_action` function in beam_search.py is a float tensor. This is bound to raise an error, because tensor indices cannot be floats. Could you fix this bug?

    def top_k_action(log_action_dist, action_space):
        """
        Get top k actions.
            - k = beam_size if the beam size is smaller than or equal to the beam action space size
            - k = beam_action_space_size otherwise
        :param log_action_dist: [batch_size*k, action_space_size]
        :param action_space (r_space, e_space):
            r_space: [batch_size*k, action_space_size]
            e_space: [batch_size*k, action_space_size]
        :return:
            (next_r, next_e), log_action_prob, action_offset: [batch_size*new_k]
        """
        full_size = len(log_action_dist)
        assert (full_size % batch_size == 0)
        last_k = int(full_size / batch_size)

        (r_space, e_space), _ = action_space
        action_space_size = r_space.size()[1]
        # => [batch_size, k'*action_space_size]
        log_action_dist = log_action_dist.view(batch_size, -1)
        beam_action_space_size = log_action_dist.size()[1]
        k = min(beam_size, beam_action_space_size)
        # [batch_size, k]
        log_action_prob, action_ind = torch.topk(log_action_dist, k)
        next_r = ops.batch_lookup(r_space.view(batch_size, -1), action_ind).view(-1)
        next_e = ops.batch_lookup(e_space.view(batch_size, -1), action_ind).view(-1)
        # [batch_size, k] => [batch_size*k]
        log_action_prob = log_action_prob.view(-1)
        # compute parent offset
        # [batch_size, k]
        action_beam_offset = action_ind / action_space_size
        # [batch_size, 1]
        action_batch_offset = int_var_cuda(torch.arange(batch_size) * last_k).unsqueeze(1)
        # [batch_size, k] => [batch_size*k]
        print(action_beam_offset)
        print(action_batch_offset)
        action_offset = (action_batch_offset + action_beam_offset).view(-1)
        return (next_r, next_e), log_action_prob, action_offset
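The dtype issue can be reproduced in isolation. This is a minimal sketch with toy values, independent of the repo's code: in PyTorch >= 1.5, `/` on an integer tensor performs true division and returns a float tensor, which cannot be used as an index.

```python
import torch

action_ind = torch.tensor([[0, 3, 5]])  # toy top-k indices
action_space_size = 2

bad_offset = action_ind / action_space_size    # true division -> float32 tensor
good_offset = action_ind // action_space_size  # floor division -> stays int64

values = torch.arange(10)
try:
    values[bad_offset.view(-1)]   # indexing with a float tensor
except IndexError as e:
    print("IndexError:", e)      # "tensors used as indices must be long, ..."

print(values[good_offset.view(-1)])  # tensor([0, 1, 2])
```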
davidlvxin commented 3 years ago

In newer versions of PyTorch, `/` performs true (floating-point) division even on integer tensors, so integer division must be done with `//`.

action_beam_offset = action_ind / action_space_size

in the functions `top_k_action` and `top_k_action_r` needs to be changed to

action_beam_offset = action_ind // action_space_size
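For reference, `torch.div` with `rounding_mode='floor'` (available since PyTorch 1.8) computes the same integer offset and makes the intent explicit. A small sketch with toy values:

```python
import torch

action_ind = torch.tensor([[4, 7, 9]])  # toy top-k indices
action_space_size = 3

# Two equivalent ways to compute an integer beam offset:
offset_floordiv = action_ind // action_space_size
offset_div = torch.div(action_ind, action_space_size, rounding_mode='floor')

assert offset_floordiv.dtype == torch.int64
assert torch.equal(offset_floordiv, offset_div)
print(offset_floordiv)  # tensor([[1, 2, 3]])
```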
LuMflowers commented 3 years ago

How can you find the rule path?

davidlvxin commented 3 years ago

I wrote code to generate them myself.

LuMflowers commented 3 years ago

Could you please show me the file, or the format of the rules? I have run the AnyBURL code mentioned in this paper, but the format of the rule file it produces didn't match. I wonder if I did something wrong.

WeidongLi-KG commented 6 months ago

> I write codes to generate them by myself.

Can you share the code for generating rules?