Closed. exitclear closed this issue 1 year ago.
The dimension of the output is always fixed to link_state_dim. However, different topologies can have a different number of links, so "None" indicates that this dimension is variable. If you are sure that you will always have 20 links, you should be able to fix the value to 20.
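To illustrate the point above, here is a minimal NumPy sketch (the function name `link_states` is hypothetical, not from the repository): the per-link feature size is fixed, but the number of rows depends on how many links the topology has, which is exactly the axis TensorFlow reports as None.

```python
import numpy as np

link_state_dim = 32  # fixed feature size per link (illustrative value)

def link_states(num_links: int) -> np.ndarray:
    # One state vector per link; the first axis varies with the topology,
    # so a static model signature must declare it as None.
    return np.zeros((num_links, link_state_dim))

print(link_states(20).shape)  # (20, 32) for a 20-link topology
print(link_states(35).shape)  # (35, 32) for a larger topology
```

If every topology really has 20 links, the first axis can be declared as 20 instead of None without changing the computation.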
Thanks for your help. May I ask why the readout layer's output has shape=(None, 1)? Can I adjust the model structure so that the final q-value dimension is fixed?
The shape 1 indicates that the readout outputs a single value (i.e., the q-value), which is why it is set to 1. To avoid executing the GNN multiple times, once per K-path, we create a hypergraph that contains K graphs with no connections between them. The GNN then outputs one q-value per graph, and we choose the suitable one, which represents the action to perform. That is why you see the shape (None, 1): None indicates that there are multiple q-values, one per graph in the hypergraph.
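A small sketch of the selection step described above (the helper `select_action` is hypothetical, not code from the repository): the readout produces a (K, 1) tensor of q-values, one per disconnected subgraph in the hypergraph batch, and the agent picks the path with the highest q-value.

```python
import numpy as np

def select_action(q_values: np.ndarray) -> int:
    # q_values has shape (K, 1): one q-value per disconnected subgraph
    # (i.e., per candidate K-path) in the hypergraph batch.
    return int(np.argmax(q_values[:, 0]))

# Hypothetical readout output for K = 4 candidate paths.
q = np.array([[0.1], [0.7], [0.3], [0.2]])
print(select_action(q))  # 1: the subgraph with the highest q-value
```

The first axis is None in the model signature because K (the number of candidate paths batched into the hypergraph) can change between decisions.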
Hello, thank you for your work.
Hello, I want to ask why the tensor output in line 59 of mnpp.py has shape (None, 20). Can I change this dimension to a fixed value?