I have a num_nodes x num_features matrix (node features) and a num_nodes x num_nodes adjacency matrix (the graph).
To train a GNN, I followed the same approach as in the example citation_gcn.py, so I used SingleLoader, which does not support minibatch training (there is no batch_size parameter) and instead trains on the full batch.
The problem is that full-batch training does not give good results on my data; minibatch training often converges to a better optimum than full batch.
I would therefore like to use batches, but as far as I understand, DisjointLoader, BatchLoader, and MixedLoader do not fit my case: I only have one graph and one feature matrix.
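One possible workaround for node-level training on a single graph is to keep the full-graph forward pass but compute the loss only on a random subset of nodes at each step. The sketch below is a minimal NumPy illustration of that idea, not Spektral API; the matrix sizes, the single linear GCN-style layer, and the `node_batches` helper are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes standing in for the single-graph setup described above.
num_nodes, num_features, num_classes = 100, 16, 4
x = rng.normal(size=(num_nodes, num_features))                  # node features
a = (rng.random((num_nodes, num_nodes)) < 0.05).astype(float)   # adjacency matrix
y = rng.integers(0, num_classes, size=num_nodes)                # node labels
w = rng.normal(size=(num_features, num_classes)) * 0.1          # one GCN-style weight


def node_batches(indices, batch_size):
    """Yield shuffled minibatches of node indices."""
    idx = rng.permutation(indices)
    for start in range(0, len(idx), batch_size):
        yield idx[start:start + batch_size]


# One "epoch": propagation still uses the full graph (A @ X @ W),
# but the loss/gradient step would only use the current batch's nodes.
for batch in node_batches(np.arange(num_nodes), batch_size=32):
    logits = a @ x @ w               # full-graph propagation
    batch_logits = logits[batch]     # restrict the loss to the batch's nodes
    batch_labels = y[batch]
    # ... compute the loss on (batch_logits, batch_labels) and update w here
```

Note that this only introduces minibatch noise into the loss; the propagation itself still touches the whole graph each step, so it does not reduce memory use the way neighbor-sampling approaches do.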
Do you know any solution for this problem?
Regards,
Raphaël