Closed Xiaodi-Li closed 5 years ago
This is due to an incompatibility between Python 2 and Python 3, and between PyTorch 0.4+ and PyTorch 0.3.1. I've fixed this issue; please pull the latest version: 23b252231f98b674088c8c345764375ac9da4f0a
Hi, I've downloaded the latest version but still get the same error. Can you check that? Thanks.
Sorry, I was only testing the mean_field mode. Please check the latest version and see whether it works now.
It works now, thanks a lot.
Hi, I am still facing the same issue as I try to port it to Google Colab. It happens in both modes for graph_classification. Colab runs Python 3.6.9 and PyTorch 1.7.0; I have also tried PyTorch 1.7.1.
If you need it, I can provide the notebook. This is the output with the stack trace:
loading data
# classes: 2
# node features: 37
# train: 3699, # test: 411
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-15-b7028a3fa018> in <module>()
6 print('# train: %d, # test: %d' % (len(train_graphs), len(test_graphs)))
7
----> 8 classifier = Classifier()
9 if mode == 'gpu':
10 classifier = classifier.cuda()
4 frames
<ipython-input-13-718dbb409eca> in __init__(self)
17 num_node_feats=feat_dim,
18 num_edge_feats=0,
---> 19 max_lv=max_lv)
20 local_out_dim = output_dim
21 if local_out_dim == 0:
/content/pytorch_structure2vec/s2v_lib/embedding.py in __init__(self, latent_dim, output_dim, num_node_feats, num_edge_feats, max_lv)
26 self.max_lv = max_lv
27
---> 28 self.w_n2l = nn.Linear(num_node_feats, latent_dim)
29 if num_edge_feats > 0:
30 self.w_e2l = nn.Linear(num_edge_feats, latent_dim)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py in __init__(self, in_features, out_features, bias)
81 else:
82 self.register_parameter('bias', None)
---> 83 self.reset_parameters()
84
85 def reset_parameters(self) -> None:
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py in reset_parameters(self)
84
85 def reset_parameters(self) -> None:
---> 86 init.kaiming_uniform_(self.weight, a=math.sqrt(5))
87 if self.bias is not None:
88 fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
/usr/local/lib/python3.6/dist-packages/torch/nn/init.py in kaiming_uniform_(tensor, a, mode, nonlinearity)
379 fan = _calculate_correct_fan(tensor, mode)
380 gain = calculate_gain(nonlinearity, a)
--> 381 std = gain / math.sqrt(fan)
382 bound = math.sqrt(3.0) * std # Calculate uniform bounds from standard deviation
383 with torch.no_grad():
ZeroDivisionError: float division by zero
Thank you
Hi,
The code was developed for Python 2, and Python 3 has different division behaviors.
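For reference, a minimal standalone sketch of the division difference (not code from this repo): Python 2's `/` truncates when both operands are integers, while Python 3's `/` always returns a float, so ported code that computes sizes or indices with `/` can silently change meaning.

```python
# Python 3: `/` is true division, `//` is the Python 2-style floor division.
numerator, denominator = 7, 2

py3_true_div = numerator / denominator   # 3.5 on Python 3 (would be 3 on Python 2)
py2_style_div = numerator // denominator # 3 -- reproduces the Python 2 behaviour

print(py3_true_div, py2_style_div)  # → 3.5 3
```

Replacing `/` with `//` wherever an integer result is expected is the usual porting fix.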
Ok, thanks.
Btw, the code also works fine on Google Colab when run as a script, but once I rearrange it into a notebook it crashes. No worries, I will sort it out. Thanks again :)
====== begin of s2v configuration ======
| msg_average = 0
====== end of s2v configuration ======
Namespace(batch_size=50, data='MUTAG', feat_dim=0, fold=1, gm='loopy_bp', hidden=100, latent_dim=64, learning_rate=0.0001, max_lv=4, mode='cpu', num_class=0, num_epochs=1000, out_dim=1024, seed=1)
loading data
# classes: 2
# node features: 7
# train: 170, # test: 18
Traceback (most recent call last):
  File "main.py", line 103, in <module>
    classifier = Classifier()
  File "main.py", line 34, in __init__
    max_lv=cmd_args.max_lv)
  File "/home/ase/Graph/pytorch_structure2vec/graph_classification/../s2v_lib/embedding.py", line 89, in __init__
    self.w_e2l = nn.Linear(num_edge_feats, latent_dim)
  File "/home/ase/venv_python3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 56, in __init__
    self.reset_parameters()
  File "/home/ase/venv_python3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 59, in reset_parameters
    init.kaiming_uniform_(self.weight, a=math.sqrt(5))
  File "/home/ase/venv_python3/lib/python3.6/site-packages/torch/nn/init.py", line 290, in kaiming_uniform_
    std = gain / math.sqrt(fan)
ZeroDivisionError: float division by zero
When I run main.py in loopy_bp mode on any dataset, it throws an error like this. Can you tell me how to fix it? Thanks a lot.
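For context on what the traceback shows: `nn.Linear(in_features, out_features)` initializes its weight with Kaiming-uniform, whose bound is `gain / sqrt(fan_in)` with `fan_in = in_features`. If a feature count such as `num_edge_feats` ends up 0, that division is by zero. A minimal sketch of the failing computation and the guard (illustrative names, not the repo's code):

```python
import math

def kaiming_uniform_bound(fan_in, gain=1.0):
    """Mimic the bound computation inside torch.nn.init.kaiming_uniform_.

    The real init computes std = gain / sqrt(fan_in); with fan_in == 0
    (an nn.Linear built with in_features=0) that is a division by zero.
    """
    if fan_in == 0:
        # Guard: surface the zero dimension instead of a ZeroDivisionError.
        raise ValueError("fan_in is 0: some layer dimension is zero")
    std = gain / math.sqrt(fan_in)
    return math.sqrt(3.0) * std

# A positive feature count (e.g. the 7 node features in the log above) is fine:
print(kaiming_uniform_bound(7))

# The model-side fix follows the same idea: only build the edge layer
# when there are edge features, e.g.
#   if num_edge_feats > 0:
#       self.w_e2l = nn.Linear(num_edge_feats, latent_dim)
```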