GraphSAINT / GraphSAINT

[ICLR 2020; IPDPS 2019] Fast and accurate minibatch training for deep GNNs and large graphs (GraphSAINT: Graph Sampling Based Inductive Learning Method).
https://openreview.net/forum?id=BJe8pkHFwS
MIT License

Cannot take length of Shape with unknown rank #28

Status: Open · wohlbier opened this issue 3 years ago

wohlbier commented 3 years ago

Hi, I get ValueError: Cannot take the length of Shape with unknown rank on this line https://github.com/GraphSAINT/GraphSAINT/blob/a051742ff2de4094c97eb523d7108a4fc1d22739/graphsaint/tensorflow_version/model.py#L128

I got around it by commenting out the if check and executing the body of the clause. Have you seen this before? Thanks.

tedzhouhk commented 3 years ago

Hi wohlbier, which version of TensorFlow are you using? I tried 1.13 and 1.15, and both versions seem to work. Thanks

wohlbier commented 3 years ago

Huh. I was using 1.15. Details below.

conda create -n graphsaint_env
conda activate graphsaint_env
conda install \
      cython==0.29.21 \
      pyyaml==5.3.1 \
      scikit-learn==0.23.2 \
      tensorflow==1.15.0
python graphsaint/setup.py build_ext --inplace
(graphsaint_env) [jgwohlbier@etc-gpu-09 GraphSAINT]$ python -m graphsaint.tensorflow_version.train --data_prefix /srv/scratch/ogb/datasets/nodeproppred/ogbn_products/GraphSAINT --train_config ./train_config/open_graph_benchmark/ogbn-products_3_e_gat.yml --gpu -1
WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/train.py:243: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.

Loading training data..
Done loading training data..
/srv/scratch/jgwohlbier/GraphSAINT/graphsaint/utils.py:190: RuntimeWarning: divide by zero encountered in true_divide
  norm_diag = sp.dia_matrix((1/D,0),shape=diag_shape)
WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/train.py:63: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/train.py:66: The name tf.sparse_placeholder is deprecated. Please use tf.compat.v1.sparse_placeholder instead.

WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/model.py:50: The name tf.SparseTensorValue is deprecated. Please use tf.compat.v1.SparseTensorValue instead.

WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/model.py:59: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.

WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/layers.py:207: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/inits.py:23: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

>> layer attentionaggregator_1, dim: [100,256]
>> layer attentionaggregator_2, dim: [256,256]
>> layer attentionaggregator_3, dim: [256,256]
WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/layers.py:246: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/layers.py:273: The name tf.sparse_tensor_dense_matmul is deprecated. Please use tf.sparse.sparse_dense_matmul instead.

>> layer highorderaggregator_1, dim: [256,47]
WARNING:tensorflow:From /srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/model.py:127: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See `tf.nn.softmax_cross_entropy_with_logits_v2`.


Traceback (most recent call last):
  File "/srv/scratch/packages/spack/opt/spack/linux-rhel8-skylake_avx512/gcc-8.3.1/anaconda3-2020.07-weugqkfkxd6zmn2irm7lpmujzczwebiw/envs/ogb_test_env/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/srv/scratch/packages/spack/opt/spack/linux-rhel8-skylake_avx512/gcc-8.3.1/anaconda3-2020.07-weugqkfkxd6zmn2irm7lpmujzczwebiw/envs/ogb_test_env/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/train.py", line 243, in <module>
    tf.app.run(main=train_main)
  File "/srv/scratch/packages/spack/opt/spack/linux-rhel8-skylake_avx512/gcc-8.3.1/anaconda3-2020.07-weugqkfkxd6zmn2irm7lpmujzczwebiw/envs/ogb_test_env/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/srv/scratch/packages/spack/opt/spack/linux-rhel8-skylake_avx512/gcc-8.3.1/anaconda3-2020.07-weugqkfkxd6zmn2irm7lpmujzczwebiw/envs/ogb_test_env/lib/python3.7/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/srv/scratch/packages/spack/opt/spack/linux-rhel8-skylake_avx512/gcc-8.3.1/anaconda3-2020.07-weugqkfkxd6zmn2irm7lpmujzczwebiw/envs/ogb_test_env/lib/python3.7/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/train.py", line 238, in train_main
    model,minibatch,sess,train_stat,ph_misc_stat,summary_writer = prepare(train_data,train_params,arch_gcn)
  File "/srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/train.py", line 98, in prepare
    feats, arch_gcn, train_params, adj_full_norm, logging=True)
  File "/srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/model.py", line 66, in __init__
    self.build()
  File "/srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/model.py", line 104, in build
    self._loss()
  File "/srv/scratch/jgwohlbier/GraphSAINT/graphsaint/tensorflow_version/model.py", line 128, in _loss
    if len(self.loss_terms.shape) == 1:
  File "/srv/scratch/packages/spack/opt/spack/linux-rhel8-skylake_avx512/gcc-8.3.1/anaconda3-2020.07-weugqkfkxd6zmn2irm7lpmujzczwebiw/envs/ogb_test_env/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 827, in __len__
    raise ValueError("Cannot take the length of shape with unknown rank.")
ValueError: Cannot take the length of shape with unknown rank.

tedzhouhk commented 3 years ago

Thanks for the information! It seems that with the softmax cross-entropy loss, TensorFlow cannot infer the shape of self.loss_terms. I have replaced the check with self.loss_terms.shape.ndims. It should now work for both the softmax and sigmoid losses.
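For reference, a minimal sketch of why the two checks behave differently. The TensorShape class below is a stand-in for tf.TensorShape (so the example runs without TensorFlow installed; the class and variable names are illustrative): in TF 1.x, len(shape) raises a ValueError when the rank is unknown, while shape.ndims simply returns None, so comparing ndims == 1 never raises.

```python
class TensorShape:
    """Stand-in for tf.TensorShape, to illustrate the failure mode.

    rank=None models a shape of unknown rank, as produced here by
    softmax_cross_entropy_with_logits on self.loss_terms.
    """
    def __init__(self, rank):
        self.ndims = rank  # None when the rank is unknown

    def __len__(self):
        # Mirrors tensor_shape.py: __len__ raises for unknown rank.
        if self.ndims is None:
            raise ValueError("Cannot take the length of shape with unknown rank.")
        return self.ndims


known = TensorShape(1)       # e.g. per-node sigmoid loss terms
unknown = TensorShape(None)  # e.g. softmax loss with an uninferred shape

# Old check: len(loss_terms.shape) == 1 raises for unknown rank.
try:
    len(unknown)
except ValueError as e:
    print(e)  # Cannot take the length of shape with unknown rank.

# Fixed check: ndims == 1 is safe; it is simply False for unknown rank.
print(known.ndims == 1)    # True
print(unknown.ndims == 1)  # False (None == 1)
```

The same reasoning applies to wohlbier's workaround of deleting the if check: executing the body unconditionally happens to be correct for the softmax case, but the ndims comparison keeps the branch intact for the sigmoid case as well.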