snap-stanford / ogb

Benchmark datasets, data loaders, and evaluators for graph machine learning
https://ogb.stanford.edu
MIT License

DGL Loader does not work with TF backend #302

Closed: kaansancak closed this issue 2 years ago

kaansancak commented 2 years ago

I am not sure whether it is intentional, but I noticed that OGB's DGL loader only works with DGL's PyTorch backend, even though DGL also offers TensorFlow and MXNet backends. In my case, with the TF backend, the following line throws an error before I can get the data splits: dataset = DglNodePropPredDataset(name = "ogbn-products").
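
For reference, a minimal reproduction (a sketch; note that DGLBACKEND has to be set before dgl is first imported):

import os
os.environ["DGLBACKEND"] = "tensorflow"  # must be set before dgl is imported

from ogb.nodeproppred import DglNodePropPredDataset

# Works under the default pytorch backend, fails under tensorflow
dataset = DglNodePropPredDataset(name="ogbn-products")
split_idx = dataset.get_idx_split()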

weihua916 commented 2 years ago

Hi! What is the exact error?

kaansancak commented 2 years ago

It seems like it is trying to use the torch backend even though the DGL backend is set to tensorflow.

AttributeError                            Traceback (most recent call last)
/var/folders/31/qs4f98md6nb969dgsnz8bhtm0000gn/T/ipykernel_54417/3979238778.py in <module>
      1 # Loading with pytorch backend as tensorflow backend does not work with ogbn datasets
      2 # !export DGLBACKEND=pytorch
----> 3 dataset = DglNodePropPredDataset(name = "ogbn-products")

~/site-packages/ogb/nodeproppred/dataset_dgl.py in __init__(self, name, root, meta_dict)
     67         super(DglNodePropPredDataset, self).__init__()
     68 
---> 69         self.pre_process()
     70 
     71     def pre_process(self):

~/site-packages/ogb/nodeproppred/dataset_dgl.py in pre_process(self)
    153 
    154             else:
--> 155                 graph = read_graph_dgl(raw_dir, add_inverse_edge = add_inverse_edge, additional_node_files = additional_node_files, additional_edge_files = additional_edge_files, binary=self.binary)[0]
    156 
    157                 ### adding prediction target

~/site-packages/ogb/io/read_graph_dgl.py in read_graph_dgl(raw_dir, add_inverse_edge, additional_node_files, additional_edge_files, binary)
     27 
     28         if graph['node_feat'] is not None:
---> 29             g.ndata['feat'] = torch.from_numpy(graph['node_feat'])
     30 
     31         for key in additional_node_files:

~/site-packages/dgl/view.py in __setitem__(self, key, val)
     79                 'The HeteroNodeDataView has only one node type. ' \
     80                 'please pass a tensor directly'
---> 81             self._graph._set_n_repr(self._ntid, self._nodes, {key : val})
     82 
     83     def __delitem__(self, key):

~/site-packages/dgl/heterograph.py in _set_n_repr(self, ntid, u, data)
   3992                 raise DGLError('Expect number of features to match number of nodes (len(u)).'
   3993                                ' Got %d and %d instead.' % (nfeats, num_nodes))
-> 3994             if F.context(val) != self.device:
   3995                 raise DGLError('Cannot assign node feature "{}" on device {} to a graph on'
   3996                                ' device {}. Call DGLGraph.to() to copy the graph to the'

~/site-packages/dgl/backend/tensorflow/tensor.py in context(input)
    124 
    125 def context(input):
--> 126     spec = tf.DeviceSpec.from_string(input.device)
    127     return "/{}:{}".format(spec.device_type.lower(), spec.device_index)
    128 

~/site-packages/tensorflow/python/framework/device_spec.py in from_string(cls, spec)
    160       A DeviceSpec.
    161     """
--> 162     return cls(*cls._string_to_components(spec))
    163 
    164   def parse_from_string(self, spec):

~/site-packages/tensorflow/python/framework/device_spec.py in _string_to_components(spec)
    337 
    338     spec = spec or ""
--> 339     splits = [x.split(":") for x in spec.split("/")]
    340     valid_device_types = DeviceSpecV2._get_valid_device_types()
    341     for y in splits:

AttributeError: 'torch.device' object has no attribute 'split'
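
In other words, the traceback boils down to the following mismatch (a minimal illustration, not OGB code): read_graph_dgl.py converts node features with torch.from_numpy, and DGL's tensorflow backend then tries to parse the resulting tensor's torch.device as a device string.

import numpy as np
import tensorflow as tf
import torch

t = torch.from_numpy(np.zeros(1))  # what read_graph_dgl.py produces
print(type(t.device))              # <class 'torch.device'>, not a string

# DGL's tensorflow backend expects a device string such as "/cpu:0",
# so handing it a torch.device raises the AttributeError above
tf.DeviceSpec.from_string(t.device)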

weihua916 commented 2 years ago

That's right. We do not have TensorFlow support for the DGL loader. You need to use the library-agnostic dataset object.
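
For example (a sketch assuming the TF backend; the library-agnostic loader returns plain numpy arrays, from which you can build the DGL graph yourself):

import os
os.environ["DGLBACKEND"] = "tensorflow"  # before importing dgl

import tensorflow as tf
import dgl
from ogb.nodeproppred import NodePropPredDataset

# Library-agnostic loader: no torch involved, everything is numpy
dataset = NodePropPredDataset(name="ogbn-products")
split_idx = dataset.get_idx_split()  # {"train": ..., "valid": ..., "test": ...}
graph, labels = dataset[0]           # graph is a dict of numpy arrays

# Build the DGL graph by hand with tensorflow tensors
src, dst = graph["edge_index"]
g = dgl.graph((tf.constant(src), tf.constant(dst)), num_nodes=graph["num_nodes"])
g.ndata["feat"] = tf.convert_to_tensor(graph["node_feat"])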

kaansancak commented 2 years ago

That is what I am currently doing. I just wanted to make sure that it is intentional and not a bug; if so, we can close the issue. Thanks!