StellarGraph - Machine Learning on Graphs
https://stellargraph.readthedocs.io/
Apache License 2.0

Spatio-temporal forecast using GCN and LSTM #1852

Open Chethan-Babu-stack opened 3 years ago

Chethan-Babu-stack commented 3 years ago

In the code, I want to keep the same adjacency matrix for the graph and change the speed dataset to contain [speed, covid_cases_in_that_place_at_that_time].

For instance, to forecast traffic flow using both speed and covid cases.

Please suggest how I could do this.

PS: I was thinking of encoding the two values into one and then using the same code. I'm not sure exactly how to do the encoding, maybe by applying some weights to both.

huonw commented 3 years ago

Hi, there's some support for multi-variate time series forecasting in GCN-LSTM, implemented in https://github.com/stellargraph/stellargraph/pull/1580. Unfortunately it's still very early days for this support.

I think you may be able to get something to work as follows (following your example):

  1. load the data into a StellarGraph as a tensor of shape [number of nodes, number of time steps, number of observations per node per time step (2 in this case)]. You can see one way to do this at https://stellargraph.readthedocs.io/en/stable/demos/basics/loading-numpy.html#Homogeneous-graph-with-non-numeric-IDs-and-feature-tensors
  2. create a model that outputs a prediction of speed and covid cases given a time series containing speed and covid cases, using GCN_LSTM and SlidingFeaturesNodeGenerator
  3. define your loss function appropriately (for instance, maybe you only care about the predictions of speed, not of covid cases, and so use a loss (e.g. MAE) that only looks at the x[..., 0] data and ignores the x[..., 1] data); there's a rough sketch of this just below this list
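
A rough sketch of steps 1 and 3 follows; the node IDs, array sizes and random placeholder data are illustrative assumptions, with column 0 holding speed and column 1 covid cases:

import numpy as np
import tensorflow as tf
import stellargraph as sg

# 1. one feature tensor of shape [nodes, time steps, variates]; here 2 variates: speed and covid cases
num_nodes, num_timesteps = 10, 699                        # placeholder sizes
features = np.random.rand(num_nodes, num_timesteps, 2)    # replace with your real data

node_ids = [f"node_{i}" for i in range(num_nodes)]        # placeholder node IDs
node_features = sg.IndexedArray(features, index=node_ids)
graph = sg.StellarGraph(node_features, edges=...)         # edges come from your adjacency matrix

# 3. a loss that only scores the speed predictions (variate 0) and ignores covid cases (variate 1)
def speed_only_mae(y_true, y_pred):
    return tf.keras.losses.mean_absolute_error(y_true[..., 0], y_pred[..., 0])

The step 2 model itself would be built from GCN_LSTM with a SlidingFeaturesNodeGenerator, as in the code further down this thread.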

You may've already seen the example doing univariate (that is, one observation per node per time step) prediction at https://stellargraph.readthedocs.io/en/stable/demos/time-series/gcn-lstm-time-series.html, which might be a good place to start (although it unfortunately doesn't use SlidingFeaturesNodeGenerator yet).

Does that help get you started?

Chethan-Babu-stack commented 3 years ago

Hi, Thanks a lot for your response.

I tried it as per your suggestion, but I get the error below when I try to train the model.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

UnimplementedError: Cast string to float is not supported [[node model/Cast (defined at :8) ]] [Op:__inference_train_function_6028]

Do you have any suggestions on how I can overcome this?

Thanks again :)

huonw commented 3 years ago

I'm glad you're able to make a little bit of progress! Unfortunately that's not nearly enough information for me to help; people like me can help you better if your questions/bug reports include more detail.

From the error message it sounds like you have string values somewhere numbers are expected, so the first thing I would do is print out information about types in various numpy arrays and/or TensorFlow tensors (for example, print(some_tensor.dtype)) and StellarGraph graphs (for example, print(some_stellargraph.info())).
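
For instance, a quick check (the variable names here are placeholders for your own objects):

print(some_numpy_array.dtype)    # should be a float dtype, not object or a string type
print(some_tensor.dtype)         # e.g. tf.float32
print(some_stellargraph.info())  # summarises the graph, including the node feature sizes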

Chethan-Babu-stack commented 3 years ago

Hi, I really appreciate your help.

I have attached a Google Colab notebook with the code changes I have made so far.

Also attached are two datasets that have to be loaded in the 7th step of the notebook: one CSV file is the adjacency matrix and the other is the feature matrix of the time series data.

As you suggested, I have created numpy arrays in the 16th step (method: sequence_data_preparation) of the notebook. The issue appears in model.fit.

I hope this explains it in detail.

Thanks once again.

Attachment: Multivariate time series TGCN.zip

huonw commented 3 years ago

Great. Unfortunately I'm busy today and so won't be able to get to it until Monday next week (I've set a reminder). Please leave an update if you debug anything more in the meantime.

huonw commented 3 years ago

Hi @Chethan-Babu-stack, unfortunately I couldn't reproduce the issue you described above. I tried running it per https://colab.research.google.com/drive/10IlIlDVJxUARoWBh4ZfFO-Ph1AAcdotr?usp=sharing (I inlined the datasets so that the notebook is standalone), and saw:

WARNING:tensorflow:Model was constructed with shape (None, 10, 10) for input KerasTensor(type_spec=TensorSpec(shape=(None, 10, 10), dtype=tf.float32, name='input_9'), name='input_9', description="created by layer 'input_9'"), but it was called on an input with incompatible shape (None, 10, 10, 3).

...

ValueError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
        return step_function(self, iterator)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step  **
        outputs = model.train_step(data)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:754 train_step
        y_pred = self(x, training=True)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:1012 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:425 call
        inputs, training=training, mask=mask)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:560 _run_internal_graph
        outputs = node.layer(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:1012 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py:557 call
        result.set_shape(self.compute_output_shape(inputs.shape))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py:548 compute_output_shape
        self.target_shape)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py:536 _fix_unknown_dimension
        raise ValueError(msg)

    ValueError: total size of new array must be unchanged, input_shape = [10, 10, 3, 1], output_shape = [10, 10]

Please let me know if this is the error you're struggling with, or provide an updated link that reproduces the problem (see https://research.google.com/colaboratory/faq.html "Where are my notebooks stored, and can I share them?" for details).

Chethan-Babu-stack commented 3 years ago

Hi @huonw, yes, this is the point where I'm stuck now (the ValueError: total size of new array must be unchanged, input_shape = [10, 10, 3, 1], output_shape = [10, 10]). It is fine to share it :) Thanks, Chethan

huonw commented 3 years ago

Ah, I really would've appreciated it if you'd noted that the error was different from the one in https://github.com/stellargraph/stellargraph/issues/1852#issuecomment-776763770, because it means I can help you better.

As I hinted above, the only way to do multi-variate GCN-LSTM at the moment is using SlidingFeaturesNodeGenerator. You'll need to turn the speed_data data frame into an IndexedArray (https://stellargraph.readthedocs.io/en/stable/api.html#stellargraph.IndexedArray) of shape 10 (for the nodes) × 699 (for the time stamps) × 3 (for the observations), load this into a StellarGraph, and use a SlidingFeaturesNodeGenerator.

For instance:

import json

import numpy as np
import stellargraph as sg
from stellargraph import StellarGraph
from stellargraph.mapper import SlidingFeaturesNodeGenerator
from stellargraph.layer import GCN_LSTM

# convert the nested strings into one big numpy array of floats
speed_array = np.array(
    [[json.loads(s) for s in row] for row in speed_data.to_numpy()], dtype=float
)
print(speed_array.shape)  # (10, 699, 3)

# nodes × time steps × observations, indexed by the node IDs
node_features = sg.IndexedArray(speed_array, index=speed_data.index)

train_size = ...  # number of samples to use for training

graph = StellarGraph(node_features, ...)

generator = SlidingFeaturesNodeGenerator(graph, 10, batch_size=60)
train_gen = generator.flow(slice(0, train_size), target_distance=1)
test_gen = generator.flow(slice(train_size, None), target_distance=1)

gcn_lstm = GCN_LSTM(None, None, ..., generator=generator)

...

model.fit(train_gen, validation_data=test_gen)

...

Unfortunately there's no example of this, but you can see how the tests do it at https://github.com/stellargraph/stellargraph/blob/1e6120fcdbbedd3eb58e8fecc0eabc6999101ee6/tests/layer/test_gcn_lstm.py#L192-L217


Another option would be to add a variates argument to the GCN_LSTM constructor, to allow the manual version to do multi-variate prediction too.

https://github.com/stellargraph/stellargraph/blob/1e6120fcdbbedd3eb58e8fecc0eabc6999101ee6/stellargraph/layer/gcn_lstm.py#L246-L279

I will merge a pull request that:

  • adds a variates=None argument
  • removes the else branch
  • updates the documentation
  • (optionally) adds a test

Does that clarify things?

evancollins1 commented 2 years ago

Hi @huonw, referencing your above comments, I have created a two-output model from data with dimensions 18 nodes by 216000 time stamps by 2 output observations (1 continuous, 1 categorical).

The resulting model is as follows:

Screen Shot 2021-12-21 at 11 32 39 AM

The last reshape layer has dimensions (None, 18, 2), with the 2 reflecting the continuous ([..., 0]) and categorical ([..., 1]) outputs.

I would like to define different loss functions (i.e., mae and binary_crossentropy) for these two outputs, but I could not find a way to address each of these two slices individually when writing the loss functions in model.compile.

Any suggestions?
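
For reference, a minimal sketch of one way to do this with a single custom loss, assuming the (None, 18, 2) output shape above and that the categorical slice is already a probability (e.g. from a sigmoid):

import tensorflow as tf

def combined_loss(y_true, y_pred):
    # mean absolute error on the continuous variate (slice 0)
    mae = tf.keras.losses.mean_absolute_error(y_true[..., 0], y_pred[..., 0])
    # binary cross-entropy on the categorical variate (slice 1)
    bce = tf.keras.losses.binary_crossentropy(y_true[..., 1], y_pred[..., 1])
    return mae + bce  # weight the two terms as suits your problem

model.compile(optimizer="adam", loss=combined_loss)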

mdislam11 commented 1 year ago

(quoting @huonw's comment above in full)

While trying these instructions, I came across the error below. Any suggestions?

InvalidArgumentError                      Traceback (most recent call last)
Cell In[370], line 1
----> 1 model.fit(train_gen, validation_data=test_gen)

File ~\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     67 filtered_tb = _process_traceback_frames(e.__traceback__)
     68 # To get the full stack trace, call:
     69 # tf.debugging.disable_traceback_filtering()
---> 70 raise e.with_traceback(filtered_tb) from None
     71 finally:
     72     del filtered_tb

File ~\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\tensorflow\python\eager\execute.py:52, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     50 try:
     51     ctx.ensure_initialized()
---> 52     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     53                                         inputs, attrs, num_outputs)
     54 except core._NotOkStatusException as e:
     55     if name is not None:

InvalidArgumentError: Graph execution error:

Detected at node 'mean_absolute_error/remove_squeezable_dimensions/Squeeze' defined at (most recent call last):
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\ipykernel_launcher.py", line 17, in <module>
    app.launch_new_instance()
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\traitlets\config\application.py", line 992, in launch_instance
    app.start()
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\ipykernel\kernelapp.py", line 711, in start
    self.io_loop.start()
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\tornado\platform\asyncio.py", line 215, in start
    self.asyncio_loop.run_forever()
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\asyncio\base_events.py", line 570, in run_forever
    self._run_once()
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\asyncio\base_events.py", line 1859, in _run_once
    handle._run()
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\asyncio\events.py", line 81, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\ipykernel\kernelbase.py", line 510, in dispatch_queue
    await self.process_one()
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\ipykernel\kernelbase.py", line 499, in process_one
    await dispatch(*args)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\ipykernel\kernelbase.py", line 406, in dispatch_shell
    await result
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\ipykernel\kernelbase.py", line 729, in execute_request
    reply_content = await reply_content
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\ipykernel\ipkernel.py", line 411, in do_execute
    res = shell.run_cell(
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\ipykernel\zmqshell.py", line 531, in run_cell
    return super().run_cell(*args, **kwargs)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\IPython\core\interactiveshell.py", line 2945, in run_cell
    result = self._run_cell(
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\IPython\core\interactiveshell.py", line 3000, in _run_cell
    return runner(coro)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\IPython\core\async_helpers.py", line 129, in _pseudo_sync_runner
    coro.send(None)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\IPython\core\interactiveshell.py", line 3203, in run_cell_async
    has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\IPython\core\interactiveshell.py", line 3382, in run_ast_nodes
    if await self.run_code(code, result, async_=asy):
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\IPython\core\interactiveshell.py", line 3442, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "C:\Users\islam70\AppData\Local\Temp\ipykernel_26312\2390136114.py", line 1, in <module>
    model.fit(train_gen, validation_data=test_gen)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler
    return fn(*args, **kwargs)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\engine\training.py", line 1650, in fit
    tmp_logs = self.train_function(iterator)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\engine\training.py", line 1249, in train_function
    return step_function(self, iterator)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\engine\training.py", line 1233, in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\engine\training.py", line 1222, in run_step
    outputs = model.train_step(data)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\engine\training.py", line 1024, in train_step
    loss = self.compute_loss(x, y, y_pred, sample_weight)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\engine\training.py", line 1082, in compute_loss
    return self.compiled_loss(
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\engine\compile_utils.py", line 265, in __call__
    loss_value = loss_obj(y_t, y_p, sample_weight=sw)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\losses.py", line 152, in __call__
    losses = call_fn(y_true, y_pred)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\losses.py", line 277, in call
    y_pred, y_true = losses_utils.squeeze_or_expand_dimensions(
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\utils\losses_utils.py", line 200, in squeeze_or_expand_dimensions
    y_true, y_pred = remove_squeezable_dimensions(y_true, y_pred)
  File "C:\Users\islam70\Anaconda3\envs\tensorflow2_p3_8\lib\site-packages\keras\utils\losses_utils.py", line 139, in remove_squeezable_dimensions
    labels = tf.squeeze(labels, [-1])

Node: 'mean_absolute_error/remove_squeezable_dimensions/Squeeze'
Can not squeeze dim[2], expected a dimension of 1, got 2
     [[{{node mean_absolute_error/remove_squeezable_dimensions/Squeeze}}]] [Op:__inference_train_function_154301]

aanxud888 commented 7 months ago

I have come up with a solution for the multivariate-input case.