If we increase the temporal lookback to >1 month, how might this look?
One possibility is outputting a 3D adjacency matrix (lat, lon, time) which learns temporal as well as spatial connections. This would be super interesting in terms of interpretability as well.
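To make the idea concrete, here is a minimal sketch of what a learnable spatio-temporal adjacency could look like. All names here are hypothetical (not from the existing codebase); it just illustrates nodes indexed by (lat, lon, t), so learned edges can encode temporal as well as spatial links:

```python
import torch
import torch.nn as nn

class SpatioTemporalAdjacencyLearner(nn.Module):
    """Hypothetical sketch: learn a soft adjacency over nodes indexed by
    (lat, lon, t), so edges can connect across time as well as space."""

    def __init__(self, n_lat: int, n_lon: int, n_timesteps: int, embed_dim: int = 16):
        super().__init__()
        n_nodes = n_lat * n_lon * n_timesteps
        # One learnable embedding per (lat, lon, t) node.
        self.node_embed = nn.Parameter(torch.randn(n_nodes, embed_dim))

    def forward(self) -> torch.Tensor:
        # Inner-product similarity -> row-normalized soft adjacency.
        logits = self.node_embed @ self.node_embed.T
        return torch.softmax(logits, dim=-1)

learner = SpatioTemporalAdjacencyLearner(n_lat=4, n_lon=8, n_timesteps=2)
adj = learner()
print(adj.shape)  # (64, 64): one row/column per (lat, lon, t) node
```

Inspecting which (space, t) → (space, t') edges get high weight is where the interpretability payoff would come from.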
This PR consists of updating the dataset so that 2 timesteps are output in the X component, and updating the adjacency matrix to reflect temporal as well as spatial connections (i.e. including t alongside lat and lon in the adjacency learner's static inputs).
Ideally, don't hardcode 2 timesteps - make it easy to change.
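A rough sketch of the dataset change, with the lookback as a parameter rather than hardcoded (function and variable names here are hypothetical, not the actual dataset code):

```python
import numpy as np

def make_windows(data: np.ndarray, n_timesteps: int = 2):
    """Hypothetical sketch: slice a (time, lat, lon) array into X windows of
    n_timesteps consecutive steps, with y as the step that follows each window.
    n_timesteps is a parameter so the lookback is easy to change later."""
    X = np.stack([data[i : i + n_timesteps] for i in range(len(data) - n_timesteps)])
    y = data[n_timesteps:]
    return X, y

data = np.arange(10 * 3 * 4, dtype=float).reshape(10, 3, 4)  # (time, lat, lon)
X, y = make_windows(data, n_timesteps=2)
print(X.shape, y.shape)  # (8, 2, 3, 4) (8, 3, 4)
```

Increasing the lookback beyond 2 is then just `make_windows(data, n_timesteps=k)`, with no other code changes.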
2 experiments:
[ ] Without explicitly passing the time dimension (as in graphino)
[ ] Explicitly passing the time dimension (either using a temporal-GCN or as described above)