Thanks for your interest and the kind words. :)
Should I prepare some arrays that have been sampled by HEALPix? Or should I prepare some panoramic images?
Sampled by HEALPix, or by any other spherical sampling. HEALPix is currently best supported by the code, but other samplings are available and any sampling can be implemented.
I found that the images in the code seem to have only two dimensions: vertices and features. I think vertices is the index after sampling the spherical data, and the feature is the channel. Do I understand it right?
That's correct.
If so, how can I transform the (height, width, channel) image into (vertices, feature) data? Or what sampling method should I use to get the (V, F) data from spherical data in the first place?
As the sphere is represented by a graph, vertices can be arbitrarily ordered. You must order your data in a 1D array (or 2D if there are multiple features) such that their position corresponds to the position of the vertex on the sphere.
In general, you can visualize the vertex ordering of a PyGSP graph with the plot method, as mygraph.plot(indices=True). For example:
import pygsp as pg
graph = pg.graphs.Grid2d(4)
graph.plot(indices=True)
For HEALPix, we use the standard ring (nest=False) and nested (nest=True) orderings, as specified by the paper and used by the healpy package.
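For instance, here is a minimal sketch (my own illustration, not code from the repo) of how one could resample an equirectangular (height, width, channel) panorama into a (vertices, features) array in HEALPix ordering with healpy; the nearest-neighbor lookup is a simplification (bilinear interpolation would be smoother):

```python
import healpy as hp
import numpy as np

def equirect_to_healpix(img, nside, nest=False):
    """Sample an equirectangular (H, W, C) image at HEALPix pixel centers.

    Returns an array of shape (n_vertices, n_features) whose rows follow
    the HEALPix ring (nest=False) or nested (nest=True) ordering.
    """
    npix = hp.nside2npix(nside)  # n_vertices = 12 * nside**2
    theta, phi = hp.pix2ang(nside, np.arange(npix), nest=nest)  # colatitude, longitude
    h, w, _ = img.shape
    rows = np.clip((theta / np.pi * h).astype(int), 0, h - 1)
    cols = np.clip((phi / (2 * np.pi) * w).astype(int), 0, w - 1)
    return img[rows, cols]  # (n_vertices, n_features)
```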
Hi @mdeff! Thanks for sharing your awesome work here! Looking forward to more!
I was wondering how exactly deepsphere takes 2D features, as the example in this repository uses only 1D features.
Also, another question related to that of @q1anzhen: would it be possible to use a Lebedev grid, for example, and arbitrarily order the vertices of the spherical graph? More specifically, am I required to handle the order of the vectorised data sampled on a Lebedev grid (sphere) by providing specific indices, or does PyGSP handle that for me?
Thanks again! :)
Hi @xefonon! Thanks for your interest and kind words. :)
I assume that by 2D features you mean two fields/channels/features over the sphere (e.g., temperature and humidity over the Earth). Then that's in an extra dimension. The dimension of the data tensor is [number of fields/channels/features] x [number of pixels/samples over the sphere] x [batch size].
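For example (the sizes are arbitrary; nside=16 gives 12 x 16^2 = 3072 HEALPix pixels):

```python
import torch

nside = 16
n_pixels = 12 * nside**2             # 3072 HEALPix pixels
data = torch.randn(2, n_pixels, 32)  # [temp, humidity] x [pixels] x [batch of 32]
```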
It would be possible to use a Lebedev grid (or any grid, really). What you'll have to do, however, is provide a graph (in the end, a Laplacian matrix) that represents that grid. In PyGSP, we have graphs for some common spherical grids, though not the Lebedev. Two options: build a nearest-neighbor graph from the grid's vertex coordinates (an NNGraph with the PyGSP), or construct the Laplacian matrix yourself.
The order of the vertices is arbitrary. You only need to make sure that the graph and the data vector use the same order (i.e., that index j references the same vertex in both cases).
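As a sketch of the first option (the coordinates below are random stand-ins for actual Lebedev nodes):

```python
import numpy as np
import pygsp as pg

# Stand-in for Lebedev grid coordinates: random points on the unit sphere.
points = np.random.randn(200, 3)
points /= np.linalg.norm(points, axis=1, keepdims=True)

# k-nearest-neighbor graph built from the grid's vertex coordinates.
graph = pg.graphs.NNGraph(points, k=10)
graph.compute_laplacian("normalized")
laplacian = graph.L  # sparse Laplacian to feed to the convolutional layers
```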
PS: in general, prefer to open a new issue rather than commenting on an old one (unless it's really a continuation).
Out of curiosity, in which fields (for which data/problems) is that Lebedev grid common? I've never heard about it before.
Hi @mdeff,
Thanks for getting back to me!
I assume that by 2D features you mean two fields/channels/features over the sphere (e.g., temperature and humidity over the Earth). Then that's in an extra dimension. The dimension of the data tensor is [number of fields/channels/features] x [number of pixels/samples over the sphere] x [batch size].
Yes, thank you, that is what I meant. Does this mean that I can extend this to include a data tensor like a spherical measurement of temperature and humidity (e.g., [temp, humidity] x [pixels/spherical samples] x [batch size])? If so, it is not entirely clear to me, seeing as ChebConv for example takes three inputs: in_channels, out_channels, and kernel_size. Would you vectorise the signal in such a case? Or did you imply that it is not possible to extend the feature space over the sphere?
It would be possible to use a Lebedev grid (or any grid really). What you'll have to do however is to provide a graph for that grid. In PyGSP, we have graphs for some common spherical grids, though not the Lebedev. What you'll need to do is to provide a graph (in the end, a Laplacian matrix) that represents that grid.
How exactly would I provide this to the network once I have constructed the graph? I'm guessing in the form of the graph Laplacian in layers such as SphericalChebConv, but then again pooling isn't an option with this type of graph, I suppose?
PS: in general, prefer to open a new issue rather than commenting on an old one (unless it's really a continuation).
Yes, apologies for that; I figured my question was related to this thread, so I thought it would be preferable for you. :) Sorry!
Out of curiosity, in which fields (for which data/problems) is that Lebedev grid common? I've never heard about it before.
With a Lebedev grid, you can get a closed-form solution for spherical harmonic integration up to a certain order so I imagine that it can be used for a variety of spherical signals. I am involved in spherical microphone array processing and I believe that deepsphere has a lot of potential in this field.
Thanks again, I hope I haven't riddled you with my questions! (Edited as previous comment was misphrased)
The data tensor has 3 dimensions, as [temp, humidity] x [pixels/spherical samples] x [batch size]. Each dimension is as large as you want to make it (you can have 2x100x10 or 10x1000x1, it doesn't matter). ChebConv is a convolutional layer that takes three parameters:

- in_channels is the number of input channels/features, i.e., the size of the first dimension of the data tensor that this layer takes as input. In your case, in_channels=2 for the first layer.
- out_channels is the number of output channels/features, i.e., the size of the first dimension of the data tensor that this layer outputs.
- kernel_size is the size of the convolutional kernel. (In a classic 2D convolution, you might have heard of 3x3 or 5x5 kernels.)

The kernel_size determines the number of weights/parameters to be learned per filter. There are in_channels x out_channels filters to be learned, hence in_channels x out_channels x kernel_size parameters in total for a ChebConv layer. A ChebConv layer takes a data tensor of size in_channels x samples x batch_size and spits out a data tensor of size out_channels x samples x batch_size.
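To make the shapes and the parameter count concrete, here is a minimal self-contained sketch of a Chebyshev graph convolution (my own illustration of the idea, not the exact ChebConv code from this repo; the identity Laplacian is a placeholder):

```python
import torch

def cheb_conv(x, laplacian, weight):
    """Chebyshev graph convolution (sketch).

    x:         [in_channels, n_vertices, batch_size]
    laplacian: [n_vertices, n_vertices] rescaled graph Laplacian (dense here)
    weight:    [kernel_size, in_channels, out_channels]
    """
    kernel_size = weight.shape[0]
    xt = [x]  # T_0(L) x = x
    if kernel_size > 1:
        xt.append(torch.einsum("vw,iwb->ivb", laplacian, x))  # T_1(L) x = L x
    for _ in range(2, kernel_size):
        # Recurrence: T_k(L) x = 2 L T_{k-1}(L) x - T_{k-2}(L) x
        xt.append(2 * torch.einsum("vw,iwb->ivb", laplacian, xt[-1]) - xt[-2])
    stacked = torch.stack(xt)  # [kernel_size, in_channels, n_vertices, batch]
    # One filter per (input, output) channel pair, kernel_size weights each.
    return torch.einsum("kivb,kio->ovb", stacked, weight)

n_vertices, batch = 192, 8
x = torch.randn(2, n_vertices, batch)  # in_channels=2: [temp, humidity]
laplacian = torch.eye(n_vertices)      # placeholder Laplacian
weight = torch.randn(3, 2, 16)         # 2 x 16 x 3 = 96 parameters
y = cheb_conv(x, laplacian, weight)
print(y.shape)                         # torch.Size([16, 192, 8])
```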
How exactly would I provide this to the network once I have constructed the graph? I'm guessing in the form of the graph Laplacian in layers such as SphericalChebConv, but then again pooling isn't an option with this type of graph I suppose?
Indeed, you'd provide one graph Laplacian (a sparse matrix) per layer.
Pooling is a different operation than convolution. While convolutional layers operate over the channel dimension of your data tensor, pooling layers operate over the samples dimension. You can imagine it as a remapping/interpolation from a fine to a coarse sampling of the sphere. Some samplings are constructed as subdivisions of a platonic solid, which makes pooling easy (as is the case for 2D grids for image data). In general (to pool/remap from any grid to any grid), you can precompute an interpolation matrix then apply it to your data tensor to pool (a linear operation over the samples dimension). You can look at the code in deepsphere-weather.
I am involved in spherical microphone array processing and I believe that deepsphere has a lot of potential in this field.
That's interesting! Hope it'll be useful to you. :) I know DeepSphere was used in this paper for "real-time reconstruction of acoustic camera spherical maps".
@mdeff That clarifies a lot thank you! Thank you for the references as well!
You're welcome @xefonon! Keep us posted. :)
Below is a diagram I'm preparing for the deepsphere-weather paper. Hope it helps in understanding the information flow further.
That's interesting! Hope it'll be useful to you. :) I know DeepSphere was used in this paper for "real-time reconstruction of acoustic camera spherical maps".
I implemented DeepWave from this paper in PyTorch by making use of the spherical convolution layers from this repo. Thanks for the great work in the DeepSphere repo. The PyTorch DeepWave implementation can be found here: https://github.com/adrianSRoman/DeepWaveTorch
A few weeks ago, I implemented it and did some training with a small amount of data using the DeepSphere package (based on TensorFlow 1.x). But at that time I found that my data was too big to be read into memory. So I want to try this version implemented in PyTorch.
I understand DataLoader and other methods in PyTorch. But I am not sure what kind of data I should prepare. Should I prepare some arrays that have been sampled by HEALPix? Or should I prepare some panoramic images?
I found that the images in the code seem to have only two dimensions: vertices and features. I think vertices is the index after sampling the spherical data, and the feature is the channel. Do I understand it right?
If so, how can I transform the (height, width, channel) image into (vertices, feature) data? Or what sampling method should I use to get the (V, F) data from spherical data in the first place?
In my network environment, the sample data is too big to download, so it's a bit difficult for me to get clues from the data directly. Sorry, I am still quite confused. Could you give me some hints? Thanks again!