Hi Bastian,

As I wrote in my email to you, I was a little confused and deterred by the documentation of the `CubicalComplex` class. I assumed that your implementation only works for 2D images and was worried that, when feeding in a 3D volume, the third dimension would simply be treated as channel information. I think the explicit example in the `forward()` method in particular threw me off there:
1. Tensor of `dim = 2`: a single 2D image
2. Tensor of `dim = 3`: a single 2D image with channels
3. Tensor of `dim = 4`: a batch of 2D images with channels
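To make sure I now understand the convention correctly: the last `dim` axes of the tensor are the image/volume axes, and any leading axes are interpreted as channels and then batch. Here is a small sketch of that rule in plain Python (the helper name `interpret_shape` is my own, not part of torch-topological):

```python
def interpret_shape(shape, dim=2):
    """Classify how a tensor of the given shape would be interpreted
    when the last `dim` axes are taken to be the image/volume axes.

    Hypothetical helper for illustration only; it just spells out the
    convention from the list above.
    """
    extra = len(shape) - dim
    if extra < 0:
        raise ValueError(f"Need at least {dim} axes, got shape {shape}")
    if extra == 0:
        return "a single image/volume"
    if extra == 1:
        return "a single image/volume with channels"
    if extra == 2:
        return "a batch of images/volumes with channels"
    return f"{extra} extra leading axes (batch/channel)"

print(interpret_shape((32, 32)))        # a single image/volume
print(interpret_shape((3, 32, 32)))     # a single image/volume with channels
print(interpret_shape((8, 3, 32, 32)))  # a batch of images/volumes with channels
```

The same rule then covers 3D volumes by shifting `dim`, e.g. `interpret_shape((3, 64, 64, 64), dim=3)` would be a single volume with channels.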
Now, after having spent a little more time reading the code and documentation, I can see that you were correctly describing the behavior of the method and that the misunderstanding was on my side. Still, I think the docs could be more inviting for higher-dimensional data.
One way I thought of to improve the docs is to add an example for higher-dimensional data, or to mention the use of volumes etc. in the docs.
Another way could be to restructure the code slightly and set the default value of `dim` not to `dim=None` but to `dim=2`, which would, in my eyes, result in the same behavior. The docs could then explain the treatment of "extra dimensions" of the data as channels/batch in a more general way, perhaps with a specific example for 2D.
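To make that suggestion concrete, here is a rough sketch of what I mean by a `dim=2` default. This is illustrative pseudocode with made-up names (`split_axes` is not the real `forward()` implementation), only meant to show that an explicit default generalizes the batch/channel convention to any dimensionality:

```python
def split_axes(shape, dim=2):
    """Sketch of the proposed behavior: the last `dim` axes of the
    input are the cubical-complex axes, and any leading axes are
    treated generically as batch/channel dimensions.

    Hypothetical illustration, not the torch-topological API.
    """
    spatial = shape[-dim:]   # axes fed to the cubical complex
    leading = shape[:-dim]   # batch/channel axes, if any
    # One complex per combination of leading indices:
    n_complexes = 1
    for s in leading:
        n_complexes *= s
    return spatial, n_complexes

# 3D volume with a batch axis: dim=3 gives one complex per batch element
print(split_axes((8, 64, 64, 64), dim=3))  # ((64, 64, 64), 8)
```

With `dim=2` as the documented default, the 2D example in the docs stays exactly as it is, while the same sentence covers volumes for `dim=3` and beyond.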
Thanks again for your help with our project and your awesome work in the TDA field!