gully opened this issue 11 months ago (Open)
I attempted a Conv2d on a sparse tensor and it does not work; the dense version worked fine. Bummer! I spent a while thinking about whether we could directly build the Toeplitz/circulant matrix that performs the convolution, similar to scipy's `convolution_matrix` function. That way we could use sparse matrix multiplication. There did not appear to be an out-of-the-box function for this in PyTorch, though I found a few Stack Overflow answers and GitHub issues with working implementations.
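For reference, a minimal sketch of how such a matrix could be assembled directly as a sparse COO tensor in PyTorch. The helper name `convolution_matrix` and the `(n + k - 1, n)` shape mirror `scipy.linalg.convolution_matrix`'s "full" mode; this is an illustration, not a vetted implementation:

```python
import torch

def convolution_matrix(kernel: torch.Tensor, n: int) -> torch.Tensor:
    """Sparse (n + k - 1) x n Toeplitz matrix T such that T @ x equals the
    full 1-D convolution of `kernel` with x (scipy.linalg.convolution_matrix
    convention)."""
    k = kernel.shape[0]
    # Entry (i + j, j) holds kernel[i]: tap i applied at input position j.
    rows = torch.arange(k).repeat_interleave(n) + torch.arange(n).repeat(k)
    cols = torch.arange(n).repeat(k)
    vals = kernel.repeat_interleave(n)
    return torch.sparse_coo_tensor(
        torch.stack([rows, cols]), vals, size=(n + k - 1, n)
    ).coalesce()

kernel = torch.tensor([1.0, -2.0, 3.0])
x = torch.tensor([0.0, 1.0, 0.5, -1.0])
T = convolution_matrix(kernel, x.shape[0])
# Sparse matmul in place of a dense conv1d with "full" padding.
y = torch.sparse.mm(T, x.unsqueeze(1)).squeeze(1)
```

The 2-D (Conv2d) analogue is the same idea with doubly-block-Toeplitz structure, so the index bookkeeping gets messier but the pattern is identical.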
I then decided I may not need to go to that trouble, depending on how we handle the sparsity in the first place. TBD.
Our experience with blasé is that sparse tensors offer large speedups, so long as the pixel coordinates are fixed.
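The fixed-coordinate pattern looks roughly like this sketch (hypothetical shapes and a toy bidiagonal pattern, not blasé's actual code): the COO index tensor is built exactly once, and only the values participate in optimization.

```python
import torch

# Sketch of a fixed-sparsity design matrix: the pixel coordinates each
# entry touches are decided once, so the index tensor never changes;
# only `values` is trainable.
n_pix = 100
# Example fixed pattern: main diagonal plus first superdiagonal.
rows = torch.cat([torch.arange(n_pix), torch.arange(n_pix - 1)])
cols = torch.cat([torch.arange(n_pix), torch.arange(1, n_pix)])
indices = torch.stack([rows, cols])  # built exactly once

values = torch.rand(2 * n_pix - 1, requires_grad=True)
design = torch.sparse_coo_tensor(indices, values, (n_pix, n_pix))
model_flux = torch.sparse.mm(design, torch.ones(n_pix, 1))

# Gradients flow into `values`; the sparsity pattern itself stays fixed.
model_flux.sum().backward()
```

Because the indices are static, each optimization step only rewrites `values`, which is where the large speedups come from.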
This fixed-pixel-coordinate provision makes the inference nearly circular: we need to know approximately where the echelle order and trace are going to land before we can refine the model. That should be doable once we have a big repertoire of analyzed images, but configuring the first one may be tricky.
Anyway, before we worry about all that, we first have to design the sparsity pattern and experiment with how to write it down, etc.