mikeheddes opened 1 year ago
Hi, I'm new to Torchhd but would be really interested in contributing. Is this problem still open? As I understand it, it's about implementing a data structure that holds vectors as sparse arrays and then implementing the MAP operations for that sparse data structure, @mikeheddes?
Hi, thank you for your interest in Torchhd. Yes, this feature has yet to be added to the library and, as far as I'm aware, no one is working on it yet.
One aspect that is a bit unique about this model is that we have to decide whether to use torch.sparse tensors (which I suspect will be more efficient when working with small bundles of vectors) or dense tensors (which could be more efficient when working with large bundles of vectors). The main reason is that I'm not sure how to implement sparse circular convolution efficiently, i.e., in O(k log n) instead of O(k^2), for two sparse vectors of length n with k non-zero elements each. With dense vectors we can use the fast Fourier transform to compute the circular convolution in O(n log n), so whenever n log n < k^2 the dense model should be more efficient.
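For reference, here is a minimal sketch (not the Torchhd API; the function names are illustrative) of the two binding strategies being compared: FFT-based circular convolution on dense tensors, versus a naive O(k_a · k_b) convolution over the non-zero entries of sparse COO tensors.

```python
import torch

def bind_dense_fft(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Circular convolution of two dense real vectors in O(n log n) via the FFT."""
    return torch.fft.irfft(torch.fft.rfft(a) * torch.fft.rfft(b), n=a.shape[-1])

def bind_sparse_naive(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Circular convolution of two 1-D sparse COO vectors in O(k_a * k_b)."""
    a, b = a.coalesce(), b.coalesce()
    n = a.shape[0]
    ia, va = a.indices()[0], a.values()
    ib, vb = b.indices()[0], b.values()
    # Each pair of non-zeros (i, j) contributes va_i * vb_j at index (i + j) mod n.
    out_idx = (ia.unsqueeze(1) + ib.unsqueeze(0)).remainder(n).reshape(1, -1)
    out_val = (va.unsqueeze(1) * vb.unsqueeze(0)).reshape(-1)
    # coalesce() sums the values at duplicate indices, completing the convolution.
    return torch.sparse_coo_tensor(out_idx, out_val, (n,)).coalesce()
```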
I think it is worth trying both ways and perhaps giving users an option to convert between them if it is indeed the case that one is more efficient than the other in different settings. A good first step, before starting the implementation in Torchhd as a new VSATensor, would be to benchmark the two implementations in isolation. For this, you can start with the implementations of the random, bundle, and bind methods.
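As a starting point, something like the following micro-benchmark harness could be used to time the two variants in isolation (the helper names and parameters here are made up for illustration):

```python
import time
import torch

def benchmark(fn, *args, repeats: int = 100) -> float:
    """Return the mean wall-clock time of fn(*args) over `repeats` runs."""
    fn(*args)  # warm-up run
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

n, k = 10_000, 100  # dimensionality and number of non-zero entries
idx = torch.randperm(n)[:k].unsqueeze(0)                       # k random positions
vals = torch.randint(0, 2, (k,), dtype=torch.float32) * 2 - 1  # random ±1 values
sparse = torch.sparse_coo_tensor(idx, vals, (n,)).coalesce()
dense = sparse.to_dense()

print("dense bundle :", benchmark(torch.add, dense, dense))
print("sparse bundle:", benchmark(torch.add, sparse, sparse))
```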
Here are some example PRs to help you get started:
If you have any questions, don't hesitate to ask.
Sounds like an interesting problem! I'll have a go.
Could it be useful to add conversions to and from non-binary sparse arrays either way?
Hi @mikeheddes,
So I've been having a bit of a think about this, and I see two routes we could go down for sparsifying the arrays of hypervectors:

1) Treat -1 as implicit (the same way 0 is implicit in binary sparse arrays) and use the COO, CSR, or CSC formats; in this case we could use the torch.sparse implementations (sketched below).
2) Use run-length encoding (the same compression format that Parquet uses); then we wouldn't need to treat anything as implicit.
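Here's a rough sketch of route 1, assuming we piggyback on torch.sparse, which treats 0 as the implicit value: store the {0, 1} indicator of the +1 positions and map back with x = 2m - 1 (the names are illustrative, not Torchhd API):

```python
import torch

def to_implicit_minus_one(x: torch.Tensor) -> torch.Tensor:
    """Encode a dense ±1 vector as a sparse COO indicator of its +1 entries."""
    return ((x + 1) / 2).to_sparse()

def from_implicit_minus_one(m: torch.Tensor) -> torch.Tensor:
    """Decode the sparse indicator back to a dense ±1 vector."""
    return 2 * m.to_dense() - 1

x = torch.tensor([-1.0, 1.0, -1.0, -1.0, 1.0])
m = to_implicit_minus_one(x)  # stores only indices {1, 4}
assert torch.equal(from_implicit_minus_one(m), x)
```

One thing to watch with this encoding: multiplying two mostly-(-1) vectors element-wise gives a mostly-(+1) result, so the indicator of a bound vector can become dense even when both operands are sparse.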
For the MAP operations this should be easy enough with torch.sparse. Permutation is slightly annoying, however, because torch.sparse doesn't let you modify indices in place, so we'd have to recreate the sparse tensor object each time if we went this route.
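Rebuilding the tensor is cheap to express, though; a sketch of what that could look like (illustrative, not Torchhd API):

```python
import torch

def permute_sparse(x: torch.Tensor, shifts: int = 1) -> torch.Tensor:
    """Equivalent of torch.roll(dense, shifts) for a 1-D sparse COO tensor."""
    x = x.coalesce()
    n = x.shape[0]
    # Shift the stored indices modulo n and rebuild, since in-place edits
    # of a sparse tensor's indices are not supported.
    new_indices = (x.indices() + shifts).remainder(n)
    return torch.sparse_coo_tensor(new_indices, x.values(), x.shape)
```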
I still need to do some benchmarks on circular convolution, but I guess the big-O cost would mostly depend on how much compression we get; we would probably need to implement some thinning to compress further.
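For what it's worth, here is one naive thinning step for illustration, keeping only the k largest-magnitude entries after a bundle; a real design would want a more principled scheme:

```python
import torch

def thin(x: torch.Tensor, k: int) -> torch.Tensor:
    """Zero out all but the k largest-magnitude entries of a sparse COO vector."""
    x = x.coalesce()
    vals, idx = x.values(), x.indices()
    # Select the positions of the k largest-magnitude stored values.
    keep = vals.abs().topk(min(k, vals.numel())).indices
    return torch.sparse_coo_tensor(idx[:, keep], vals[keep], x.shape)
```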
A follow-up feature to the support of Binary Sparse Block Codes. See the discussion in #146.