Kugelstadt opened this issue 4 years ago (status: Open)
Read and write accesses are much slower compared to a dense tensor (CPU backend).
Hi @Kugelstadt, unfortunately there is a somewhat large overhead for sparse tensors like dynamic and bitmasked. Internally, Taichi has to pre-run a series of steps in order to correctly generate the sparsity info at each hierarchy of the structure node tree. That is, for each level in the tree, it needs to compute which elements are activated, so that the loop at the leaf node (for i in x) only covers the activated indices. My personal impression is that the sparsity feature is beneficial only when the percentage of elements being activated is really small. Hopefully @yuanming-hu can add more comments on this.
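For reference, here is a minimal sketch of the kind of loop being discussed (the field name, sizes, and the use of the current ti.field API are just my assumptions for illustration): a struct-for over a bitmasked tensor only visits activated elements, and tracking that activation state is where the overhead comes from.

```python
import taichi as ti

ti.init(arch=ti.cpu)

n = 16
x = ti.field(ti.f32)
# Bitmasked node: each element carries an activation bit.
ti.root.bitmasked(ti.i, n).place(x)

@ti.kernel
def activate_a_few():
    # Writing to an element activates it.
    x[3] = 1.0
    x[7] = 2.0

@ti.kernel
def loop_over_active():
    # The struct-for only visits activated elements (3 and 7 here);
    # figuring out which elements are active is the extra bookkeeping
    # described above.
    for i in x:
        print(i, x[i])

activate_a_few()
loop_over_active()
```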
Please add some documentation on the dynamic tensors. Are they implemented as lists, as the example suggests, or are they dynamic arrays as in the Taichi paper?
I used them as demonstrated in the lists example to store particle positions: ti.root.dynamic(ti.k, n_particles).place(x)
Read and write accesses are much slower compared to a dense tensor (CPU backend). For dynamic arrays I would expect no difference from a dense tensor as long as the size does not change.
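To make the comparison concrete, here is a minimal sketch of the two layouts (field names, the particle count, and the use of the current ti.field API are assumptions for illustration, not the original benchmark):

```python
import taichi as ti

ti.init(arch=ti.cpu)

n_particles = 8192

# Dense layout: plain contiguous storage, no activation tracking.
x_dense = ti.field(ti.f32)
ti.root.dense(ti.i, n_particles).place(x_dense)

# Dynamic layout, as in the lists example.
x_dyn = ti.field(ti.f32)
ti.root.dynamic(ti.i, n_particles).place(x_dyn)

@ti.kernel
def fill():
    for i in range(n_particles):
        x_dense[i] = i * 0.5
        # Writing to an index of a dynamic field activates it.
        x_dyn[i] = i * 0.5

@ti.kernel
def sum_dense() -> ti.f32:
    s = 0.0
    for i in x_dense:
        s += x_dense[i]
    return s

@ti.kernel
def sum_dynamic() -> ti.f32:
    s = 0.0
    # Even when the length never changes, this struct-for first has to
    # determine which entries are active, which is the extra cost the
    # dense version does not pay.
    for i in x_dyn:
        s += x_dyn[i]
    return s

fill()
print(sum_dense(), sum_dynamic())
```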