taichi-dev / taichi

Productive, portable, and performant GPU programming in Python.
https://taichi-lang.org
Apache License 2.0

Documentation of dynamic tensors #896

Open · Kugelstadt opened this issue 4 years ago

Kugelstadt commented 4 years ago

Please add some documentation on dynamic tensors. Are they implemented as lists, as the example suggests, or as dynamic arrays, as in the Taichi paper?

I used them as demonstrated in the lists example to store particle positions: `ti.root.dynamic(ti.k, n_particles).place(x)`

Read and write accesses are much slower than for a dense tensor (CPU backend). For dynamic arrays I would expect no difference from a dense tensor as long as the size does not change.
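A minimal sketch of the setup being compared, assuming the `ti.append`/struct-for intrinsics from the lists example; `n_particles`, the stored values, and the use of `ti.i` (instead of `ti.k`) for a 1-D list are illustrative:

```python
import taichi as ti

ti.init(arch=ti.cpu)

n_particles = 8192
x = ti.field(ti.f32)                           # one coordinate per particle (illustrative)
x_dense = ti.field(ti.f32, shape=n_particles)  # dense counterpart for comparison

ti.root.dynamic(ti.i, n_particles).place(x)    # dynamic SNode with capacity n_particles

@ti.kernel
def fill_dynamic():
    for i in range(n_particles):
        ti.append(x.parent(), [], 0.0)         # activates and appends one element

@ti.kernel
def step_dynamic():
    for i in x:                                # struct-for over activated elements
        x[i] += 1.0

@ti.kernel
def step_dense():
    for i in x_dense:
        x_dense[i] += 1.0

fill_dynamic()
step_dynamic()  # noticeably slower than step_dense() on the CPU backend
step_dense()
```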

k-ye commented 4 years ago

> Read and write accesses are much slower than for a dense tensor (CPU backend).

Hi @Kugelstadt, unfortunately there is a fairly large overhead for sparse tensors such as dynamic and bitmasked. Internally, Taichi has to pre-run a series of steps to generate the sparsity information at each level of the structural node (SNode) tree. That is, for each level in the tree, it computes which elements are activated, so that a struct-for loop at the leaf node (`for i in x`) covers only the activated indices. My personal impression is that the sparsity feature pays off only when the fraction of elements being activated is really small. Hopefully @yuanming-hu can add more comments on this.
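To illustrate the point, here is a hedged sketch using a bitmasked SNode (the same applies to dynamic); the sizes and activation pattern are made up. The struct-for visits only activated elements, but Taichi must first build the list of active indices at each level of the tree, which costs time even when every element is active:

```python
import taichi as ti

ti.init(arch=ti.cpu)

n = 1 << 20
x = ti.field(ti.f32)
ti.root.bitmasked(ti.i, n).place(x)  # sparse: elements start deactivated

@ti.kernel
def activate_few():
    for i in range(n // 1024):
        x[i * 1024] = 1.0            # writing to an element activates it

@ti.kernel
def sum_active() -> ti.f32:
    s = 0.0
    for i in x:                      # covers activated indices only, after the
        s += x[i]                    # pre-pass that lists active elements per level
    return s

activate_few()
print(sum_active())                  # 1024.0: only ~0.1% of the elements were visited
```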