Open xuhao1 opened 2 years ago
I found that when the same kernel is passed a ScalarField/VectorField (sparse) of the same size but from a different instance (address), Taichi recompiles the kernel, which hurts performance. It looks like Taichi uses the pointer of the ti.template() argument as the hash, so whenever the address changes it recompiles the kernel. For example, I wrap an algorithm and ScalarField data in a class and create multiple instances of that class: https://github.com/taichi-dev/taichi/issues/5376. What I want is to avoid JIT compilation (and also reloading the cache from disk) when calling a kernel with arguments of the same size/datatype.
Personally, I think this feature is very important because people want to use Taichi more flexibly. It is also essential for using Taichi in an OOP style (recompiling every time a class is instantiated is too slow!). For me, it is necessary for implementing the submap feature in TaichiSLAM: https://github.com/taichi-dev/taichi/issues/5380
Sample code:
```python
import taichi as ti

ti.init()

@ti.kernel
def test(x: ti.template()):
    # Do something with x
    pass

x = ti.field(dtype=ti.i32)
y = ti.field(dtype=ti.i32)
B0 = ti.root.pointer(ti.ijk, (3, 1, 1)).dense(ti.ijk, (1, 2, 2))
B0.place(x)
B1 = ti.root.pointer(ti.ijk, (3, 1, 1)).dense(ti.ijk, (1, 2, 2))
B1.place(y)

test(x)
# When calling test(y), Taichi recompiles the kernel; this should be avoided.
test(y)
```
```python
@ti.data_oriented
class AClass:
    def __init__(self) -> None:
        x = ti.field(dtype=ti.i32)
        B0 = ti.root.pointer(ti.ijk, (3, 1, 1)).dense(ti.ijk, (1, 2, 2))
        B0.place(x)
        self.x = x
        self.B0 = B0

    @ti.kernel
    def work(self):
        # do something with self.x
        self.x[0, 1, 2] = 3

a = AClass()
a.work()
b = AClass()
# When calling b.work(), Taichi recompiles the kernel; this should be avoided.
b.work()
```
Describe the bug I found that instantiating a Taichi class (a Python class decorated with @ti.data_oriented) always triggers JIT compilation, and performance is low even with offline_cache=True.
To Reproduce Use TaichiSLAM
Log/Screenshots
Additional comments Is it possible to skip JIT compilation when instantiating the same class?