Open Dinngger opened 6 months ago
Another bug found.
import taichi as ti

ti.init(ti.cuda)

@ti.kernel
def mytest():
    for _ in range(1):
        x = ti.Vector.zero(ti.f32, 1)
        x[0] = 1
        res = ti.Vector.zero(ti.f32, 1)
        for i in range(1):
            res[i] = 2
        x = res  # reassign the whole local vector
        print(x)
        print(x[0])

mytest()
The output is:
$ python3 ti_vec_func2.py
[Taichi] version 1.8.0, llvm 15.0.4, commit 0da68467, linux, python 3.8.10
[Taichi] Starting on arch=cuda
[2.000000]
1.000000
I also found another puzzling behavior of Taichi:
import taichi as ti
import numpy as np

ti.init(arch=ti.gpu)

num = int(1e9)

@ti.kernel
def speed_test() -> ti.float32:
    ti.loop_config(serialize=True)  # request a serial loop
    s2 = 0.0
    for i in range(num):
        s2 += 1.0
    return s2

print(speed_test())

# Sum the same number of float32 ones with NumPy for comparison
a = np.ones(num, dtype=np.float32)
res = np.array(0.0, dtype=np.float32)
a.sum(out=res)
print(res)
The output is:
[Taichi] version 1.7.1, llvm 15.0.4, commit 0f143b2f, linux, python 3.11.9
[Taichi] Starting on arch=cuda
536870912.0
1000000000.0
@Dinngger For your original post, that case is not a bug. Kernel parameters are constant; we supported the syntax because some people relied on the C-like behavior, but the results only persist within a single offload (e.g. within a single parallel for loop).
We should probably make it a warning.
For the second case, weird indeed. I would guess it is a bug where load-to-store forwarding does not detect a modification.
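If rebinding the local vector is what trips this up, a possible workaround is to copy the elements instead of rebinding the name. The sketch below is just for illustration (the kernel name mytest_copy is mine, and I have not verified it against this exact Taichi build):

import taichi as ti

ti.init(ti.cuda)

@ti.kernel
def mytest_copy():
    for _ in range(1):
        x = ti.Vector.zero(ti.f32, 1)
        x[0] = 1
        res = ti.Vector.zero(ti.f32, 1)
        for i in range(1):
            res[i] = 2
        # Copy element-wise instead of rebinding x to res,
        # so later reads of x[0] see the updated storage.
        for i in ti.static(range(1)):
            x[i] = res[i]
        print(x)
        print(x[0])

mytest_copy()

With the element-wise copy, print(x) and print(x[0]) should read the same storage and therefore agree.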
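As a side note on the magnitudes (my own back-of-the-envelope check, not taken from the logs above): a strictly serial float32 accumulation of 1.0 saturates at 2**24 = 16777216, because beyond that adding 1.0 is rounded away, while NumPy reaches the exact 1e9 because, as far as I know, ndarray.sum() uses pairwise summation. The reported 536870912.0 (= 2**29) matches neither pattern, which fits the suspicion that the generated code is not doing a plain serial accumulation:

import numpy as np

# float32 has a 24-bit significand: once the running sum reaches 2**24,
# adding 1.0 is rounded away and the sum stops growing.
x = np.float32(2.0 ** 24)        # 16777216.0
print(x + np.float32(1.0) == x)  # True

# NumPy's pairwise summation keeps partial sums small enough that summing
# 1e9 float32 ones still gives the exact 1000000000.0 reported above.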
Describe the bug
Different add behavior in ti.func and ti.kernel

To Reproduce

Log/Screenshots

Additional comments
None