Open SerodioJ opened 2 months ago
Hi @thaisacs, thanks for pointing out this issue with the script. I originally ran it in a Python notebook and renamed some variables before posting it here (data_tvm -> cpu_input) for better readability.
Regarding the different precisions between architectures: I understand they exist, and at first I thought they were the root cause, but the divergence persists even if NumPy is replaced with CuPy, and the resulting behaviour close to PI/2 is extremely problematic.
When generating a CUDA source file, TVM prints floating-point values as I listed in my previous comment; it treats 64-bit and 32-bit values the same way (scientific notation at the default precision). The rounding this introduces may not matter much for 32-bit values, but it is far from ideal for 64-bit values (3.141592653589793 -> 3.141593e+00). A small standalone illustration of the printing behaviour follows.
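Here is a minimal C++ sketch, independent of TVM, showing how the default stream precision collapses a double printed in scientific notation, and how requesting max_digits10 digits keeps a literal that round-trips to the same value:

```cpp
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

int main() {
  const double pi = 3.141592653589793;  // closest double to PI

  // Default ostream precision is 6 significant digits, so the double
  // collapses to "3.141593e+00" in scientific notation.
  std::ostringstream lossy;
  lossy << std::scientific << pi;

  // max_digits10 (17 for IEEE-754 double) guarantees the printed literal
  // parses back to exactly the same double value.
  std::ostringstream exact;
  exact << std::scientific
        << std::setprecision(std::numeric_limits<double>::max_digits10) << pi;

  std::cout << "default precision: " << lossy.str() << "\n"
            << "max_digits10:      " << exact.str() << "\n";
  return 0;
}
```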
I adapted this part of the codegen for CUDA and the results are now much closer; the issue for values close to PI/2 is gone. The changes are here.
I have been working with TVM + Ansor (auto-scheduler) to generate code for a set of operators for both CPU (LLVM backend) and GPU (CUDA backend). The operators use trigonometric functions in some steps, and I set the value of PI with pi_const = te.const(np.pi, X.dtype). One thing I noticed was that CPU and GPU results were diverging. I started checking what could be the source of the issue in my code and found that COS and SIN were yielding different values, which led me to believe it was a problem in the scheduling or code generation steps.
To check if the schedule exploration with Ansor was in some way causing this, I tested similar operators with AutoTVM, and the same problem was evident.
The only thing left to check was the code generation pipeline, so I went through the codegen source code and found what I believe is the root cause of this behavior. When generating CUDA code, FloatImm nodes are handled as in the snippet extracted from codegen_cuda.cc (a paraphrased sketch is given below). Float32 and Float64 (double) are treated the same way, so when the source code is generated a value such as 3.141592653589793 is reduced to 3.141593e+00. This precision loss from the string conversion during CUDA source generation is what leads to the problem I am having. I tried changing the case 64 rule to print the full double precision (second function in the sketch below), and the results start to converge.
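The FloatImm printing in codegen_cuda.cc looks roughly like the sketch below; this is paraphrased rather than copied from upstream, so names and the surrounding inf/nan handling may differ. The second function shows the kind of change to the case 64 rule that restores the precision:

```cpp
#include <iomanip>
#include <limits>
#include <sstream>
#include <string>

// Rough paraphrase of the current behaviour: case 64 falls through into
// case 32, so both widths are formatted at the default 6-digit precision.
std::string PrintFloatImm(double value, int bits) {
  std::ostringstream temp;
  switch (bits) {
    case 64:
    case 32: {
      temp << std::scientific << value;  // 3.141592653589793 -> "3.141593e+00"
      if (bits == 32) temp << 'f';       // float literals get the 'f' suffix
      break;
    }
    default:
      // 16-bit and other widths are handled separately in the real codegen.
      break;
  }
  return temp.str();
}

// Adapted case 64 rule along these lines: give doubles their own branch and
// request max_digits10 (17) significant digits so the emitted literal
// round-trips back to the original value.
std::string PrintFloatImm64(double value) {
  std::ostringstream temp;
  temp << std::scientific
       << std::setprecision(std::numeric_limits<double>::max_digits10)
       << value;                          // -> "3.1415926535897931e+00"
  return temp.str();
}
```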
I believe this issue also happens with other source-level backends such as C, but there is no issue with the LLVM backend (presumably because constants are handed to LLVM as numeric values rather than printed as text).
Expected behavior
COS and SIN values for CPU (LLVM) and GPU (CUDA) should closely match.
Actual behavior
Divergent values: only the CPU results match the NumPy ground truth. The output below can be obtained using the code listed in Steps to reproduce.
The main issue for my use case is the COS of PI/2, which results in a negative number. This value matches np.cos(3.141593/2), where 3.141593 is the value the float constant is rounded to when it is printed in scientific notation (3.141593e+00); see the quick check below.
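To make the sign flip concrete: 3.141593 overshoots the closest double to PI by roughly 3.5e-7, so 3.141593/2 lands about 1.7e-7 past PI/2, where the cosine is already negative. A quick standalone check (values here are assumed for illustration, not taken from the TVM output):

```cpp
#include <cmath>
#include <cstdio>

int main() {
  const double pi_exact   = 3.141592653589793;  // closest double to PI (np.pi)
  const double pi_printed = 3.141593;            // constant as emitted in the CUDA source

  // The full-precision half-PI gives a tiny positive cosine (~6e-17), while
  // the rounded constant overshoots PI/2 and the cosine turns negative (~-1.7e-7).
  std::printf("cos(pi_exact / 2)   = % .3e\n", std::cos(pi_exact / 2.0));
  std::printf("cos(pi_printed / 2) = % .3e\n", std::cos(pi_printed / 2.0));
  return 0;
}
```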
Steps to reproduce

Here are a couple of simplified modules that reproduce this issue.
The generated CUDA code is also listed below, which shows that 3.141592653589793 (np.pi) is being changed to 3.141593e+00.

Triage