[Open] Da1sypetals opened this issue 3 months ago
For Visual Studio, if some code compiles but gives an IntelliSense error, I use the IDE macro to hide the code segment:
#ifndef __INTELLISENSE__
// the code that causes the IntelliSense error
#endif
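A minimal sketch of how this looks in a .cu file (the intrinsic below is just an arbitrary placeholder for whatever code the IDE front end does not model; it is not taken from muda):

```cpp
__global__ void kernel(float* out)
{
#ifndef __INTELLISENSE__
    // nvcc compiles this fine; the guard only hides it from the IDE's analyzer
    out[threadIdx.x] = __expf(1.0f);
#endif
}
```

The real compiler never defines __INTELLISENSE__, so the code that actually gets built is unaffected.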
There is no exact specification for IDE IntelliSense, so the behavior of different IDEs is undefined, especially when coming across CUDA code.
About: "In theory, the functions implemented in type_traits are host functions, which are not supported to be called in device functions."
I think type_traits is just compile-time template deduction, so "device" or "host" makes no sense for it.
About: "In theory, the functions implemented in type_traits are host functions, which are not supported to be called in device function. " I think, the type_traits is just compile time template deduction, so,
device
orhost
make no sense.
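For example (a minimal sketch, not taken from muda), a standard trait is evaluated entirely by the compiler, so it can appear inside a __device__ function without any host function being called at run time:

```cpp
#include <type_traits>

// std::is_arithmetic_v<T> is a compile-time constant: nothing executes on
// the device, so the host/device distinction does not apply to it.
template <typename T>
__device__ T square(T x)
{
    static_assert(std::is_arithmetic_v<T>, "T must be an arithmetic type");
    return x * x;
}
```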
That explains it. Thanks. I used to think of type_traits as functions (maybe a bad habit carried over from Python) 😭
About: "There is no exact specification for IDE IntelliSense, so the behavior of different IDEs is undefined, especially when coming across CUDA code."
That could be some internal problem of CLion then. I will probably contact JetBrains support. Thanks!
These days I tried to set up a project using muda in CLion, with the toolchain running in Docker. However, CLion's IntelliSense refuses to work. I found that this is because the static code analysis chooses the last branch of a conditional compilation (if constexpr), which results in a failing assertion and thus an error. But the code compiles just fine with the same toolchain set up in CLion.
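Roughly the shape of code that trips it (a hypothetical reduction, not the actual muda source):

```cpp
#include <type_traits>

template <typename>
inline constexpr bool always_false_v = false;

template <typename T>
__device__ void check()
{
    if constexpr (std::is_floating_point_v<T>)
    {
        // the branch nvcc actually instantiates for float/double
    }
    else
    {
        // discarded by the compiler for float/double, but an analyzer that
        // evaluates the last branch anyway reports this assertion as failed
        static_assert(always_false_v<T>, "unsupported type");
    }
}

__global__ void test_kernel() { check<float>(); }
```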
In theory, the functions implemented in type_traits are host functions, which are not supported to be called in device functions. I also searched the documentation page related to type traits (https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#extended-lambda-type-traits) and did not find this behavior documented.
My questions:
Thanks a lot in advance.