develooper1994 closed this issue 6 years ago
What exactly is the issue? I honestly do not understand what you are asking for.
Auto-parallelization when compiling to an OpenCL, CUDA, or other device backend.
You can try cudanim, which was developed for the high-energy physics framework QEX.
Numba is slower than Arraymancer. We don't need JIT when we have static compilation.
Regarding ROSE, is it used in production? The project seems much less maintained, structured, and optimized than Halide (used for computational photography, and by Facebook for deep learning, for example), and it does not support an ARM backend.
Like for https://github.com/nim-lang/Nim/issues/8331, ROSE support should live in a separate, independent repo.
I don't want to use any CUDA or OpenCL library. I just want to tell the compiler: "hey, come here and transcompile my code to device kernels as much as possible". The device could be a CPU, GPU, FPGA, tensor processor, DSP, ... There are plenty of real-world uses of this approach, like C/C++ -> VHDL.
If I recall correctly, there was a discussion about having OpenCL as a compilation target for Nim (possibly in one of the summer of code proposals?), but it was never implemented. Even if this could ever work, many features of Nim (mainly about heap allocations, and hence much of the standard library) would not work in such a target.
But Nim could still be useful compared to raw C thanks to its metaprogramming capabilities.
If I recall correctly, there was some attempt to generate vertex and pixel shader code using macros; the same approach would probably work for CUDA or OpenCL kernels.
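To make the macro-based idea concrete, here is a minimal sketch of generating kernel source at compile time. `openclMapKernel` is a made-up helper, not an existing API; a real implementation would walk a Nim AST with a macro rather than splicing strings, and the resulting source would be fed to an OpenCL binding at runtime.

```nim
import std/strutils

proc openclMapKernel(name, expr: string): string =
  ## Builds the source text of a 1-D element-wise OpenCL kernel.
  result = """
__kernel void $1(__global const float* a, __global float* res) {
  int i = get_global_id(0);
  res[i] = $2;
}""" % [name, expr]

# Evaluated at compile time, so the kernel string is baked into the binary.
const squareKernelSrc = openclMapKernel("square", "a[i] * a[i]")

when isMainModule:
  # At runtime this string would be handed to an OpenCL binding,
  # e.g. via clCreateProgramWithSource.
  echo squareKernelSrc
```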
Yes, I am asking for OpenCL as a compilation target for Nim: allocate buffers on the device instead of in RAM, and, after some analysis, compile pure functions (or otherwise selected functions) to device kernels. OpenCL is much more portable than CUDA or VHDL.
1) http://rosecompiler.org/ The ROSE compiler is a source-to-source translator: it takes C/C++ code as input and translates it into device kernels. I tried some mathematical formulas and it works. I am also a user of the MATLAB CUDA compiler. ROSE can translate CUDA to OpenCL in some cases.
2) JIT kernel compilation inspired by Python's Numba and Accelerate, and by OpenHMPP. I just want a pragma-like syntax to parallelize on the device (see the sketch below). I am still learning Nim.
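For illustration only, here is a rough sketch of what that pragma-like surface syntax could look like in Nim. The `deviceKernel` annotation below is hypothetical and does nothing by itself; it is declared only so the code compiles, and a macro or compiler pass would have to recognize it and translate the body into a device kernel.

```nim
# Hypothetical annotation: a no-op user-defined pragma, declared so this compiles.
# A macro or compiler pass would need to detect it and emit a device kernel.
template deviceKernel() {.pragma.}

proc saxpy(a: float32, x: seq[float32], y: var seq[float32]) {.deviceKernel.} =
  ## Element-wise y = a*x + y: the kind of loop that could map to a kernel.
  for i in 0 ..< x.len:
    y[i] = a * x[i] + y[i]

when isMainModule:
  var y = @[1.0'f32, 1.0'f32, 1.0'f32]
  saxpy(2.0'f32, @[1.0'f32, 2.0'f32, 3.0'f32], y)
  echo y   # @[3.0, 5.0, 7.0] -- still runs on the CPU, of course
```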