nim-lang / Nim

Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula. Its design focuses on efficiency, expressiveness, and elegance (in that order of priority).
https://nim-lang.org

ROSE and JIT kernel compiler support #8330

Closed develooper1994 closed 6 years ago

develooper1994 commented 6 years ago

1. ROSE (http://rosecompiler.org/) is a source-to-source translator: it takes C/C++ code as input and translates it into device kernels. I tried some mathematical formulas and it works. I am also a user of the MATLAB CUDA compiler. ROSE can translate CUDA to OpenCL in some cases.
2. JIT kernel compilation, inspired by Python's Numba and Accelerate and by OpenHMPP. I just want a pragma-like syntax to parallelize on the device (see the sketch below). I am still learning Nim.
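For illustration, the kind of surface syntax meant in point 2 could look roughly like the sketch below. This is purely hypothetical: `parallelFor` is not an existing Nim or library construct, and here it just expands to an ordinary CPU loop; the request is for something of this shape that the compiler could lower to a device kernel instead.

```nim
# Hypothetical sketch: `parallelFor` does not exist anywhere; today it simply
# expands to a plain CPU loop. The idea is that a compiler backend could lower
# the same construct to an OpenCL/CUDA kernel launch instead.
template parallelFor(idx: untyped; lo, hi: int; body: untyped) =
  for idx in lo ..< hi:   # stand-in for a device launch over [lo, hi)
    body

proc saxpy(a: float32; x: seq[float32]; y: var seq[float32]) =
  parallelFor(i, 0, x.len):
    y[i] = a * x[i] + y[i]

var
  x = newSeq[float32](4)
  y = newSeq[float32](4)
for i in 0 ..< x.len:
  x[i] = float32(i)
  y[i] = 1.0'f32

saxpy(2.0'f32, x, y)
echo y   # @[1.0, 3.0, 5.0, 7.0]
```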

andreaferretti commented 6 years ago

What exactly is the issue? I honestly do not understand what you are asking for.

develooper1994 commented 6 years ago

Auto-parallelization, by compiling to an OpenCL, CUDA or other device backend.

mratsim commented 6 years ago
  1. You can try cudanim, which was developed for the High-Energy Physics framework QEX.

  2. Numba is slower than Arraymancer. We don't need JIT when we have static compilation.

Regarding ROSE: is it used in production? The project seems much less maintained, structured and optimized than Halide (used for computational photography and by Facebook for deep learning, for example), and it does not support an ARM backend.

As with https://github.com/nim-lang/Nim/issues/8331, ROSE support should be done in a separate, independent repo.

develooper1994 commented 6 years ago

I don't want to use any CUDA or OpenCL library. I just want to tell the compiler: "hey, come here and transcompile my code to a device kernel as much as possible". The device could be a CPU, GPU, FPGA, tensor processor, DSP, ... There are lots of real-world uses for this, like C/C++ -> VHDL.

andreaferretti commented 6 years ago

If I recall correctly, there was a discussion about having OpenCL as a compilation target for Nim (possibly in one of the summer of code proposals?), but it was never implemented. Even if this could ever work, many features of Nim (mainly about heap allocations, and hence much of the standard library) would not work in such a target.
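To make that concrete, here is a plain-Nim illustration (nothing OpenCL-specific, names invented for the example): only code shaped like the first proc below, fixed-layout value types and no allocations, could plausibly be lowered to a device kernel, while anything like the second depends on the GC heap via seq and string and could not.

```nim
# Heap-free: only value types and openArray views, the shape of code that
# could in principle be mapped to a device kernel.
proc axpyKernelLike(a: float32; x: openArray[float32];
                    y: var openArray[float32]) =
  for i in 0 ..< x.len:
    y[i] = a * x[i] + y[i]

# GC-dependent: allocates a seq and strings on the heap, so it relies on Nim's
# runtime and could not be translated to such a target.
proc notKernelFriendly(n: int): seq[string] =
  result = newSeq[string](n)
  for i in 0 ..< n:
    result[i] = "item " & $i

var
  xs = [1.0'f32, 2.0'f32, 3.0'f32]
  ys = [1.0'f32, 1.0'f32, 1.0'f32]
axpyKernelLike(2.0'f32, xs, ys)
echo ys                     # [3.0, 5.0, 7.0]
echo notKernelFriendly(2)   # @["item 0", "item 1"]
```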

But Nim could still be useful compared to raw C thanks to its metaprogramming capabilities.

If I recall correctly, there was also some attempt to generate code for vertex and pixel shaders using macros; the same approach would probably work for CUDA or OpenCL kernels.
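As a proof of concept of that macro approach, here is a toy sketch (nothing like this exists in the compiler or a library; `genMapKernel` and `exprToC` are invented for this example): a macro that translates a single arithmetic formula over float arrays into the source text of an OpenCL C map kernel at compile time. A real kernel DSL would also need statements, control flow, type checking and a host-side runtime.

```nim
import macros
from strutils import join

# Translate a restricted Nim expression (identifiers, float/int literals,
# binary infix operators) into OpenCL C, collecting the input array names.
proc exprToC(n: NimNode; inputs: var seq[string]): string =
  case n.kind
  of nnkIdent:
    let name = $n
    if name notin inputs: inputs.add name
    result = name & "[i]"                     # element-wise access
  of nnkFloatLit..nnkFloat64Lit:
    result = $n.floatVal & "f"
  of nnkIntLit:
    result = $n.intVal
  of nnkInfix:
    result = "(" & exprToC(n[1], inputs) & " " & $n[0] & " " &
             exprToC(n[2], inputs) & ")"
  of nnkPar:
    result = exprToC(n[0], inputs)
  else:
    error("toy kernel generator does not handle " & $n.kind, n)

# Produce the OpenCL C source of a 1D map kernel computing `formula`
# element-wise; every name used in the formula becomes an input buffer.
macro genMapKernel(name: static[string]; formula: untyped): string =
  var inputs: seq[string]
  let rhs = exprToC(formula, inputs)
  var args: seq[string]
  for inp in inputs:
    args.add "__global const float* " & inp
  args.add "__global float* res"
  result = newLit("__kernel void " & name & "(" & args.join(", ") & ") {\n" &
                  "  size_t i = get_global_id(0);\n" &
                  "  res[i] = " & rhs & ";\n}\n")

# `x` and `y` are just names inside the formula, not Nim variables.
const saxpySrc = genMapKernel("saxpy", 2.0 * x + y)
echo saxpySrc
# __kernel void saxpy(__global const float* x, __global const float* y, __global float* res) {
#   size_t i = get_global_id(0);
#   res[i] = ((2.0f * x[i]) + y[i]);
# }
```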

develooper1994 commented 6 years ago

Yes, I am asking for OpenCL as a compilation target for Nim: allocate buffers on the device instead of in RAM, and compile pure functions (or functions selected after some analysis) to device kernels. OpenCL is much more portable than CUDA or VHDL.