Open fermicro opened 8 years ago
Hi, most of it, but not all. What specific operations or features are you looking for?
I don't exactly know yet; I'll probably get more hands-on with the code next year. For now I'm researching a viable tool.
Do you have any prediction of when your publication will be ready? You'd probably get 1 citation :)
Could you give me examples of situations or operations where I could not use Urutu?
Also, you commented on Twitter that Urutu's OpenCL backend is a bit stale; is that right?
Sievers
Hi, I may not publish it (maybe ever). If you are interested, you can contribute to the OpenCL backend and join as a co-author. By "stale" I meant that most of the ideas are implemented only in the CUDA backend.
For this year and the first part of next, I'll be focused on another work, which is a bit more high-level for now, but I'm interested.
Could you give me examples of situations or operations where I could not use Urutu with OpenCL?
Hi, it's mostly images, graphics interop, samplers, and other fancy stuff.
Hey, thanks for your support.
I'll mostly be doing calculations, with no graphics for now, so I believe I'd be good with Urutu CL.
Do you have an example with parallelism?
Hi, here are the samples (I haven't tested them in a while) [1]. I am planning a new system to do my GPU development better.

[1] https://github.com/urutu/Urutu/tree/master/samples
Hello again!
I am mostly using your poster from the GPU Conference and the sample codes as references. If there is anything else I could study to learn more, please point me to it!
Would you mind clarifying a few things for me, with regard to your first sample code:
```python
@Urutu("CL")
def divmul(a, b, c, d):
    __global is x, y
    x = a[0:100]
    y = b[0:100]
    t, u, v, w = 10, 10.0, 'opencl', "open.cl"
    c[tx] = x[tx] / y[tx]
    d[tx] = x[tx] * y[tx]
    return c, d
```
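For reference, here is what I understand the sample to compute, expressed in plain NumPy (my own paraphrase, not Urutu code, and the input arrays are my own illustration): `c` and `d` end up as the element-wise quotient and product of the first 100 elements of `a` and `b`.

```python
import numpy as np

# Example inputs (hypothetical; the real sample passes in its own arrays)
a = np.linspace(1.0, 100.0, 100)   # 1.0, 2.0, ..., 100.0
b = np.full(100, 2.0)

x, y = a[0:100], b[0:100]   # same slicing as the kernel's x = a[0:100]
c = x / y                   # element-wise: c[tx] = x[tx] / y[tx] for every tx
d = x * y                   # element-wise: d[tx] = x[tx] * y[tx] for every tx
```

Here NumPy applies the operation to all 100 elements at once, where the kernel assigns one `tx` per element.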
- What do `t`, `u`, `v`, `w`, and `tx` stand for? In other samples `bx` or `Tid` are used; what do those stand for?
- Some samples use `@Urutu("CL")`, while others use `@Urutu("GPU")`. What exactly is their function, and which one should I use for running OpenCL code on a GPU?
- `def`, `for`, `if`, etc. are supported, which means I could do virtually anything I could ask for inside a decorated `@Urutu` function. Are loops and other instructions dynamically handled inside the library so that code is parallelized across the available cores? I have worked with parallelization in OpenMP and MPI, so I'd like to know whether Urutu code will be automatically parallelized onto the Compute Units or whether that should be set manually.

Thank you!
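To make the parallelization question concrete, here is a plain-Python emulation (my own hypothetical illustration, not Urutu's actual machinery) of how a per-work-item index like `tx` maps each "thread" to one array element; on a GPU, each iteration of the loop would run as an independent work-item rather than serially:

```python
def divmul_emulated(a, b, n):
    """Serial emulation of the divmul kernel: each value of tx
    plays the role of one GPU work-item / thread index."""
    c = [0.0] * n
    d = [0.0] * n
    for tx in range(n):   # on the device, these iterations run in parallel
        c[tx] = a[tx] / b[tx]
        d[tx] = a[tx] * b[tx]
    return c, d

# Example: four "work-items", one per element
c, d = divmul_emulated([2.0, 4.0, 6.0, 8.0], [2.0, 2.0, 2.0, 2.0], 4)
```

The question, then, is whether Urutu performs this mapping of loop iterations to work-items automatically or leaves it to the user.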
Sorry for the delayed response. Hope it helps.
Hello, I'm writing my undergraduate final paper with OpenCL and I'm very interested in using Urutu.
I saw that your paper is still being written!
Does Urutu with OpenCL support all the operations I would normally achieve with standard PyOpenCL?
Sievers