ThoughtWorksInc / Compute.scala

Scientific computing with N-dimensional arrays
Apache License 2.0

Scaladoc for methods in Tensor #132

Open Atry opened 6 years ago

Atry commented 6 years ago

The Scaladoc should be similar to numpy's API reference
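
For illustration, here is a hypothetical sketch of the style being asked for, modeled on a numpy API-reference entry (one-line summary, parameter descriptions, return value, and an example). The method name `permute` and its signature are placeholders, not the actual Tensor API:

    // Hypothetical sketch only: `permute` and its signature are placeholders,
    // not the real Tensor API. It illustrates the numpy-style Scaladoc
    // (summary, parameters, return value, example) this issue asks for.
    trait TensorScaladocSketch {

      /** Returns a view of this tensor with its dimensions reordered.
        *
        * Analogous to `numpy.transpose` in numpy's API reference.
        *
        * @param permutation the new order of the dimensions; must be a
        *                    permutation of `0 until shape.length`
        * @return a lazily evaluated tensor whose shape is reordered accordingly
        * @example {{{
        *   val matrix = Tensor(Seq(Seq(1.0f, 2.0f), Seq(3.0f, 4.0f)))
        *   val transposed = matrix.permute(Seq(1, 0))
        * }}}
        */
      def permute(permutation: Seq[Int]): TensorScaladocSketch
    }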

bmaso commented 4 years ago

I'm interested in GPU programming in Scala (specifically for speeding up a reaction-diffusion modelling package: https://darrenjw.wordpress.com/2019/01/22/stochastic-reaction-diffusion-modelling/).

I wouldn't mind contributing some documentation in order to get up to speed in Compute.scala.

Is this still a good package to use for GPU programming in Scala, or maybe there's a better one out there?

Also, how can I run the benchmarks locally?

Atry commented 4 years ago

Sorry for the late reply.

I have left ThoughtWorks, and I will not add new features to this project unless someone else contributes them. The problem with using it in your project is that it lacks high-level constructs: even matrix multiplication is not implemented in the library; instead, the benchmarks provide an example implementation. If you were otherwise going to use OpenCL or CUDA directly for your project, then Compute.scala could be an alternative, because it provides a thin framework that lets you write your own customized kernels in Scala with the help of a JIT. However, if you need higher-level constructs, then the Java bindings for BLAS / cuBLAS / PyTorch / TensorFlow might be what you are looking for.
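
To make the "thin framework" point concrete, here is a minimal sketch assuming the Tensor API shown in the project README (the com.thoughtworks.compute.cpu / com.thoughtworks.compute.gpu packages); operator names may differ from the current API. Element-wise expressions are built lazily and JIT-compiled into a single fused OpenCL kernel, while anything higher level, such as matrix multiplication, has to be composed from these primitives yourself (the benchmarks contain one such example implementation):

    // A sketch under the assumptions above, not an authoritative example.
    import com.thoughtworks.compute.cpu._ // or com.thoughtworks.compute.gpu._ to target a GPU

    object FusedKernelSketch {
      def main(args: Array[String]): Unit = {
        val x = Tensor(Seq(Seq(1.0f, 2.0f, 3.0f), Seq(4.0f, 5.0f, 6.0f)))
        val y = Tensor(Seq(Seq(6.0f, 5.0f, 4.0f), Seq(3.0f, 2.0f, 1.0f)))

        // Builds a lazy expression tree; no kernel has run yet.
        val z = x * y + x

        // Forcing the result (here via toString) JIT-compiles the whole
        // expression into one fused kernel and executes it.
        println(z)
      }
    }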

Atry commented 4 years ago

BTW: you can run the benchmarks with the following sbt commands:

sbt> project benchmarks
sbt> jmh:run
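
If you prefer a single non-interactive invocation from the shell, the equivalent batch-mode command should be (assuming the same benchmarks project name as above):

    sbt "project benchmarks" "jmh:run"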
