dlwh / puck

Puck is a lightning-fast parser for natural languages using GPUs
www.scalanlp.org
Apache License 2.0

It looks like puck depends on OpenCL and not CUDA, since you are referencing JavaCL? #3

Closed kk00ss closed 9 years ago

kk00ss commented 9 years ago

Please comment.

kk00ss commented 9 years ago

I see you use CLContext, which is specific to OpenCL.

dlwh commented 9 years ago

That's right, though it's heavily optimized for NVIDIA devices. It's known to run on Intel's implementation, but not particularly well. (I originally wanted to look at implementations for all three vendors.)
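
For reference, here is a minimal sketch (assuming JavaCL's standard API, which is what Puck builds on; the class and method names below are JavaCL's, nothing Puck-specific) that lists the OpenCL platforms and devices the runtime can see. Any NVIDIA, AMD, or Intel OpenCL driver shows up the same way:

```scala
import com.nativelibs4java.opencl._

object ListOpenCLDevices {
  def main(args: Array[String]): Unit = {
    // Every installed OpenCL driver (NVIDIA, AMD, Intel iGPU, ...) registers a platform.
    for (platform <- JavaCL.listPlatforms()) {
      println(s"Platform: ${platform.getName}")
      for (device <- platform.listAllDevices(true))
        println(s"  Device: ${device.getName}")
    }

    // This is the kind of handle Puck works against: an OpenCL CLContext,
    // not a CUDA context, so any OpenCL implementation can host it.
    val best: CLContext = JavaCL.createBestContext()
    println(s"Best context: ${best.getDevices.map(_.getName).mkString(", ")}")
    best.release()
  }
}
```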


kk00ss commented 9 years ago

But guys with Radeons may be able to run it, so it would be better to inform them that it depends on OpenCL, but that the kernel itself is NVIDIA-optimized. I'm saying that because I bought a new card to use with Puck, and spent quite a lot of time configuring CUDA on Linux. Could you please change the description?
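
A sketch of what that would look like in practice: binding a context to a specific non-NVIDIA GPU instead of whatever createBestContext picks. These are again standard JavaCL calls (an assumption about the library's API, not code from Puck), and the vendor-name filter is purely illustrative:

```scala
import com.nativelibs4java.opencl._

object ContextOnNonNvidiaGpu {
  def main(args: Array[String]): Unit = {
    // Pick the first GPU whose platform or device name mentions AMD or Intel.
    // The name filter is only illustrative; adjust it for your own hardware.
    val gpus = JavaCL.listPlatforms().flatMap(_.listGPUDevices(true))
    gpus.find { d =>
      val name = (d.getPlatform.getName + " " + d.getName).toLowerCase
      name.contains("amd") || name.contains("intel")
    } match {
      case Some(device) =>
        // An OpenCL context bound to that device. The kernels should still
        // compile here; they just won't benefit from NVIDIA-specific tuning.
        val context: CLContext = device.getPlatform.createContext(null, device)
        println(s"Created context on: ${device.getName}")
        context.release()
      case None =>
        println("No AMD/Intel OpenCL GPU found")
    }
  }
}
```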

dlwh commented 9 years ago

Fair enough.


kk00ss commented 9 years ago

It would be a better description of the project if you replaced: "It's (currently) designed for use with grammars trained with the Berkeley Parser and on NVIDIA cards." with: "It's (currently) designed for use with grammars trained with the Berkeley Parser, using the OpenCL API, but with kernels optimized for NVIDIA cards."

The original is confusing: people who could start using Puck with their iGPUs see it and think they need to buy an NVIDIA GPU. Thanks.

dlwh commented 9 years ago

I suppose, but it's likely to be no faster than the normal Berkeley Parser under those circumstances.
