slic3r / Slic3r

Open Source toolpath generator for 3D printers
https://slic3r.org/
GNU Affero General Public License v3.0

OpenCL / GPU slicing #985

Closed. thecrazy closed this issue 11 years ago.

thecrazy commented 11 years ago

In the race for the fastest slicer, has anyone thought of implementing OpenCL?

I know nothing about programming, but I do know that things like SETI@home, Folding@home, and Bitcoin mining are at least 10 times faster when running on the GPU. I mine bitcoins myself and it's WAY faster on the GPU, even compared to my quad-core (i5).

And the good thing is that every major graphic chip maker supports it these days:

AMD: http://developer.amd.com/tools/heterogeneous-computing/amd-accelerated-parallel-processing-app-sdk/#two

NVIDIA: https://developer.nvidia.com/opencl

Intel: http://software.intel.com/en-us/vcsource/tools/opencl-sdk

Caanon commented 11 years ago

The GPU sure does have some horsepower behind it, but as it isn't a general-purpose processor it has some significant limitations. The take-home message on the GPU is that if you can massage your algorithm to fit within the limitations of the GPU's architecture, then you'll likely see some speed-ups. That said, that's a pretty big "if".

In the case of slic3r, it leverages a library called Clipper [1] for most of the geometry processing, and most of the mathematical parts of that library would have to be ported to the GPU language of choice before slic3r could take advantage of it. That's not to say that it can't be done, but it would be a HUGE undertaking.
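To make the "massage your algorithm" point concrete, here's a rough sketch of the kind of data-parallel work that does map well to OpenCL: intersecting every triangle of the mesh with one horizontal slicing plane, one work-item per triangle. This is not Slic3r or Clipper code; the buffer layout and names (`verts`, `seg_out`, `seg_valid`) are made up purely for illustration.

```c
/* Illustrative OpenCL C kernel, NOT Slic3r code.
 * One work-item per triangle: if the triangle crosses the plane z = layer
 * height, emit the intersection segment (x1,y1,x2,y2) for that triangle.
 * verts holds 9 floats per triangle: x0,y0,z0, x1,y1,z1, x2,y2,z2. */
__kernel void slice_layer(__global const float *verts,
                          const float z,
                          __global float4 *seg_out,
                          __global int *seg_valid)
{
    const int t = (int)get_global_id(0);
    const __global float *v = verts + 9 * t;

    float2 pts[2];
    int n = 0;

    /* Walk the 3 edges; where an edge spans the plane, record the crossing point. */
    for (int i = 0; i < 3 && n < 2; ++i) {
        int j = (i + 1) % 3;
        float zi = v[3 * i + 2], zj = v[3 * j + 2];
        if ((zi <= z && zj > z) || (zj <= z && zi > z)) {
            float s = (z - zi) / (zj - zi);
            pts[n].x = v[3 * i + 0] + s * (v[3 * j + 0] - v[3 * i + 0]);
            pts[n].y = v[3 * i + 1] + s * (v[3 * j + 1] - v[3 * i + 1]);
            ++n;
        }
    }

    seg_valid[t] = (n == 2);
    seg_out[t] = (n == 2) ? (float4)(pts[0].x, pts[0].y, pts[1].x, pts[1].y)
                          : (float4)(0.0f);
}
```

The catch is that this per-triangle step is the easy, embarrassingly parallel part. The polygon clipping and offsetting that Clipper does afterwards is branchy and data-dependent, which is exactly the kind of work current GPUs handle poorly, and that's where most of the porting effort would go.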

In short: good idea, but probably not worth the hours to invest in it for the moment, as slic3r needs attention in other areas like its medial-axis branch and thin-wall handling.

[1] http://sourceforge.net/projects/polyclipping/

thecrazy commented 11 years ago

I had the feeling this would be a huge piece of work.

I completely agree that slic3r needs work in other areas before touching this.

Just wanted to make sure it was on the table as it could drastically improve slicing times one day.

mesheldrake commented 11 years ago

Thanks for suggesting this. There is lower hanging fruit, but at some point it might be worth considering for a few tasks.

machinekoder commented 10 years ago

I would also be interested in GPU slicing. We are currently working on ARM boards with very limited CPU power to control 3D printers. Using the GPU on these boards would almost certainly speed slicing up by more than 10 times. The current workaround is to use cloud processing, or simply to wait.