area515 / Photonic3D

Control software for resin 3D printers
http://photonic3d.com
GNU General Public License v3.0

Is nanoDLP right on those points? #141

Closed kloknibor closed 8 years ago

kloknibor commented 8 years ago

Hi!

This is just a curiosity question. On BuildYourOwnSLA, the maker of nanoDLP posted this:

nanoDLP is compiled for arm7 linux so would not run on linux x86-64. It is possible to compile it for other oses and architectures but some modules need rewrite to make them work.

Different applications have different focuses, beyond the general features which are easy to compare. What makes nanoDLP unique compared to other available solutions is also its biggest weakness.

Almost all of the other solutions run on desktop environments; some of them are cross-platform since they use Java or Qt/C++. But in contrast to them, nanoDLP does not run on a desktop environment. Some of its most critical parts talk directly with the firmware layer.

So as a result.

See: http://www.buildyourownsla.com/forum/viewtopic.php?f=3&t=3772&start=50#p14288

Is he right on the speed/synchronization/reliability points? For some reason I think he's exaggerating them...

(Don't worry I will stick to CWH either way! I'm totally happy with it)

jmkao commented 8 years ago

It's unlikely nanoDLP is that complicated a code base. If it were open source, someone could probably port it. But given the speed at which the author is putting out new features, what they're really saying is that they don't have time to port it themselves, which is probably true.

The speed and synchronization comments really don't apply to any of the current printers out there, but only to this hypothetical Carbon3D case. Since nobody seems to be able to source Teflon AF film, and those who have claimed to find distributors report exorbitant prices, I doubt this matters to anyone at this point.

However, in the future, should the capability appear, it seems unlikely that you could print so fast that 10ms layers would matter. The RPi has hardware accelerated video playback and can easily do 1080p at 30FPS, which would mean 33ms layers, and should be plenty fast. Perhaps in an early prototype phase, some people might try using image layer based slicers for continuous printing, but you would most likely be much better served by a different image handling pipeline. It would be easy for almost any system to delegate the rendering to the RPi's onboard capabilities and focus on filling the video playback pipeline from code.
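The frame-rate arithmetic above is easy to sanity-check. A minimal sketch (hypothetical helper names, not Photonic3D code) treating one video frame as one slice:

```java
// Back-of-envelope check: if every video frame is one slice, the display's
// frame rate and the slice thickness bound the continuous print speed.
public class FrameRateMath {
    // Time available per layer at a given frame rate, in milliseconds.
    static double layerTimeMs(double fps) {
        return 1000.0 / fps;
    }

    // Vertical print speed if every frame advances one slice of the
    // given thickness.
    static double printSpeedMmPerSec(double fps, double layerMicrons) {
        return fps * layerMicrons / 1000.0;
    }

    public static void main(String[] args) {
        System.out.println(layerTimeMs(30));              // ~33.3 ms per layer at 30 FPS
        System.out.println(printSpeedMmPerSec(30, 100));  // 3.0 mm/s at 100 micron slices
    }
}
```

So even plain 1080p30 playback would support 3 mm/s of continuous vertical travel at 100 micron slices, well beyond anything current resin printers attempt.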

You probably wouldn't even need any changes to firmware. It's possible that some SLA manufacturers patched their Arduino firmware, but a lot of SLA manufacturers don't really understand the Marlin code and its capabilities that well. G4 should be easily sufficient and fast enough for synchronization points.
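For concreteness, a hypothetical sync point in the G-code stream sent to a stock Marlin board might look like this (illustrative only, not a snippet from any particular host):

```gcode
; Advance one layer, then give the host a known-quiet moment
; before it flips to the next slice image.
G1 Z0.1 F100  ; lift/advance one 100 micron layer
M400          ; wait for all buffered moves to finish
G4 P10        ; dwell 10 ms as the synchronization point
```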

Reliability, well... the weak link in reliability in my experience is not the PC. The weak link is cheap RAMPS daughterboards, cheap Arduino Megas, and cheap UARTs. Diagnosing my recent printer instabilities, I found that after resolving the power issue by moving my USB camera to a hub, the camera worked reliably but my USB connection stability still sucked. The culprit turned out to be that the RPi really disagrees with the RS232-USB UART used to connect my projector to the RPi.

There is one thing that nanoDLP does that is pretty unique, which is its ability to control stepper drivers directly from the RPi GPIO pins, allowing the RPi itself to be the motion control firmware rather than using an Arduino+Marlin based G-Code controller. Pulsing the pins manually probably requires some pretty tight code given that the RPi doesn't have a real-time OS, so that's pretty cool. Not sure that we need this feature though...
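The tight-timing concern in that last paragraph is easy to see in code. A rough sketch of direct step-pulse generation, with the GPIO write stubbed out (it would go through a library such as Pi4J; none of these class or method names come from nanoDLP or Photonic3D):

```java
// Sketch of driving a stepper's STEP pin directly from GPIO.
// The busy-wait loop is where a non-realtime OS hurts: a preemption
// between pulses stretches the interval and the motor stutters.
public class StepPulser {
    // Nanoseconds between step pulses for a given axis speed.
    static long stepIntervalNanos(double stepsPerMm, double mmPerSec) {
        return (long) (1e9 / (stepsPerMm * mmPerSec));
    }

    // Emit 'steps' pulses at a fixed interval using a spin-wait.
    static void pulse(int steps, long intervalNanos) {
        long next = System.nanoTime();
        for (int i = 0; i < steps; i++) {
            writeStepPin(true);   // rising edge triggers one step
            writeStepPin(false);
            next += intervalNanos;
            while (System.nanoTime() < next) { /* spin until the next edge */ }
        }
    }

    // Stub: a real implementation would call a GPIO library here.
    static void writeStepPin(boolean high) { }

    public static void main(String[] args) {
        // 200 steps/mm at 5 mm/s -> 1000 steps/s -> 1 ms between pulses
        System.out.println(stepIntervalNanos(200, 5));
    }
}
```

At 1 ms between pulses there is plenty of slack, but at high microstepping and high speeds the interval shrinks into the tens of microseconds, which is where doing this on Linux without a real-time kernel gets impressive.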

WesGilster commented 8 years ago

I actually enjoy these conversations quite a bit, although sadly JMKao has beaten me to the punch... :( This conversation is generally going in the direction that the C vs. Java conversation usually goes, so obviously I'm not going to shy away from that. On the other hand, I'll also try to add some more depth to the conversation and fill in a couple of things that JMKao didn't mention.

Portability: He's pretty much right here... Java runs on anything from Android to AS/400 or HP NonStop and everything in between (many times without modification).

Speed: This one is completely loaded...

  1. I haven't seen the "layer change" requirements from Carbon3D. I'm not sure that anyone has.
  2. I'm not sure how precisely he's defining "layer change". Because CWH runs its operations in parallel, a layer change consists of three extremely fast synchronous steps: a Runnable submission, the wait()/notify() sync operations, and a single display refresh. The wait()/notify() operations are measured in hundreds of nanoseconds on something like a Raspberry Pi, and the other operations can't take more than a few ms either. So I assume he defines "layer change" to mean the longest-running action executed during a slice? That's good news, because that measurement is completely irrelevant in CWH (as long as the exposure time is longer). Once again, CWH runs those operations in parallel with the exposure. That's how I'm getting away with slicing STLs during that time. I will admit that STL slice-on-the-fly is not yet possible at Carbon3D speeds, but nobody has even attempted slice-on-the-fly except for CWH anyway. If anyone is interested in seeing the timing of those three operations with a NOOP in parallel, I could put that together. Let's just say I'm not concerned with performance...
  3. Now, let's ignore my first two points, CWH's parallel processing, and JMKao's clever video-pipelining theory, and just accept this guy's numbers at face value with my above definition of "layer change". We've all seen the video. Is Carbon3D printing at 10ms per layer? Run the video again and you can find out. I can't remember what their layer thickness is, so let's say it's 100 microns. If 10ms per layer has somehow become the requirement for Carbon3D printing, that means they're printing at 10mm per second. The 3D printing industry could only wish...
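The wait()/notify() handoff described in point 2 can be sketched in a few lines. This is a minimal illustration of the pattern, with made-up class names rather than CWH's actual code: one thread prepares the next layer while the exposure runs, then the exposure thread picks it up.

```java
// Toy version of overlapping layer preparation with exposure:
// a worker renders layer N+1 while layer N is exposing, and the
// threads meet at a wait()/notify() handoff.
public class LayerPipeline {
    private final Object lock = new Object();
    private boolean nextLayerReady = false;

    // Worker thread: slice/render the next layer, then signal.
    void prepareNextLayer() {
        // ... slicing and rendering would happen here, during exposure ...
        synchronized (lock) {
            nextLayerReady = true;
            lock.notify();          // wake the exposure thread if it's waiting
        }
    }

    // Exposure thread: blocks only if preparation hasn't finished yet.
    void awaitNextLayer() throws InterruptedException {
        synchronized (lock) {
            while (!nextLayerReady) {
                lock.wait();        // near-instant when the notify already fired
            }
            nextLayerReady = false; // consume the prepared layer
        }
    }

    public static void main(String[] args) throws Exception {
        LayerPipeline p = new LayerPipeline();
        new Thread(p::prepareNextLayer).start();
        p.awaitNextLayer();         // returns as soon as the worker notifies
        System.out.println("layer handoff complete");
    }
}
```

The point of the pattern: when preparation finishes before the exposure does (the normal case), the handoff itself is just a monitor acquire, a flag check, and a release, which is why the only step that really costs anything at layer-change time is the display refresh.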

Synchronization: His statement could be true or false, but it's really too vague to tell what hardware or what timing he's talking about. Both applications run on a non-realtime OS, which means neither of us is guaranteed predictable time slices. That further means that if either of our applications is preempted in the middle of a precisely timed operation, we could be in trouble. The fact of the matter is that you can count on Java performing its GPIO operations at a consistently slower speed than C (http://benchmarksgame.alioth.debian.org/u64q/java.html), just as Java will generally outperform Python (https://benchmarksgame.alioth.debian.org/u64q/python.html). Given that the current language of choice for GPIO in the Raspberry Pi community is Python, this argument becomes pretty academic without specifics. So in the event that there is some piece of hardware that evades Java's performance, I'll skip right over C, implement the function in assembler, and link it through JNI or JNA. Then we'll be on the other side of the speed vs. portability conversation and I'll have to defend why I chose 10 more nanoseconds of performance over using C... (Just in case you didn't catch it, that was a joke. I'd implement the method in C before assembler.)

Reliability: And then we come to the most difficult thing to measure. I'm not sure how Java and C really play into this conversation though. JMKao had some interesting insight in the hardware reliability area, but in my experience it's software that causes the vast majority of production outages. Badly written software causes outages, so don't write bad software...

I don't have an account on buildyourownsla, so feel free to repost or refer to this if you feel it would help.

kloknibor commented 8 years ago

Thanks for the complete explanation! Appreciate it, but this can be closed ;)! Keeps everything more organized :)!